Title: A Bit of Progress in Language Modeling, Extended Version
Research Question: How much can the performance of statistical n-gram language models be improved by combining and refining known modeling techniques?
Methodology: The author explored a range of techniques for improving n-gram language models, including caching, clustering, higher-order n-grams, skipping models, and sentence-mixture models, examining each technique both separately and in combination with the others to find the best variant of each and to probe its limits.
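To make one such combination concrete, here is a minimal, illustrative sketch (not the paper's implementation) of a smoothed trigram model interpolated with bigram, unigram, and unigram-cache components. The paper's models use Katz and Kneser-Ney smoothing with tuned interpolation weights; this sketch uses simple add-alpha smoothing and fixed weights purely to show the structure of the combination.

```python
from collections import Counter, defaultdict


class CachedTrigramLM:
    """Minimal interpolated trigram LM with a unigram cache (illustrative only)."""

    def __init__(self, lambdas=(0.5, 0.3, 0.1, 0.1), alpha=0.01):
        # Interpolation weights for the trigram, bigram, unigram, and cache components.
        self.l3, self.l2, self.l1, self.lc = lambdas
        self.alpha = alpha          # add-alpha smoothing constant (assumed, not from the paper)
        self.uni = Counter()
        self.bi = defaultdict(Counter)
        self.tri = defaultdict(Counter)
        self.vocab = set()

    def train(self, sentences):
        """Count n-grams over tokenized sentences (lists of word strings)."""
        for sent in sentences:
            toks = ["<s>", "<s>"] + sent + ["</s>"]
            for i in range(2, len(toks)):
                w2, w1, w = toks[i - 2], toks[i - 1], toks[i]
                self.uni[w] += 1
                self.bi[w1][w] += 1
                self.tri[(w2, w1)][w] += 1
                self.vocab.add(w)

    def _smoothed(self, counts, w):
        """Add-alpha estimate of P(w) from a Counter of counts."""
        total = sum(counts.values())
        return (counts[w] + self.alpha) / (total + self.alpha * len(self.vocab))

    def prob(self, w, w2, w1, cache):
        """P(w | w2 w1), interpolated with a unigram cache of recently seen words."""
        p_tri = self._smoothed(self.tri[(w2, w1)], w)
        p_bi = self._smoothed(self.bi[w1], w)
        p_uni = self._smoothed(self.uni, w)
        p_cache = self._smoothed(cache, w)   # cache: Counter over the document so far
        return self.l3 * p_tri + self.l2 * p_bi + self.l1 * p_uni + self.lc * p_cache


# Hypothetical usage: train on a toy corpus, then score a word given its context.
lm = CachedTrigramLM()
lm.train([["the", "cat", "sat"], ["the", "cat", "slept"]])
cache = Counter(["the", "cat"])              # words seen so far in the current document
print(lm.prob("sat", "the", "cat", cache))   # P(sat | the cat), boosted by the cache
```

In use, the cache would be updated with each word of the current document as it is processed; a trigram cache, which the results below show has nearly twice the potential of a unigram cache, would instead condition the cache counts on the two preceding words.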
Results: The author found that trigram caches have nearly twice the potential of unigram caches, identified clustering variants that work slightly better than traditional clustering, and showed that n-gram performance plateaus between 5-grams and 7-grams. Among the skipping techniques compared, the best performance was obtained at the 5-gram level. For sentence-mixture models, mixtures of up to 64 sentence types led to improvements.
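As a rough guide to the model forms behind these comparisons (paraphrased general forms, not the paper's exact equations or notation), the cache, skipping, and sentence-mixture components can each be written as interpolations:

```latex
% Unigram cache: interpolate a smoothed trigram with counts from the recent history h
P_{\mathrm{cache}}(w \mid w_{i-2} w_{i-1}, h)
  = \lambda\, P_{\mathrm{smooth}}(w \mid w_{i-2} w_{i-1})
  + (1-\lambda)\, \frac{C_h(w)}{|h|}

% Skipping 5-gram: combine contexts that omit ("skip") some of the preceding words
P_{\mathrm{skip}}(w_i \mid w_{i-4} \dots w_{i-1})
  = \sum_{j} \mu_j\, P(w_i \mid \mathrm{context}_j),
  \qquad \text{e.g. } \mathrm{context}_j = w_{i-4}\, w_{i-3}\, w_{i-1}

% Sentence mixture: a weighted sum over S sentence types, each with its own n-gram model
P(w_1 \dots w_n) = \sum_{j=0}^{S} \sigma_j \prod_{i=1}^{n} P_j(w_i \mid w_{i-2} w_{i-1})
```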
Implications: These findings suggest that combining the techniques yields substantial improvements in language-model performance, which can in turn improve accuracy in applications such as speech recognition, optical character recognition, and machine translation.
Link to Article: https://arxiv.org/abs/cs/0108005v1
Authors: Joshua Goodman
arXiv ID: cs/0108005v1