Large Language Models in Machine Translation

Thorsten Brants, Ashok C. Popat, Peng Xu, Franz Josef Och, Jeffrey Dean
Jason Eisner (ed.)
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), pages 858–867
ACL, Prague, Czech Republic
June 2007

This paper reports on the benefits of large-scale statistical language modeling in machine translation. We propose a distributed infrastructure that we use to train on up to 2 trillion tokens, resulting in language models with up to 300 billion n-grams; the infrastructure provides smoothed probabilities for fast, single-pass decoding. We introduce a new smoothing method, dubbed Stupid Backoff, that is inexpensive to train on large data sets and approaches the quality of Kneser-Ney smoothing as the amount of training data increases.
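
For illustration, here is a minimal in-memory sketch of the Stupid Backoff score described in the abstract: use the relative frequency of the full n-gram when its count is nonzero, otherwise back off to the shorter context scaled by a fixed factor (the paper reports 0.4), bottoming out at unigram relative frequency. The function name, the toy corpus, and the single-machine `Counter` storage are assumptions for this sketch; the paper's actual system shards counts across a distributed serving infrastructure.

```python
from collections import Counter
from typing import Dict, Tuple

ALPHA = 0.4  # fixed backoff multiplier reported in the paper


def stupid_backoff(words: Tuple[str, ...],
                   counts: Dict[Tuple[str, ...], int],
                   total_tokens: int) -> float:
    """Return the (unnormalized) Stupid Backoff score S(w_n | w_1..w_{n-1}).

    `words` is the full n-gram (context plus predicted word); `counts` maps
    n-gram tuples of any order to their corpus frequencies.
    """
    if len(words) == 1:
        # Base case: relative frequency of the unigram in the corpus.
        return counts.get(words, 0) / total_tokens
    ngram_count = counts.get(words, 0)
    if ngram_count > 0:
        # If the full n-gram was seen, score it by its relative frequency
        # with respect to its context.
        return ngram_count / counts[words[:-1]]
    # Otherwise back off to the shorter context, scaled by ALPHA.
    return ALPHA * stupid_backoff(words[1:], counts, total_tokens)


# Toy usage: counts would normally come from the trillion-token corpus.
corpus = "the cat sat on the mat the cat ate".split()
counts: Counter = Counter()
for n in (1, 2, 3):
    for i in range(len(corpus) - n + 1):
        counts[tuple(corpus[i:i + n])] += 1

print(stupid_backoff(("the", "cat", "sat"), counts, len(corpus)))  # seen trigram
print(stupid_backoff(("the", "dog", "sat"), counts, len(corpus)))  # backs off twice
```

Note that these scores are not normalized probabilities, which is what makes the scheme so cheap to compute at the scale the paper targets.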