## Abstract

The authors propose a method to generate a compact, highly reliable language model for speech recognition based on the efficient classification of words. In this method, a word's connectedness with the words immediately before and after it is treated as a separate attribute, and …
Multi-class composite N-gram language model
By Hirofumi Yamamoto; Shuntaro Isogai; Yoshinori Sagisaka
- Book ID
- 108410671
- Publisher
- Elsevier Science
- Year
- 2003
- Language
- English
- File size
- 149 KB
- Volume
- 41
- Category
- Article
- ISSN
- 0167-6393
## Similar items
Standard statistical language modeling techniques suffer from sparse-data problems in tasks where large amounts of domain-specific text are not available. In this paper, we focus on improving the estimation of domain-dependent n-gram models by the selective use of out-of-domain text data. Previous a…
A new \(n\)-gram model of natural language designed to aid speech recognition is presented in which the probabilities are calculated as a weighted average of maximum likelihood probabilities obtained from a training corpus. This simple approach produces a model that can be constructed quickly and is …
We propose a method for creating an N-gram language model for use in a speech-operated question-answering system. We note that input questions to such a system frequently consist of an initial section, relating to the query topic, and a formulaic sentence-final expression that is used i…
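One of the snippets above describes n-gram probabilities computed as a weighted average of maximum-likelihood estimates from a training corpus. As a minimal illustrative sketch of that general idea (the toy corpus, function names, and interpolation weight are ours, not taken from any of the catalogued papers):

```python
# Illustrative sketch: an interpolated n-gram in which the probability of a
# word is a weighted average of maximum-likelihood estimates from a corpus.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

unigrams = Counter(corpus)                     # word counts
bigrams = Counter(zip(corpus, corpus[1:]))     # adjacent-pair counts
total = len(corpus)

def p_ml_unigram(w):
    # Maximum-likelihood unigram estimate: count(w) / N
    return unigrams[w] / total

def p_ml_bigram(w, prev):
    # Maximum-likelihood bigram estimate: count(prev, w) / count(prev)
    return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0

def p_interpolated(w, prev, lam=0.7):
    # Weighted average of bigram and unigram maximum-likelihood estimates.
    return lam * p_ml_bigram(w, prev) + (1 - lam) * p_ml_unigram(w)

print(round(p_interpolated("cat", "the"), 3))
```

Because each component distribution sums to one over the vocabulary, the interpolated probabilities do as well, for any weight between 0 and 1.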