
Multi-class composite N-gram language model

✍ Scribed by Hirofumi Yamamoto; Shuntaro Isogai; Yoshinori Sagisaka


Book ID: 108410671
Publisher: Elsevier Science
Year: 2003
Tongue: English
Weight: 149 KB
Volume: 41
Category: Article
ISSN: 0167-6393



📜 SIMILAR VOLUMES


Multiclass composite N-gram language model
✍ Hirofumi Yamamoto; Yoshinori Sagisaka 📂 Article 📅 2003 🏛 John Wiley and Sons 🌐 English ⚖ 530 KB

Abstract: The authors propose a method to generate a compact, highly reliable language model for speech recognition based on the efficient classification of words. In this method, the connectedness with the words immediately before and after the word is taken to represent separate attributes, and …
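The factorization such class-based models build on can be sketched as follows. This is the standard class-bigram decomposition, P(w | prev) ≈ P(C(w) | C(prev)) · P(w | C(w)), not the paper's exact multi-class scheme, and the word-class map and corpus are toy data invented for illustration:

```python
from collections import defaultdict

# Toy word-to-class map and corpus (invented, not the paper's classes).
word2class = {"the": "DET", "a": "DET", "cat": "NOUN", "dog": "NOUN", "runs": "VERB"}
corpus = ["the", "cat", "runs", "the", "dog", "runs", "a", "cat", "runs"]

# Count class bigrams, class totals, and per-word occurrences.
class_bigram = defaultdict(int)
class_count = defaultdict(int)
word_count = defaultdict(int)
for prev, cur in zip(corpus, corpus[1:]):
    class_bigram[(word2class[prev], word2class[cur])] += 1
for w in corpus:
    class_count[word2class[w]] += 1
    word_count[w] += 1

def p_class_bigram(w, prev):
    """P(w | prev) ~= P(C(w) | C(prev)) * P(w | C(w))."""
    cw, cp = word2class[w], word2class[prev]
    p_cc = class_bigram[(cp, cw)] / sum(
        v for (a, _), v in class_bigram.items() if a == cp)
    p_wc = word_count[w] / class_count[cw]
    return p_cc * p_wc
```

Sharing counts across all words of a class is what keeps the model compact and its estimates reliable when training data are sparse.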

Topic-Dependent-Class-Based N-Gram Language Model
✍ Naptali, W.; Tsuchiya, M.; Nakagawa, S. 📂 Article 📅 2012 🏛 Institute of Electrical and Electronics Engineers 🌐 English ⚖ 865 KB

Relevance weighting for combining multi-domain data for n-gram language modeling
✍ R. Iyer; M. Ostendorf 📂 Article 📅 1999 🏛 Elsevier Science 🌐 English ⚖ 155 KB

Standard statistical language modeling techniques suffer from sparse-data problems in tasks where large amounts of domain-specific text are not available. In this paper, we focus on improving the estimation of domain-dependent n-gram models by the selective use of out-of-domain text data. …
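Iyer and Ostendorf's relevance weighting is more elaborate than this, but the core idea of mixing in-domain and out-of-domain estimates under a weight tuned on held-out data can be sketched as follows; the per-token probabilities are invented toy numbers:

```python
import math

def mix(p_in, p_out, lam):
    """Linear mixture of an in-domain and an out-of-domain estimate."""
    return lam * p_in + (1.0 - lam) * p_out

# Per-token (p_in, p_out) probability pairs on a held-out set (toy data).
heldout = [(0.20, 0.05), (0.01, 0.20)]

# Grid-search the mixture weight maximizing held-out log-likelihood.
best = max((l / 10 for l in range(11)),
           key=lambda lam: sum(math.log(mix(pi, po, lam))
                               for pi, po in heldout))
```

With these numbers the search settles on an intermediate weight: the first token favors the in-domain model and the second the out-of-domain one, so neither extreme wins.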

A weighted average n-gram model of natural language
✍ P. O'Boyle; M. Owens; F.J. Smith 📂 Article 📅 1994 🏛 Elsevier Science 🌐 English ⚖ 403 KB

A new n-gram model of natural language designed to aid speech recognition is presented in which the probabilities are calculated as a weighted average of maximum likelihood probabilities obtained from a training corpus. This simple approach produces a model that can be constructed quickly and is …
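The weighted-average idea can be sketched as below: combine maximum-likelihood estimates from several n-gram orders into one probability. The fixed weights here are invented for illustration; the paper derives its weighting from corpus frequencies:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

# Maximum-likelihood counts for orders 1..3.
ngrams = {n: Counter(tuple(corpus[i:i + n])
                     for i in range(len(corpus) - n + 1))
          for n in (1, 2, 3)}

def p_ml(w, history, n):
    """ML estimate P(w | last n-1 words of history)."""
    ctx = tuple(history[len(history) - (n - 1):]) if n > 1 else ()
    num = ngrams[n][ctx + (w,)]
    den = sum(c for g, c in ngrams[n].items() if g[:-1] == ctx)
    return num / den if den else 0.0

def p_weighted(w, history, weights=(0.2, 0.3, 0.5)):
    """Weighted average of ML estimates over orders 1..3
    (weights chosen arbitrarily here, not as in the paper)."""
    return sum(lam * p_ml(w, history, n)
               for lam, n in zip(weights, (1, 2, 3)))
```

Because the lower-order estimates always contribute, the averaged probability stays nonzero for words the highest-order context has never been followed by, which is the smoothing effect the abstract alludes to.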

Language model adaptation for fixed phrases …
✍ Tomoyosi Akiba; Katunobu Itou; Atsushi Fujii 📂 Article 📅 2007 🏛 John Wiley and Sons 🌐 English ⚖ 547 KB

Abstract: We propose a method for creating an N-gram language model for use in a speech-operated question-answering system. We note that input questions to such a system frequently consist of an initial section, relating to the query topic, and a formulaic sentence-final expression that is used in …