
Language model adaptation for fixed phrases by amplifying partial n-gram sequences

Authors: Tomoyosi Akiba; Katunobu Itou; Atsushi Fujii


Book ID: 104591156
Publisher: John Wiley and Sons
Year: 2007
Language: English
File size: 547 KB
Volume: 38
Category: Article
ISSN: 0882-1666


Abstract

We propose a method for creating an N-gram language model for use in a speech-operated question-answering system. Input questions to such a system frequently consist of an initial section relating to the query topic, followed by a formulaic sentence-final expression that is characteristic of questions (a fixed phrase). While the initial sections can be modeled adequately using the target newspaper corpus, the fixed phrases cannot be modeled adequately from this data source. In this paper we frame the problem as one of adapting a language model built from a generic corpus to fixed phrases, and propose an adaptation method that uses only a hand-crafted list of fixed phrases, rather than attempting the more difficult task of collecting an adaptation corpus. In the proposed method, we determine which sections of the generic corpus correspond to N-gram sequences on the fixed-phrase list and adapt the language model by amplifying the probabilities of those N-grams; this is equivalent to maximum a posteriori (MAP) estimation that treats these partial N-gram sequences from the generic corpus itself as adaptation data. Recognition experiments on spoken questions input to a question-answering system confirm the effectiveness of the proposed method. © 2007 Wiley Periodicals, Inc. Syst Comp Jpn, 38(4): 63–73, 2007; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.20142
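The core idea of the abstract can be illustrated with a small sketch: count N-grams from a generic corpus, then multiply the counts of those N-grams that also occur in a hand-made fixed-phrase list before renormalizing. Adding `tau` copies of a matched count is one simple way to realize the MAP-style amplification described; the function names, the bigram order, and the amplification weight `tau` are illustrative assumptions, not the paper's exact formulation.

```python
from collections import Counter

def bigram_counts(sentences):
    """Collect bigram counts from a tokenized generic corpus,
    with <s>/</s> markers so sentence-final bigrams are counted."""
    bi = Counter()
    for sent in sentences:
        toks = ["<s>"] + sent + ["</s>"]
        bi.update(zip(toks, toks[1:]))
    return bi

def amplify(bi, fixed_phrases, tau=9.0):
    """Amplify counts of bigrams that occur inside any fixed phrase.

    Adding tau * C(bigram) to the original count plays the role of
    reusing the matched generic-corpus n-grams as adaptation data
    (a MAP-style pseudo-count; tau is an assumed tuning weight)."""
    marked = set()
    for phrase in fixed_phrases:
        toks = phrase + ["</s>"]  # fixed phrases are sentence-final
        marked.update(zip(toks, toks[1:]))
    adapted = Counter(bi)
    for bg in bi:
        if bg in marked:
            adapted[bg] += tau * bi[bg]
    return adapted

def bigram_prob(adapted, history):
    """Renormalize adapted counts into P(w | history)."""
    total = sum(c for (h, _), c in adapted.items() if h == history)
    return {w: c / total for (h, w), c in adapted.items() if h == history}
```

For example, with a toy corpus and the fixed phrase `["tell", "me"]`, the conditional probability of ending the sentence after "me" rises from 1/2 (unadapted) to 10/11 with `tau=9.0`, mimicking how the adapted model favors the formulaic question endings without any extra adaptation corpus.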