Efficient speech recognition using subvector quantization and discrete-mixture HMMs
✍ By V. Digalakis, S. Tsakalidis, C. Harizakis, L. Neumeyer
- Publisher
- Elsevier Science
- Year
- 2000
- Language
- English
- File size
- 136 KB
- Volume
- 14
- Category
- Article
- ISSN
- 0885-2308
✦ Synopsis
This paper introduces a new form of observation distributions for hidden Markov models (HMMs), combining subvector quantization and mixtures of discrete distributions. Contrary to what is generally believed, we show that discrete-distribution HMMs can outperform continuous-density HMMs at significantly faster decoding speeds. Performance of the discrete HMMs is improved by using product-code vector quantization (VQ) and mixtures of discrete distributions. The decoding speed of the discrete HMMs is also improved by quantizing subvectors of coefficients, since this reduces the number of table lookups needed to compute the output probabilities. We present efficient training and decoding algorithms for the discrete-mixture HMMs (DMHMMs). Our experimental results in the air-travel information domain show that the high level of recognition accuracy of continuous-mixture-density HMMs (CDHMMs) can be maintained at significantly faster decoding speeds. Moreover, we show that when the same number of mixture components is used in DMHMMs and CDHMMs, the new models exhibit superior recognition performance.
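The core idea in the synopsis — splitting the feature vector into subvectors, quantizing each with a small codebook, and then computing a state's output probability as a mixture of products of table lookups — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the dimensions, codebook sizes, and randomly initialized parameters are all assumptions chosen for brevity.

```python
import numpy as np

# Hypothetical toy sizes (not taken from the paper).
D = 6    # feature-vector dimension
S = 3    # number of subvectors, each of D // S coefficients
K = 2    # discrete-mixture components per HMM state
C = 16   # codewords per subvector codebook

rng = np.random.default_rng(0)

# Product-code VQ: one small codebook per subvector.
codebooks = rng.normal(size=(S, C, D // S))

# Discrete-mixture parameters for a single HMM state:
# mixture weights c_k and, per component, a discrete
# distribution over codewords for each subvector.
weights = np.full(K, 1.0 / K)
tables = rng.dirichlet(np.ones(C), size=(K, S))

def quantize(obs):
    """Map each subvector of obs to its nearest codeword index."""
    subs = obs.reshape(S, D // S)
    return np.array([
        np.argmin(np.sum((codebooks[s] - subs[s]) ** 2, axis=1))
        for s in range(S)
    ])

def output_prob(obs):
    """b(o) = sum_k c_k * prod_s P_k^s(q_s(o_s)) -- only table lookups
    after the subvector quantization, which is shared by all states."""
    idx = quantize(obs)
    per_component = np.prod(tables[:, np.arange(S), idx], axis=1)
    return float(weights @ per_component)

p = output_prob(rng.normal(size=D))
```

Because the quantization indices are computed once per frame and reused by every state, each state's likelihood costs only `K * S` table lookups, which is the source of the decoding speedup the synopsis describes.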