Speech recognition using recurrent neural prediction model
By Toru Uchiyama; Haruhisa Takahashi
- Book ID
- 104591140
- Publisher
- John Wiley and Sons
- Year
- 2003
- Language
- English
- File size
- 1007 KB
- Volume
- 34
- Category
- Article
- ISSN
- 0882-1666
- DOI
- 10.1002/scj.1194
Abstract
The neural prediction model (NPM) proposed by Iso and Watanabe is a successful example of a speech recognition neural network with a high recognition rate. This model uses multilayer perceptrons for pattern prediction (not for pattern recognition), and achieves a recognition rate as high as 99.8% for speaker-independent isolated words. This paper proposes a recurrent neural prediction model (RNPM) and a recurrent network architecture for this model. The proposed model greatly reduces the size of the network while matching the high recognition rate of the original model and learning efficiently, for speaker-independent isolated words. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(2): 100–107, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.1194
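The prediction-based recognition idea behind the NPM — train one predictor per word, then classify an utterance by which predictor anticipates its frames with the least accumulated error — can be sketched as below. This is a minimal toy, not the authors' implementation: it uses a linear one-frame-ahead predictor in place of the NPM's multilayer perceptrons, and synthetic frame sequences in place of real speech features; all names, dimensions, and dynamics here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4  # toy feature-vector (frame) dimension; real systems use e.g. cepstral features

def random_dynamics(dim):
    """Random stable linear dynamics standing in for one word's frame-to-frame evolution."""
    q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    return 0.95 * q  # orthogonal matrix scaled below 1 so trajectories stay bounded

def synth_utterance(A, T=30, noise=0.01):
    """Generate a toy utterance: frames following x[t] = A @ x[t-1] + small noise."""
    x = rng.normal(size=A.shape[0])
    frames = [x]
    for _ in range(T - 1):
        x = A @ x + noise * rng.normal(size=A.shape[0])
        frames.append(x)
    return np.stack(frames)

def train_predictor(frames, steps=500, lr=0.02):
    """Fit a linear one-frame-ahead predictor W by gradient descent on squared error.
    (The NPM trains an MLP predictor per word; a linear map keeps the sketch short.)"""
    dim = frames.shape[1]
    W = np.zeros((dim, dim))
    X, Y = frames[:-1], frames[1:]  # predict each frame from its predecessor
    for _ in range(steps):
        W -= lr * (X @ W.T - Y).T @ X
    return W

def prediction_error(W, frames):
    """Accumulated squared one-step prediction error over the whole utterance."""
    residual = frames[1:] - frames[:-1] @ W.T
    return float(np.sum(residual ** 2))

def recognize(models, frames):
    """Classify as the word whose predictor explains the frames with least error."""
    return min(models, key=lambda w: prediction_error(models[w], frames))

# Toy vocabulary: each word gets its own dynamics and its own trained predictor.
dynamics = {w: random_dynamics(DIM) for w in ("word_a", "word_b")}
models = {w: train_predictor(synth_utterance(A)) for w, A in dynamics.items()}

print(recognize(models, synth_utterance(dynamics["word_a"])))
```

The design point this illustrates is the one the abstract makes: the networks are used for *prediction*, and recognition falls out of comparing prediction errors across per-word models, rather than from a direct classifier output.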
SIMILAR VOLUMES
A speech recognizer is developed using a layered feedforward neural network to implement speech-frame prediction. A Markov chain is used to control changes in the network's weight parameters. We postulate that speech recognition accuracy is closely linked to the capability of the predictive model in represe