Direct modelling of output context dependence in discriminative hidden Markov model
Written by GuoDong Zhou
- Publisher
- Elsevier Science
- Year
- 2005
- Language
- English
- Size
- 309 KB
- Volume
- 26
- Category
- Article
- ISSN
- 0167-8655
Synopsis
This paper proposes a discriminative HMM that directly models output context dependence. The discriminative HMM assumes mutual information independence in its output model: a ''hidden'' state depends only on the outputs and is independent of the other ''hidden'' states. As a result, it overcomes the output-context independence assumption of the traditional generative HMM. In addition, a dynamic back-off modelling algorithm based on the constraint relaxation principle is proposed to resolve the data sparseness problem that arises in the discriminative HMM from directly modelling output context dependence in its output model. Evaluations on part-of-speech tagging and phrase chunking show that the discriminative HMM effectively captures output context dependence through its output-context-dependent output model and the dynamic back-off modelling algorithm.
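To make the decomposition concrete, the sketch below scores a state sequence under a mutual-information-independence assumption of the kind the synopsis describes, where log P(S|O) splits into a generative chain prior over states plus output-conditioned state scores minus state marginals. All probability tables, the two-state tag set, and the single-output conditioning are illustrative assumptions, not the paper's actual model or data.

```python
import math
from itertools import product

# Toy discriminative-HMM sketch (assumed numbers, for illustration only).
# Under mutual information independence, the score decomposes as:
#   log P(S | O) = log P(S) - sum_i log P(s_i) + sum_i log P(s_i | O)

STATES = ["N", "V"]

# First-order state chain prior P(s_i | s_{i-1}) and start probabilities (assumed).
TRANS = {("N", "N"): 0.3, ("N", "V"): 0.7, ("V", "N"): 0.8, ("V", "V"): 0.2}
START = {"N": 0.6, "V": 0.4}

# Marginal state probabilities P(s_i) (assumed).
MARG = {"N": 0.55, "V": 0.45}

# Output-conditioned state model; the paper conditions on output *context*,
# here a single-output toy table P(s_i | o_i) stands in (assumed).
COND = {
    ("dogs", "N"): 0.9, ("dogs", "V"): 0.1,
    ("bark", "N"): 0.3, ("bark", "V"): 0.7,
}

def score(states, outputs):
    """Sequence score under the mutual-information-independence decomposition."""
    s = math.log(START[states[0]])
    for prev, cur in zip(states, states[1:]):
        s += math.log(TRANS[(prev, cur)])           # chain prior log P(S)
    for st, out in zip(states, outputs):
        s += math.log(COND[(out, st)]) - math.log(MARG[st])  # output term
    return s

def decode(outputs):
    """Exhaustive argmax over state sequences (Viterbi would do the same)."""
    return max(product(STATES, repeat=len(outputs)),
               key=lambda seq: score(seq, outputs))

print(decode(["dogs", "bark"]))  # -> ('N', 'V') with these toy numbers
```

With real data the exhaustive search would be replaced by Viterbi decoding, and sparse output-context counts are where the paper's dynamic back-off modelling would apply.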
SIMILAR VOLUMES
In this paper we present a training method and a network architecture for estimating context-dependent observation probabilities in the framework of a hybrid hidden Markov model (HMM)/multi-layer perceptron (MLP) speaker-independent continuous speech recognition system. The context-dependent modeling…
This paper describes, and evaluates on a large scale, the lattice-based framework for discriminative training of large vocabulary speech recognition systems based on Gaussian mixture hidden Markov models (HMMs). This paper concentrates on the maximum mutual information estimation (MMIE) criterion…