Synthesizing multimodal utterances for conversational agents
Authors: Stefan Kopp; Ipke Wachsmuth
- Publisher: John Wiley and Sons
- Year: 2004
- Language: English
- File size: 442 KB
- Volume: 15
- Category: Article
- ISSN: 1546-4261
- DOI: 10.1002/cav.6
## Similar Articles
### Abstract

This paper describes a generic model for personality, mood, and emotion simulation for conversational virtual humans. We present a generic model for updating the parameters related to emotional behaviour, as well as a linear implementation of the generic update mechanisms. We explore how …
In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented, and used to simulate di…
### Abstract

People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of non-verbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously in a…