
Synthesizing multimodal utterances for conversational agents

✍ Scribed by Stefan Kopp; Ipke Wachsmuth


Publisher
John Wiley and Sons
Year
2004
Tongue
English
Weight
442 KB
Volume
15
Category
Article
ISSN
1546-4261



📜 SIMILAR VOLUMES


Generic personality and emotion simulation for conversational agents
✍ Arjan Egges; Sumedha Kshirsagar; Nadia Magnenat-Thalmann 📂 Article 📅 2004 🏛 John Wiley and Sons 🌐 English ⚖ 419 KB

Abstract: This paper describes a generic model for personality, mood and emotion simulation for conversational virtual humans. We present a generic model for updating the parameters related to emotional behaviour, as well as a linear implementation of the generic update mechanisms. We explore how…

An agent-based architecture for multimodal interaction
✍ Catholijn M. Jonker; Jan Treur; Wouter C.A. Wijngaards 📂 Article 📅 2001 🏛 Elsevier Science 🌐 English ⚖ 579 KB

In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate…

Specifying and animating facial signals
✍ Doug DeCarlo; Matthew Stone; Corey Revilla; Jennifer J. Venditti 📂 Article 📅 2004 🏛 John Wiley and Sons 🌐 English ⚖ 235 KB

Abstract: People highlight the intended interpretation of their utterances within a larger discourse by a diverse set of non-verbal signals. These signals represent a key challenge for animated conversational agents because they are pervasive, variable, and need to be coordinated judiciously…