Chapter title | Real time multimodal interaction with animated virtual human
Authors | Jin, L. and Wen, Z.
Editors | Banissi, E., Burkhard, R.A., Ursyn, A., Zhang, J.J., Bannatyne, M., Maple, C., Cowell, A.J., Tian, G.Y. and Hou, M.
Abstract | This paper describes the design and implementation of a real-time animation framework in which an animated virtual human is capable of performing multimodal interactions with a human user. The animation system consists of several functional components, namely perception, behaviour generation, and motion generation. The virtual human agent in the system has a complex underlying geometry structure with multiple degrees of freedom (DOFs). It relies on a virtual perception system to capture information from its environment and responds to the human user's commands with a combination of non-verbal behaviours, including co-verbal gestures, posture, body motions, and simple utterances. A language processing module is incorporated to interpret the user's commands. In particular, an efficient motion generation method has been developed that combines motion-captured data with parameterized actions generated in real time to produce variations in the agent's behaviour depending on its momentary emotional state.
Book title | Proceedings of the Information Visualization (IV'06)
Page range | 557-562
Year | 2006
Publisher | IEEE
Publication dates |
Published | 2006
Place of publication | Los Alamitos, USA
ISBN | 0769526020
Digital Object Identifier (DOI) | https://doi.org/10.1109/IV.2006.88
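The motion generation method summarised in the abstract, blending motion-captured data with parameterized actions generated in real time and modulated by the agent's momentary emotional state, can be illustrated with a short sketch. The code below is not the authors' implementation: the joint set, the `EmotionState` fields, the `parameterized_wave` gesture, and the arousal-weighted per-joint blend are all assumptions chosen only to make the idea concrete.

```python
# Illustrative sketch only (hypothetical names, not the paper's code):
# blend a motion-captured clip with a parameterized procedural action,
# weighted by a simple emotional state, to vary behaviour per frame.
import math
from dataclasses import dataclass
from typing import Dict, List

Pose = Dict[str, float]  # joint name -> rotation angle (radians), one DOF per joint


@dataclass
class EmotionState:
    """Momentary emotional state (assumed two-dimensional for illustration)."""
    arousal: float = 0.5   # 0 = calm, 1 = excited; scales the procedural override
    valence: float = 0.0   # -1 = negative, +1 = positive; scales gesture amplitude


def sample_mocap(clip: List[Pose], t: float, duration: float) -> Pose:
    """Linearly interpolate a captured clip at normalized time t / duration."""
    u = max(0.0, min(1.0, t / duration)) * (len(clip) - 1)
    i, frac = int(u), u - int(u)
    j = min(i + 1, len(clip) - 1)
    return {k: (1 - frac) * clip[i][k] + frac * clip[j][k] for k in clip[i]}


def parameterized_wave(t: float, amplitude: float, frequency: float) -> Pose:
    """Procedurally generated gesture: a simple arm wave whose amplitude
    and frequency are runtime parameters that emotion can drive."""
    phase = 2 * math.pi * frequency * t
    return {
        "r_shoulder": amplitude * 0.8,
        "r_elbow": amplitude * (0.5 + 0.5 * math.sin(phase)),
    }


def blend_pose(mocap: Pose, procedural: Pose, emotion: EmotionState) -> Pose:
    """Per-joint blend: joints covered by the procedural action are mixed in
    proportionally to arousal; all other joints keep the captured values."""
    w = emotion.arousal
    out = dict(mocap)
    for joint, angle in procedural.items():
        out[joint] = (1 - w) * mocap.get(joint, 0.0) + w * angle
    return out


if __name__ == "__main__":
    # Two keyframes of a captured "idle" clip (toy data standing in for mocap).
    idle_clip = [
        {"r_shoulder": 0.10, "r_elbow": 0.20, "spine": 0.00},
        {"r_shoulder": 0.15, "r_elbow": 0.25, "spine": 0.05},
    ]
    emotion = EmotionState(arousal=0.8, valence=0.6)
    for frame in range(5):
        t = frame / 30.0  # 30 fps playback clock
        mocap = sample_mocap(idle_clip, t, duration=2.0)
        wave = parameterized_wave(t, amplitude=1.0 + 0.5 * emotion.valence,
                                  frequency=1.5)
        print(frame, blend_pose(mocap, wave, emotion))
```

In a full pipeline of the kind the abstract describes, a blend like this would run each frame inside the motion generation component, after the behaviour generation module has chosen which captured clip and which parameterized action the agent should perform in response to the perceived user command.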