Intelligent Virtual Agents: 8th International Conference, IVA 2008, Tokyo, Japan, September 1-3, 2008, Proceedings (Lecture Notes in Computer Science, 5208) 3540854827, 9783540854821

This book constitutes the refereed proceedings of the 8th International Conference on Intelligent Virtual Agents, IVA 2008, held in Tokyo, Japan, in September 2008.


English Pages 574 [572] Year 2008


Table of contents :
Title Page
Preface
Organization
Table of Contents
The Relation between Gaze Behavior and the Attribution of Emotion: An Empirical Study
Introduction
Related Work
Expressive Gaze Model
Gaze Warping Transformation
Procedural Model of Eye Movement
Approach
Structure of Model
Motion Capture Collection
Animation Generation
Category Formation
Emotional Attribution Experiment
Results
Dimensional Results
Categorical Results
Discussion
Conclusion
References
Affect Simulation with Primary and Secondary Emotions
Introduction and Motivation
Background and Related Work
Psychological and Neurobiological Background
Related Work
The WASABI Architecture
Nine Primary Emotions
Three Secondary Emotions
Emotion Dynamics and Awareness Likelihood
Connecting Cognition and Embodiment
Discussion and Conclusion
User Study of AffectIM, an Emotionally Intelligent Instant Messaging System
Introduction and Motivation
Rule-Based Approach to Affect Sensing from Text
Instant Messaging Application Integrated with the Affect Analysis Model
User Study of the AffectIM System
Experimental Design, Subjects and Procedure
Analysis of Results
Discussion and Conclusions
References
Expressions of Empathy in ECAs
Introduction
Architecture of Empathic Agent
Evaluation
Scenario Set-Up
Procedure
Results
Discussion
Conclusions
Archetype-Driven Character Dialogue Generation for Interactive Narrative
Introduction
Related Work
CRYSTAL ISLAND Narrative-Centered Learning Environment
Archetype-Driven Character Dialogue Generation
Archetype-Based Character Representation
Probabilistic Unification Grammars for Character Dialogue Generation
An Illustrative Scenario
Conclusions and Future Work
References
Towards a Narrative Mind: The Creation of Coherent Life Stories for Believable Virtual Agents
Introduction
Related Work
Modelling Long-Term Memory
Narrative Agents and Autobiographic Memory
Narrative Autobiographic Memory for IVAs
Design and On-Going Implementation
Preliminary Results
Evaluation Plan
Conclusion and Future Work
Emergent Narrative as a Novel Framework for Massively Collaborative Authoring
Introduction
Authoring Emergent Narratives
Unpredictability: The Conceptual Authoring Challenge
Knowledge Representation: The Technical Authoring Challenge
Rehearsal Based Authoring
Massively Collaborative Authoring
Other Collaborative Authoring Projects
Advantages of Collaboration
Avoiding Incoherence
Ignorance Is Useful
Conclusion
Future Work
Towards Real-Time Authoring of Believable Agents in Interactive Narrative
Introduction
Authoring Interactive Narrative
Authoring Believable Worlds: A Question of Language
Authoring Believable Agents
The Cinematic Authoring Process
Technical Discussion
Authoring Interoperability
Heterogeneous Collaboration
Narratoria in Production, Preliminary Results
Related Work
Future Work
References
Male Bodily Responses during an Interaction with a Virtual Woman
Introduction
Related Work
Methodology
Body Movement Annotations
Independent and Explanatory Variables
Assessing Body Movements
Results
Discussion
Conclusion and Future Work
A Virtual Agent for a Cooking Navigation System Using Augmented Reality
Introduction
Related Works
System Overview
State Recognition of Cooking
Sensing Environment for the State Recognition
State and Transition Model of Cooking
The Virtual Agent
A Wearable Camera and Markers for the Agent
The User Interface
Demonstration
Conclusion and Future Works
Social Perception and Steering for Online Avatars
Introduction
Related Work
Automating Avatar Control
Dynamic Group Behavior
Approach
Underlying Social Theory
The Social Simulation and Visualization Platform
Social Situations and Activation of Steering Behaviors
Conversation Force Field
Conversation Unit
Conclusions and Future Work
Multi-party, Multi-issue, Multi-strategy Negotiation for Multi-modal Virtual Agents
Introduction
Multi-modal Multi-party Dialogue Model
Multi-party Negotiation Strategies
Implementing the Strategies
Factors in Strategy Selection
Multi-issue Utilities
Models of Other Agents
Choosing Strategies
Dialogue Realizations of Strategies
The SASO-EN Three-Party Negotiation Domain
Evaluation
Limitations, Related and Future Work
A Granular Architecture for Dynamic Realtime Dialogue
Introduction
Related Work
Theoretical Underpinnings
The Architecture
Generating Dialogues for Virtual Agents Using Nested Textual Coherence Relations
Introduction
The Text2Dialogue System
Text Analysis Component
Mapper Component
Assigning Turns to Speakers
Preliminary Evaluation
Conclusions
Integrating Planning and Dialogue in a Lifestyle Agent
Introduction
Reasoning on User Activities with HTN Planning
Ad Hoc Dialogue Control Generation
Refining and Updating the Activity Model through Dialogue
Integrating Planning and Dialogue
Results and Discussion
Relationship to Previous Work
Conclusions
References
Audio Analysis of Human/Virtual-Human Interaction
Introduction
A Signal Processing Approach to H-VH Audio Analysis
User Study
Conclusions and Future Work
Learning Smooth, Human-Like Turntaking in Realtime Dialogue
Introduction
Related Work
System Architecture
Perception
Deciders
The Learner
Evaluation Setup
Parameter Settings
Results from System Interacting with Self
Comparison to Human-Human Interaction Study
Conclusions and Future Work
Predicting Listener Backchannels: A Probabilistic Multimodal Approach
Introduction
Previous Work
Prediction Model
Encoding Dictionary
Automatic Feature Selection
Generating Listener Backchannel
Experiments
Data Collection
Speaker Features and Listener Backchannels
Methodology
Results and Discussion
Conclusion
IGaze: Studying Reactive Gaze Behavior in Semi-immersive Human-Avatar Interactions
Introduction
The IGaze System
Gaze Strategies
Experiment
Pilot Study
Main Study
Discussion
Conclusion
Estimating User’s Conversational Engagement Based on Gaze Behaviors
Introduction
Related Research
Wizard-of-Oz Experiment to Collect Verbal/Nonverbal Data
Experiment
Verbal/Nonverbal Data
Analysis
User-Adapted Engagement Estimation
Evaluation
Conclusion and Future Work
References
The Effects of Agent Nonverbal Communication on Procedural and Attitudinal Learning Outcomes
Introduction
Method
Results
Discussion
References
Evaluating Data-Driven Style Transformation for Gesturing Embodied Agents
Introduction
Related Work
Style Transformation Overview
Perceptive Evaluation of Style Transformation
Experimental Setup
Hypothesis
Results
Discussion
Conclusion and Perspectives
Culture-Specific First Meeting Encounters between Virtual Agents
Introduction
Related Work
Combining an Empirical and a Theoretical Approach
Empirical Approach
Theoretical Approach
Implementing Culture-Specific First Meeting Encounters
Behavior Planning
Behavior Selection and Modification
TTS
Scheduler
Evaluation
Conclusion
Virtual Humans Elicit Skin-Tone Bias Consistent with Real-World Skin-Tone Biases
Introduction
Previous Work
Study Design
Population
Metrics
Procedure
Results
Discussion
Conclusions and Future Work
Cross-Cultural Evaluations of Avatar Facial Expressions Designed by Western Designers
Introduction
Experiment Overview
Experimental Procedure
Avatar and Expression Design
Results
Analysis of Average Matching Rates by Designs
Analysis of Expressions by Recognition Accuracy
Discussion
Conclusion
References
Agreeable People Like Agreeable Virtual Humans
Introduction
Related Work and Research Questions
Experimental Design
Participants and Procedure
Equipment
Measurements
Response Variables
Explanatory Variable
Results
Correlations between the Personality Traits and Self-reported Rapport
Correlations between the Personality Traits and Behavioral Measures of Rapport
Conclusions
References
A Listening Agent Exhibiting Variable Behaviour
Introduction
Sensitive Artificial Listener
Background
Personal Tendencies in Behaviour
Listener’s Behaviour
Related Work
Overall System Description
Agent Definition
Backchannel Generator
Listener Signal Selection
Conclusion and Future Work
The Next Step towards a Function Markup Language
Introduction
Towards an Inventory of FML Tags
Legacy
Tags Proposed
Issues
The Role of Context
Intentions
Communicative Actions
Architecture
Conclusion
Extending MPML3D to Second Life
Introduction
Related Work
Second Life Overview
Infrastructure
Programming Interfaces
Entities and Resources
MPML3D Framework with SL-Extension
MPML3D Backend
Frontends
SL Frontend
Challenges in SL
Conclusions
An Extension of MPML with Emotion Recognition Functions Attached
Introduction
The MIML System
System Architecture
Tag Structure
Emotion Recognition Functions
Illustration
Conclusion
ITACO: Effects to Interactions by Relationships between Humans and Artifacts
Introduction
ITACO System
Constructions of the ITACO System
Experiment 1: Migrating to Robots
Design of an Experiment
Procedures
Evaluation Methods
Hypotheses and Predictions
Results and Discussions
Experiment 2: Migrating to Lamps
Design of an Experiment
Procedures
Evaluation Methods
Hypotheses and Predictions
Results and Discussions
Discussions and Conclusions
References
Teaching a Pet Robot through Virtual Games
Introduction
Related Work
Framework Design and Implementation
The Training Tasks
Advantages of Virtual Training Tasks
Selected Game Tasks
The Learning Method
Reward Recognition Learning
Reward Association Learning
Experiments
Results
Conclusion
Modeling Self-deception within a Decision-Theoretic Framework
Introduction
Self Deception Framework
Desired Belief Formulation
Belief Integration and Decision-Making
Simulation
Scenario Setup
Simulation Results and Discussion
Related Work
Future Work
Alternative Decision Models
Relaxed Formulation of Desired Belief
Conclusion
Modeling Appraisal in Theory of Mind Reasoning
Introduction
Related Work
ThespianAgent
Computational Model of Appraisal
Appraisal Process
Appraisal Dimensions
Sample Results
Scenario 1: Small Talk
Scenario 2: Firing-Squad
Discussion and Future Work
Conclusion
Improving Adaptiveness in Autonomous Characters
Introduction
ORIENT
Architectures
FAtiMA
PSI
FAtiMA-PSI: A Body-Mind Architecture
Related Work
Conclusion and Future Work
The Embodiment of a DUAL/AMBR Based Cognitive Model in the RASCALLI Multi-agent Platform
Introduction
The RASCALLI Architecture
The DUAL/AMBR Based Mind of RASCALLI
Knowledge Representations
Scenario and DUAL/AMBR Mechanisms
Reasoning and Transfer Mechanisms
Complementary Transfer and Evaluation Mechanism
Discussion and Conclusion
BDI Model-Based Crowd Simulation
Introduction
Approach
BDI Model-Based Crowd Simulation Demo System
Overview
Scenario
Architecture
Perceiver
Reasoner
Selector
Executor
Applying BDI Model to Our Demo System
Evaluation and Discussion
Performance Evaluation
Comparison of BDI Model with Other Approaches
Conclusion
The Mood and Memory of Believable Adaptable Socially Intelligent Characters
Introduction
BASIC v2.0 Walkthrough
Model Validation
Conclusions
Visualizing the Importance of Medical Recommendations with Conversational Agents
Introduction and Rationale
System Overview and Architecture
The Document Engineering Environment
The Greta Platform
Related Work
Identifying the Rhetorical Strength of Recommendations
Mapping Rhetorical Structures onto Multimodal Communicative Acts
Definition of Specific Expressions
XSL-Based Transformations and APML Generation
Example Results
An Application to Consensus Judgments of Recommendations’ Strength
Conclusions
References
Evaluation of Justina: A Virtual Patient with PTSD
Introduction and Background
Designing a Patient with Post Traumatic Stress Disorder (PTSD)
Virtual Justina
PTSD Domain
Question / Response Categorization
Method
Participants
Setup
Process and Procedure
Measures
Data Analytics
Results and Evaluation
Assessment of the System
Assessment of Student Questions and the Students
Discussion of Results
Future Work
References
Elbows Higher! Performing, Observing and Correcting Exercises by a Virtual Trainer
Introduction
Modeling Physical Exercises
The Exercise Recordings Corpus
The Training Recordings Corpus
The Architecture of the RVT
Monitoring the User
Providing Feedback
Presenting on Time, in Sync
Preliminary Evaluation
References
A Virtual Therapist That Responds Empathically to Your Answers
Introduction
The Application
The Emotion Model of the Agent
The Resulting Website
Experiment
Discussion
References
IDEAS4Games: Building Expressive Virtual Characters for Computer Games
Motivation
Poker Demonstrator
Poker Algorithms
Modeling of Interaction and Story
Authoring Dialog Content
Authoring Narrative Structure
Game Control
Modeling of Affect
Affect Computation
Authoring Affect
Expressive Synthetic Voices with Reliable Quality
Expressive Domain-Oriented Unit Selection
Robust HMM-Based Synthesis
Discussion and Conclusion
Context-Aware Agents to Guide Visitors in Museums
Introduction
Basic Approach
Background
Design Principles
Deployable Virtual Agent Platform
Agent Host
Context-Aware Agent Deployment
Context-Aware Virtual Agent
User-Preference Part
Annotation Part
Navigation Part
Current Status
Early Experience
Related Work
Conclusion
Virtual Institutions: Normative Environments Facilitating Imitation Learning in Virtual Agents
Introduction
Virtual Institutions as Normative Virtual Worlds That Enable Learning from the Behavior of the Inhabitants
Enabling Imitation Learning with Virtual Institutions
Conclusions and Future Work
Enculturating Conversational Agents Based on a Comparative Corpus Study
Introduction
Analysis of Comparative Corpus
Enculturated Language-Trainer Agents
References
The Reactive-Causal Architecture: Towards Development of Believable Agents
Introduction
The Reactive-Causal Architecture
Conclusion
Gesture Recognition in Flow in the Context of Virtual Theater
Automatic Generation of Conversational Behavior for Multiple Embodied Virtual Characters: The Rules and Models behind Our System
Introduction and Motivation
Behavior Generation
References
Implementing Social Filter Rules in a Dialogue Manager Using Statecharts
Introduction
Implementation
Towards Realistic Real Time Speech-Driven Facial Animation
References
Avatar Customization and Emotions in MMORPGs
Introduction
Relevant Research
Avatar
Emotions and Avatars
Method
Results
Discussion
References
Impact of the Agent’s Localization in Human-Computer Conversational Interaction
Introduction
Method
Impact of the Agent’s Localization
Evolving Expression of Emotions in Virtual Humans Using Lights and Pixels
References
Motivations and Personality Traits in Decision-Making
Introduction
Motivations and Personality Traits
Conclusion
References
A Flexible Behavioral Planner in Real-Time
Introduction
Behavioral Planner
Conclusion
References
Face to Face Interaction with an Intelligent Virtual Agent: The Effect on Learning Tactical Picture Compilation
Introduction
Experiment
Results
Piavca: A Framework for Heterogeneous Interactions with Virtual Characters
Creating and Scripting Second Life Bots Using MPML3D
Motivation
Example Scenario in SL
Piavca: A Framework for Heterogeneous Interactions with Virtual Characters
Interpersonal Impressions of Agents for Developing Intelligent Systems
Introduction
Experiment on Impressions of Replies from the Agent
Discussion
References
Comparing an On-Screen Agent with a Robotic Agent in Non-Face-to-Face Interactions
Introduction
Experiment
Experimental Setting
An On-Screen Agent and a Robotic Agent
Participants
Procedure
Results
Discussion and Conclusions
References
Sustainability and Predictability in a Lasting Human–Agent Interaction
Introduction
Methods
Human–Agent Interaction Experiments
Social Effects of Virtual Assistants. A Review of Empirical Results with Regard to Communication
References
SoNa: A Multi-agent System to Support Human Navigation in a Community, Based on Social Network Analysis
Motivation
Proposed System
Summary
Reference
Animating Unstructured 3D Hand Models
Introduction
Hand Skeleton Extraction and Hand Segmentation
Local Frame Based Hand Skinning and Animation
Conclusions
References
Verification of Expressiveness of Procedural Parameters for Generating Emotional Motions
Motivation and System Overview
Preliminary Experimental Results
References
Individualised Product Portrayals in the Usability of a 3D Embodied Conversational Agent in an eBanking Scenario
Introduction
Experiment Design and Procedure
Results
Conclusions
Multi-agent Negotiation System in Electronic Environments
Introduction
Conclusion
References
A Study of the Use of a Virtual Agent in an Ambient Intelligence Environment
Introduction
Evaluation of the Use of the Virtual Agent
Conclusions
Automatic Torso Engagement for Gesturing Characters
Introduction
Technical Overview
Modeling the Dynamics of Virtual Agent’s Social Relations
Proposal of an Artificial Emotion Expression System Based on External Stimulus and Emotional Elements
Introduction
Artificial Emotion Expression System (AES)
Conclusion
References
A Reactive Architecture Integrating an Associative Memory for Sensory-Driven Intelligent Behavior
Introduction
Experiments
Conclusion
Social Responses to Virtual Humans: Automatic Over-Reliance on the "Human" Category
References
Adaptive Self-feeding Natural Language Generator Engine
Introduction
Natural Language Generation Engine Architecture
Future Work
A Model of Motivation for Virtual-Worlds Avatars
Introduction
Recognising and Reacting to Other Avatar’s Motivation
Conclusions and Future Work
References
Towards Virtual Emotions and Emergence of Social Behaviour
References
Using Virtual Agents for the Teaching of Requirements Elicitation in GSD
Introduction
A Virtual Agent’s Behavior Selection by Using Actions for Focusing of Attention
Action for Focusing of Attention
Conclusion
References
Emergent Narrative and Late Commitment
Introduction
Late Commitment
Usage Examples
“I Would Like to Trust It but” Perceived Credibility of Embodied Social Agents: A Proposal for a Research Framework
Introduction
References
Acceptable Dialogue Start Supporting Agent for Avatar-Mediated Multi-tasking Online Communication
Do You Know How I Feel? Evaluating Emotional Display of Primary and Secondary Emotions
Motivation and Description of Study
Results and Discussion
Comparing Emotional vs. Envelope Feedback for ECAs
References
Intelligent Agents Living in Social Virtual Environments – Bringing Max into Second Life
Introduction and Background
Implementation and Architecture
Max’s First Steps into Second Life – Observations
Author Index


Lecture Notes in Artificial Intelligence Edited by R. Goebel, J. Siekmann, and W. Wahlster

Subseries of Lecture Notes in Computer Science

5208

Helmut Prendinger James Lester Mitsuru Ishizuka (Eds.)

Intelligent Virtual Agents 8th International Conference, IVA 2008 Tokyo, Japan, September 1-3, 2008 Proceedings


Series Editors
Randy Goebel, University of Alberta, Edmonton, Canada
Jörg Siekmann, University of Saarland, Saarbrücken, Germany
Wolfgang Wahlster, DFKI and University of Saarland, Saarbrücken, Germany

Volume Editors
Helmut Prendinger, National Institute of Informatics, Digital Contents and Media Sciences Research Division, 2-1-2 Hitotsubashi, Chiyoda-ku, 101-8430 Tokyo, Japan, E-mail: [email protected]
James Lester, North Carolina State University, Department of Computer Science, Raleigh, NC 27695, USA, E-mail: [email protected]
Mitsuru Ishizuka, University of Tokyo, Department of Information and Communication Engineering, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-8656, Japan, E-mail: [email protected]

Library of Congress Control Number: 2008933373

CR Subject Classification (1998): I.2.11, H.5.1, H.4, K.3
LNCS Sublibrary: SL 7 – Artificial Intelligence
ISSN 0302-9743
ISBN-10 3-540-85482-7 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-85482-1 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springer.com © Springer-Verlag Berlin Heidelberg 2008 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12464449 06/3180 543210

Preface

Welcome to the Proceedings of the 8th International Conference on Intelligent Virtual Agents, which was held on September 1–3, 2008 in Tokyo, Japan. Intelligent virtual agents (IVAs) are autonomous, graphically embodied agents in a virtual environment that are able to interact intelligently with human users, other IVAs, and their environment. The IVA conference series is the major annual meeting of the intelligent virtual agents community, attracting interdisciplinary-minded researchers and practitioners from embodied cognitive modeling, artificial intelligence, computer graphics, animation, virtual worlds, games, natural language processing, and human–computer interaction.

The origin of the IVA conferences dates from a successful workshop on Intelligent Virtual Environments held in Brighton, UK, at the 13th European Conference on Artificial Intelligence (ECAI 1998). This workshop was followed by a second one held in Salford in Manchester, UK, in 1999. Subsequent events took place in Madrid, Spain in 2001, Irsee, Germany in 2003, and Kos, Greece in 2005. Starting in 2006, IVA moved from being a biennial to an annual event and became a full-fledged international conference, held in Marina del Rey, California, USA in 2006, and Paris, France in 2007. Since 2005, IVA has also hosted the Gathering of Animated Lifelike Agents (GALA), an annual festival to showcase the latest animated lifelike agents created by university students and academic or industrial research groups.

IVA 2008 was the first time that IVA was organized in Asia, and we are happy to report that a large number of papers were submitted. IVA 2008 received 99 submissions from Europe, the Americas, and Asia. Out of 71 long paper submissions, only 18 were accepted as long papers; an additional 22 were accepted as short papers and 30 as posters. Out of 23 short paper submissions, we could only accept 6; a further 15 were accepted as posters. In total, 28 short papers and 42 poster papers were accepted.

IVA 2008 was hosted by the National Institute of Informatics, Tokyo. We would like to thank the members of the Program Committee for providing insightful reviews, and the members of the Senior Program Committee for selecting the best papers. We thank Hugo Hernault for preparing the IVA 2008 website and Ulrich Apel for providing the content on Tokyo and the institute. A special thanks goes to two people: Shigeko Tokuda for her excellent work in preparing and running the conference, and Boris Brandherm for his superb and efficient management of the submitted papers and preparation of the proceedings. We would also like to mention that the continued and active support of the IVA Steering Committee at each stage of the conference preparation process is greatly appreciated.


Last but not least, we would like to thank the authors for submitting their high-quality work to IVA 2008. We hope the readers will enjoy the papers in this volume, and look forward to future IVA conferences.

June 2008

Helmut Prendinger James Lester Mitsuru Ishizuka

Organization

Conference Chairs
Helmut Prendinger, National Institute of Informatics, Japan
James Lester, North Carolina State University, USA
Mitsuru Ishizuka, The University of Tokyo, Japan

Senior Program Committee
Elisabeth André, University of Augsburg, Germany
Ruth Aylett, Heriot-Watt University, UK
Marc Cavazza, University of Teesside, UK
Jonathan Gratch, University of Southern California, USA
Stefan Kopp, Bielefeld University, Germany
Jean-Claude Martin, LIMSI-CNRS, France
Patrick Olivier, University of Newcastle upon Tyne, UK
Catherine Pelachaud, University of Paris 8, INRIA, France
Seiji Yamada, National Institute of Informatics, Japan

Best Paper Chair
W. Lewis Johnson, Alelo, Inc., USA

Publicity Chair
Anton Nijholt, University of Twente, The Netherlands

Submissions Chair
Boris Brandherm, National Institute of Informatics, Japan

Local Organizing Committee
Helmut Prendinger, National Institute of Informatics (Local Organization Chair)
Ulrich Apel, National Institute of Informatics
Boris Brandherm, National Institute of Informatics
Hugo Hernault, The University of Tokyo
Shigeko Tokuda, National Institute of Informatics
Sebastian Ullrich, RWTH Aachen University


Invited Speakers
Hiroshi Ishiguro, Osaka University, Japan
Ted Tagami, Millions of Us, USA

Program Committee
Jan Allbeck, Jens Allwood, Norman Badler, Jeremy Bailenson, Amy Baylor, Christian Becker-Asano, Timothy Bickmore, Gaspard Breton, Stéphanie Buisine, Felix Burkhardt, Justine Cassell, Jeffrey Cohn, Angélica de Antonio, Fiorella de Rosis, Zhigang Deng, Doron Friedman, Patrick Gebhard, Marco Gillies, Dirk Heylen, Tetsunari Inamura, Katherine Isbister, Ido Iurgel, Kostas Karpouzis, Michael Kipp, Yasuhiko Kitamura, Takanori Komatsu, Nicole Krämer, Brigitte Krenn, Michael Kruppa, Steve Maddock, Wenji Mao, Andrew Marriott, Stacy Marsella, Hideyuki Nakanishi, Yukiko Nakano, Michael Neff, Toyoaki Nishida, Pierre-Yves Oudeyer, Ana Paiva, Igor Pandzic, Maja Pantic, Sylvie Pesty, Christopher Peters, Paolo Petta, Paul Piwek, Rui Prada, Matthias Rehm, Mark Riedl, Brent Rossen, Martin Rumpler, Zsófia Ruttkay, Marc Schröder, Candy Sidner, Daniel Thalmann, Kristinn Thórisson

IVA Steering Committee
Ruth Aylett, Heriot-Watt University, UK
Jonathan Gratch, University of Southern California, USA
Stefan Kopp, Bielefeld University, Germany
Patrick Olivier, University of Newcastle upon Tyne, UK
Catherine Pelachaud, University of Paris 8, INRIA, France


Held in Cooperation with
The American Association for Artificial Intelligence (AAAI)
The European Association for Computer Graphics (EG)
The Association for Computing Machinery (ACM)
Special Interest Group on Artificial Intelligence (SIGART)
Special Interest Group on Computer-Human Interaction (SIGCHI)


Table of Contents

Emotion and Empathy The Relation between Gaze Behavior and the Attribution of Emotion: An Empirical Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Brent Lance and Stacy C. Marsella Affect Simulation with Primary and Secondary Emotions . . . . . . . . . . . . . Christian Becker-Asano and Ipke Wachsmuth User Study of AffectIM, an Emotionally Intelligent Instant Messaging System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka Expressions of Empathy in ECAs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Radoslaw Niewiadomski, Magalie Ochs, and Catherine Pelachaud

1 15

29 37

Narrative and Augmented Reality Archetype-Driven Character Dialogue Generation for Interactive Narrative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jonathan P. Rowe, Eun Young Ha, and James C. Lester

45

Towards a Narrative Mind: The Creation of Coherent Life Stories for Believable Virtual Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wan Ching Ho and Kerstin Dautenhahn

59

Emergent Narrative as a Novel Framework for Massively Collaborative Authoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michael Kriegel and Ruth Aylett

73

Towards Real-Time Authoring of Believable Agents in Interactive Narrative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Martin van Velsen

81

Male Bodily Responses during an Interaction with a Virtual Woman . . . Xueni Pan, Marco Gillies, and Mel Slater A Virtual Agent for a Cooking Navigation System Using Augmented Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kenzaburo Miyawaki and Mutsuo Sano

89

97

Conversation and Negotiation Social Perception and Steering for Online Avatars . . . . . . . . . . . . . . . . . . . . Claudio Pedica and Hannes Vilhj´ almsson

104



Multi-party, Multi-issue, Multi-strategy Negotiation for Multi-modal Virtual Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . David Traum, Stacy C. Marsella, Jonathan Gratch, Jina Lee, and Arno Hartholt A Granular Architecture for Dynamic Realtime Dialogue . . . . . . . . . . . . . . Kristinn R. Th´ orisson and Gudny Ragna Jonsdottir Generating Dialogues for Virtual Agents Using Nested Textual Coherence Relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hugo Hernault, Paul Piwek, Helmut Prendinger, and Mitsuru Ishizuka

117

131

139

Integrating Planning and Dialogue in a Lifestyle Agent . . . . . . . . . . . . . . . Cameron Smith, Marc Cavazza, Daniel Charlton, Li Zhang, Markku Turunen, and Jaakko Hakulinen

146

Audio Analysis of Human/Virtual-Human Interaction . . . . . . . . . . . . . . . . Harold Rodriguez, Diane Beck, David Lind, and Benjamin Lok

154

Nonverbal Behavior Learning Smooth, Human-Like Turntaking in Realtime Dialogue . . . . . . . Gudny Ragna Jonsdottir, Kristinn R. Th´ orisson, and Eric Nivel

162

Predicting Listener Backchannels: A Probabilistic Multimodal Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Louis-Philippe Morency, Iwan de Kok, and Jonathan Gratch

176

IGaze: Studying Reactive Gaze Behavior in Semi-immersive Human-Avatar Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Michael Kipp and Patrick Gebhard

191

Estimating User’s Conversational Engagement Based on Gaze Behaviors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ryo Ishii and Yukiko I. Nakano

200

The Effects of Agent Nonverbal Communication on Procedural and Attitudinal Learning Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Amy L. Baylor and Soyoung Kim

208

Evaluating Data-Driven Style Transformation for Gesturing Embodied Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alexis Heloir, Michael Kipp, Sylvie Gibet, and Nicolas Courty

215

Models of Culture and Personality Culture-Specific First Meeting Encounters between Virtual Agents . . . . . Matthias Rehm, Yukiko Nakano, Elisabeth Andr´e, and Toyoaki Nishida

223


Virtual Humans Elicit Skin-Tone Bias Consistent with Real-World Skin-Tone Biases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Brent Rossen, Kyle Johnsen, Adeline Deladisma, Scott Lind, and Benjamin Lok Cross-Cultural Evaluations of Avatar Facial Expressions Designed by Western Designers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tomoko Koda, Matthias Rehm, and Elisabeth Andr´e


237

245

Agreeable People Like Agreeable Virtual Humans . . . . . . . . . . . . . . . . . . . . Sin-Hwa Kang, Jonathan Gratch, Ning Wang, and James H. Watt

253

A Listening Agent Exhibiting Variable Behaviour . . . . . . . . . . . . . . . . . . . . Elisabetta Bevacqua, Maurizio Mancini, and Catherine Pelachaud

262

Markup and Representation Languages The Next Step towards a Function Markup Language . . . . . . . . . . . . . . . . . Dirk Heylen, Stefan Kopp, Stacy C. Marsella, Catherine Pelachaud, and Hannes Vilhj´ almsson

270

Extending MPML3D to Second Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sebastian Ullrich, Klaus Bruegmann, Helmut Prendinger, and Mitsuru Ishizuka

281

An Extension of MPML with Emotion Recognition Functions Attached . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xia Mao, Zheng Li, and Haiyan Bao

289

Architectures for Robotic Agents ITACO: Effects to Interactions by Relationships between Humans and Artifacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kohei Ogawa and Tetsuo Ono Teaching a Pet Robot through Virtual Games . . . . . . . . . . . . . . . . . . . . . . . Anja Austermann and Seiji Yamada

296 308

Cognitive Architectures Modeling Self-deception within a Decision-Theoretic Framework . . . . . . . Jonathan Y. Ito, David V. Pynadath, and Stacy C. Marsella

322

Modeling Appraisal in Theory of Mind Reasoning . . . . . . . . . . . . . . . . . . . . Mei Si, Stacy C. Marsella, and David V. Pynadath

334

Improving Adaptiveness in Autonomous Characters . . . . . . . . . . . . . . . . . . Mei Yii Lim, Jo˜ ao Dias, Ruth Aylett, and Ana Paiva

348



The Embodiment of a DUAL/AMBR Based Cognitive Model in the RASCALLI Multi-agent Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stefan Kostadinov and Maurice Grinberg BDI Model-Based Crowd Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kenta Cho, Naoki Iketani, Masaaki Kikuchi, Keisuke Nishimura, Hisashi Hayashi, and Masanori Hattori The Mood and Memory of Believable Adaptable Socially Intelligent Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mark Burkitt and Daniela M. Romano

356 364

372

Agents for Healthcare and Training Visualizing the Importance of Medical Recommendations with Conversational Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gersende Georg, Marc Cavazza, and Catherine Pelachaud Evaluation of Justina: A Virtual Patient with PTSD . . . . . . . . . . . . . . . . . Patrick Kenny, Thomas D. Parsons, Jonathan Gratch, and Albert A. Rizzo Elbows Higher! Performing, Observing and Correcting Exercises by a Virtual Trainer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zs´ ofia Ruttkay and Herwin van Welbergen A Virtual Therapist That Responds Empathically to Your Answers . . . . Matthijs Pontier and Ghazanfar F. Siddiqui

380 394

409 417

Agents in Games, Museums and Virtual Worlds IDEAS4Games: Building Expressive Virtual Characters for Computer Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Patrick Gebhard, Marc Schr¨ oder, Marcela Charfuelan, Christoph Endres, Michael Kipp, Sathish Pammi, Martin Rumpler, and Oytun T¨ urk Context-Aware Agents to Guide Visitors in Museums . . . . . . . . . . . . . . . . . Ichiro Satoh Virtual Institutions: Normative Environments Facilitating Imitation Learning in Virtual Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anton Bogdanovych, Simeon Simoff, and Marc Esteva

426

441

456

Posters Enculturating Conversational Agents Based on a Comparative Corpus Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Afia Akhter Lipi, Yuji Yamaoka, Matthias Rehm, and Yukiko I. Nakano

465


The Reactive-Causal Architecture: Towards Development of Believable Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ali Orhan Aydın, Mehmet Ali Orgun, and Abhaya Nayak Gesture Recognition in Flow in the Context of Virtual Theater . . . . . . . . Ronan Billon, Alexis N´ed´elec, and Jacques Tisseau Automatic Generation of Conversational Behavior for Multiple Embodied Virtual Characters: The Rules and Models behind Our System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Werner Breitfuss, Helmut Prendinger, and Mitsuru Ishizuka Implementing Social Filter Rules in a Dialogue Manager Using Statecharts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jenny Brusk


468 470

472

474

Towards Realistic Real Time Speech-Driven Facial Animation . . . . . . . . . Aleksandra Cerekovic, Goranka Zoric, Karlo Smid, and Igor S. Pandzic

476

Avatar Customization and Emotions in MMORPGs . . . . . . . . . . . . . . . . . . Shun-an Chung and Jim Jiunde Lee

479

Impact of the Agent’s Localization in Human-Computer Conversational Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Aur´elie Cousseau and Fran¸cois Le Pichon

481

Evolving Expression of Emotions in Virtual Humans Using Lights and Pixels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Celso de Melo and Jonathan Gratch

484

Motivations and Personality Traits in Decision-Making . . . . . . . . . . . . . . . Etienne de Sevin

486

A Flexible Behavioral Planner in Real-Time . . . . . . . . . . . . . . . . . . . . . . . . . Etienne de Sevin

488

Face to Face Interaction with an Intelligent Virtual Agent: The Effect on Learning Tactical Picture Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . Willem A. van Doesburg, Rosemarijn Looije, Willem A. Melder, and Mark A. Neerincx Creating and Scripting Second Life Bots Using MPML3D . . . . . . . . . . . . . Birgit Endrass, Helmut Prendinger, Elisabeth Andr´e, and Mitsuru Ishizuka Piavca: A Framework for Heterogeneous Interactions with Virtual Characters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marco Gillies, Xueni Pan, and Mel Slater

490

492

494



Interpersonal Impressions of Agents for Developing Intelligent Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Kaoru Sumi and Mizue Nagata

496

Comparing an On-Screen Agent with a Robotic Agent in Non-Face-to-Face Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Takanori Komatsu and Yukari Abe

498

Sustainability and Predictability in a Lasting Human–Agent Interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Toshiyuki Kondo, Daisuke Hirakawa, and Takayuki Nozawa

505

Social Effects of Virtual Assistants. A Review of Empirical Results with Regard to Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nicole C. Kr¨ amer

507

SoNa: A Multi-agent System to Support Human Navigation in a Community, Based on Social Network Analysis . . . . . . . . . . . . . . . . . . . . . . Shizuka Kumokawa, Victor V. Kryssanov, and Hitoshi Ogawa

509

Animating Unstructured 3D Hand Models . . . . . . . . . . . . . . . . . . . . . . . . . . Jituo Li, Li Bai, and Yangsheng Wang Verification of Expressiveness of Procedural Parameters for Generating Emotional Motions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Yueh-Hung Lin, Chia-Yang Liu, Hung-Wei Lee, Shwu-Lih Huang, and Tsai-Yen Li Individualised Product Portrayals in the Usability of a 3D Embodied Conversational Agent in an eBanking Scenario . . . . . . . . . . . . . . . . . . . . . . . Alexandra Matthews, Nicholas Anderson, James Anderson, and Mervyn Jack Multi-agent Negotiation System in Electronic Environments . . . . . . . . . . . Dorin Militaru A Study of the Use of a Virtual Agent in an Ambient Intelligence Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Germ´ an Montoro, Pablo A. Haya, Sandra Baldassarri, Eva Cerezo, and Francisco Jos´e Ser´ on

511

514

516

518

520

Automatic Torso Engagement for Gesturing Characters . . . . . . . . . . . . . . . Michael Neff

522

Modeling the Dynamics of Virtual Agent’s Social Relations . . . . . . . . . . . . Magalie Ochs, Nicolas Sabouret, and Vincent Corruble

524

Proposal of an Artificial Emotion Expression System Based on External Stimulus and Emotional Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Seungwon Oh and Minsoo Hahn

526



A Reactive Architecture Integrating an Associative Memory for Sensory-Driven Intelligent Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . David Panzoli, Herv´e Luga, and Yves Duthen

528

Social Responses to Virtual Humans: Automatic Over-Reliance on the “Human” Category . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sung Park and Richard Catrambone

530

Adaptive Self-feeding Natural Language Generator Engine . . . . . . . . . . . . Jovan David Rebolledo M´endez and Kenji Nakayama

533

A Model of Motivation for Virtual-Worlds Avatars . . . . . . . . . . . . . . . . . . . Genaro Rebolledo-Mendez, David Burden, and Sara de Freitas

535

Towards Virtual Emotions and Emergence of Social Behaviour . . . . . . . . . Dirk M. Reichardt

537

Using Virtual Agents for the Teaching of Requirements Elicitation in GSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miguel Romero, Aurora Vizca´ıno, and Mario Piattini

539

A Virtual Agent’s Behavior Selection by Using Actions for Focusing of Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Haris Supic

541

Emergent Narrative and Late Commitment . . . . . . . . . . . . . . . . . . . . . . . . . Ivo Swartjes, Edze Kruizinga, Mari¨et Theune, and Dirk Heylen

543

“I Would Like to Trust It but” Perceived Credibility of Embodied Social Agents: A Proposal for a Research Framework . . . . . . . . . . . . . . . . . Federico Tajariol, Val´erie Maffiolo, and Gaspard Breton

545

Acceptable Dialogue Start Supporting Agent for Avatar-Mediated Multi-tasking Online Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Takahiro Tanaka, Kyouhei Matsumura, and Kinya Fujita

547

Do You Know How I Feel? Evaluating Emotional Display of Primary and Secondary Emotions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Julia Tolksdorf, Christian Becker-Asano, and Stefan Kopp

548

Comparing Emotional vs. Envelope Feedback for ECAs . . . . . . . . . . . . . . . Astrid von der P¨ utten, Christian Reipen, Antje Wiedmann, Stefan Kopp, and Nicole C. Kr¨ amer

550

Intelligent Agents Living in Social Virtual Environments – Bringing Max into Second Life . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Erik Weitnauer, Nick M. Thomas, Felix Rabe, and Stefan Kopp

552

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

555

The Relation between Gaze Behavior and the Attribution of Emotion: An Empirical Study

Brent Lance and Stacy C. Marsella
University of Southern California, Information Sciences Institute
4676 Admiralty Way, Suite 1001, Marina Del Rey, CA 90034
{lance,marsella}@isi.edu

Abstract. Real-time virtual humans are less believable than hand-animated characters, particularly in the way they perform gaze. In this paper, we provide the results of an empirical study that explores an observer’s attribution of emotional state to gaze. We have taken a set of low-level gaze behaviors culled from the nonverbal behavior literature; combined these behaviors based on a dimensional model of emotion; and then generated animations of these behaviors using our gaze model based on the Gaze Warping Transformation (GWT) [9], [10]. Then, subjects judged the animations displaying these behaviors. The results, while preliminary, demonstrate that the emotional state attributed to gaze behaviors can be predicted using a dimensional model of emotion, and show the utility of the GWT gaze model in performing bottom-up behavior studies.

Keywords: Gaze, Nonverbal Behavior, Emotional Expression, Character Animation, Procedural Animation, Motion Capture, Posture, Virtual Agent.

1 Introduction

Animated characters in feature films function at a high level of believability, appear to come alive, and successfully engage the film’s audience, as do characters in many video games, although arguably to a lesser extent. Unfortunately, virtual embodied agents struggle to achieve this goal. The animation methods used to create film and video game characters are expensive and time consuming, and allow only limited interaction in dynamic environments, making them unsuitable for the development of virtual embodied agents. Instead, real-time animation systems that express believable behavior are necessary.

Our specific interest is in gaze behavior, which is expressive not only in terms of where the gaze is directed, but also in how the gaze is performed: its physical manner. The goal of this paper is thus to find a model that maps between emotion and the physical manner of gaze. The purpose of this mapping is to allow the generation of believable, emotionally expressive gaze shifts in an interactive virtual human, while using the minimum amount of motion capture data necessary to maintain realistic physical gaze manner.

We present an approach to this problem that consists of an exploratory empirical study of the mapping between a set of gaze behaviors and the emotional content attributed to gaze shifts performing those behaviors by observers. This study is similar to the “reverse engineering” approach used by Grammer et al. [7] to study emotional state and facial expression. In this context, “reverse engineering” means a bottom-up approach in which nonverbal behavior expressions are generated through the combination of low-level physical behaviors, and then displayed to subjects who rate the expression on its emotional content. Specifically, Grammer et al. [7] use Poser to generate random facial expressions from the space of all possible combinations of FACS Action Units. Users then evaluated the resulting expressions using a circumplex model of emotion.

Similarly, we found a model that describes the mapping between gaze behaviors and the attribution of emotion to gaze shifts displaying those behaviors by first determining how the model describes emotion. We used two different representations of emotion: a set of emotional categories, such as anger or fear, and the PAD dimensional model of emotion [13]. Then we determined the space of possible gazes and the physical manners they perform. To do this, we culled a set of low-level, composable gaze behaviors from the nonverbal behavior literature, such as a bowed head during gaze. We then generated all possible gazes allowed by our space of low-level behaviors using our model of expressive gaze manner based on the Gaze Warping Transformation (GWT) [9], [10]. We use this model because it is capable of displaying an arbitrary selection of gaze behaviors while directed towards an arbitrary target with a minimal library of motion capture data.

In [9], we first described and evaluated the Gaze Warping Transformation (GWT), a method for producing emotionally expressive head and torso movement during gaze shifts. We then provided a neuroscience-based eye model and integrated it with GWTs [10]. Finally, we collected data to determine what emotional states subjects attributed to animated characters displaying these behaviors during gaze shifts. As a result of this reverse engineering study, we were able to demonstrate that composition of these low-level gaze behaviors preserved the PAD dimensional ratings. These results, while promising, are still preliminary. However, the study clearly demonstrates the utility of the GWT as a research tool beyond generating animations, and points out several areas for future research. While these results have the most application to our GWT-based gaze model, any procedural gaze model with sufficient control over the animation curves used to generate gaze shifts should be able to take advantage of this mapping.

2 Related Work

There have been many implementations of gazing behaviors in real-time applications such as embodied virtual agents. Several of these gaze implementations in virtual characters are based on communicative signals (e.g., [2], [16]). Other gaze models have been developed for agents that perform tasks in addition to dialog, such as [6], [17]. There are also models of resting gaze, which simulate eye behavior when the eye is not performing any tasks [4], [11]. Additionally, there are attention-based models of gaze that perform eye movements based on models of attention and saliency [18], [19].

Several trends can be seen in these implementations of gaze. First, the models focus on when and where the character looks, not on how the gaze shift occurs. Second, these models, with few exceptions, focus on communicative or task-related gaze behaviors, not on how gaze reveals emotional state.

In addition to the previous research on implementing models of nonverbal gazing behavior, there has been recent work on manipulating parameters that describe the way in which general movement is performed, a concept referred to as manner or style. This research provides methods for manipulating the way in which movements are performed, or for obtaining the style from one movement and transferring it to another [1], [3], [22]. It was inspirational to the development of the Gaze Warping Transformation, but does not deal with the constraints specific to gaze movement, nor does it identify specific styles and their expressive meaning, which is the purpose of this study.

3 Expressive Gaze Model

We used our previous work on gaze to generate the gaze shifts for this study. Our gaze model combines two parts. The first is a parameterization called the Gaze Warping Transformation (GWT), which generates emotionally expressive head and torso movement during gaze shifts [9]. The GWT is a set of parameters that transforms an emotionally neutral gaze shift towards a target into an emotionally expressive gaze shift directed at the same target. A small number of GWTs can then produce gazes displaying varying emotional content directed towards arbitrary targets. The second part is a procedural model of eye movement based on stereotypical eye movements described in the visual neuroscience literature [10]. The procedural eye movement is layered framewise onto the GWT-generated body movement. Emotion is expressed using the GWT, while the procedural eye model ensures realistic motion.

3.1 Gaze Warping Transformation

A Gaze Warping Transformation, or GWT, is found by obtaining two motion captures of gaze shifts directed from the same start point to the same target, one emotionally expressive, the other emotionally neutral, and finding a set of warping parameters that would convert the animation curve representing each degree of freedom in the emotionally neutral animation into the animation curve for the corresponding degree of freedom in the emotionally expressive movement [9].

This works by transforming the keyframes of an animation curve. The keyframes of an animation are a subset of that animation’s frames, such that the values of the motion curves for intermediate frames are found by interpolating between the keyframes. We select the keyframes for each gaze by aligning it to a “stereotypical” gaze shift with known keyframe locations [10]. The gazes are aligned using the ratio of the movement that has occurred by each frame to the movement throughout the entire curve [1].

The result is a set of keyframes $x(t)$, defined as a set of (value, frame) pairs $(x_i, t_i)$. These keyframes are transformed into those of a new motion $x'(t')$, defined as the set of pairs $(x_i', t_i')$, through the use of two functions [21]. The first function, given a frame in the emotional curve $t_i'$, calculates the frame $t_i$ in the neutral motion curve to obtain the corresponding amplitude $x_i$. For the GWT, we use the function

$$t_i = g(t_i') , \quad (1)$$

$$g(t_i') = c(t_i') \cdot (t_i' - t_{i-1}') , \quad (2)$$

where, given a frame time in the emotional movement $t_i'$, $g(t)$ determines the corresponding frame $t_i$ in the neutral movement through a scaling parameter $c(t_i')$, which scales the time span between two adjacent keyframes. The second function is

$$x'(t_i') = x(t_i) + b(t_i) , \quad (3)$$

where $b(t_i)$ is a spatial offset parameter that transforms the neutral curve amplitude $x(t_i)$ into the corresponding emotional amplitude $x'(t_i')$.

The final GWT is an $m \times n$ set of $(c, b)$ pairs, where $m$ is the number of degrees of freedom in the animated body and $n$ is the number of keyframes in the animation. As the GWT is based on a technique of simple geometric transformations [21], the generated animations can move outside the physical limits of a human body. To solve this, we use an inverse kinematics system implemented using nonlinear optimization. This system simulates a rigid skeleton, keeping our animated movement within the limits of the human body [10].

Table 1. List of Gaze Types
Eye-Only Gaze Shift
Eye-Head Gaze Shift
Eye-Head-Body Gaze Shift
Head-Only Movement
Head-Body Movement
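As an illustration only (not the authors' code), Eqs. (1)–(3) for a single degree of freedom could be transcribed as in the following sketch. The function name, the use of linear interpolation to sample the neutral curve, and the handling of the first keyframe are assumptions made for the sketch, not details taken from the paper.

```python
import numpy as np

def warp_keyframes(neutral_frames, neutral_values, emo_frames, c, b):
    """Minimal sketch of Eqs. (1)-(3) for one animation curve (one DOF).

    neutral_frames, neutral_values: keyframes of the neutral gaze shift.
    emo_frames: keyframe times t_i' of the emotional (warped) curve.
    c, b: per-keyframe GWT parameters (time scaling and spatial offset).
    Returns the keyframe values x'(t_i') of the emotional curve.
    """
    emo_values = []
    prev_t = emo_frames[0]
    for t_prime, c_i, b_i in zip(emo_frames, c, b):
        # Eqs. (1)-(2): locate the corresponding frame in the neutral curve
        # (the first keyframe is simply taken from the neutral curve's start).
        if t_prime == prev_t:
            t_neutral = neutral_frames[0]
        else:
            t_neutral = c_i * (t_prime - prev_t)
        # Sample the neutral curve (linear interpolation between keyframes).
        x_neutral = np.interp(t_neutral, neutral_frames, neutral_values)
        # Eq. (3): add the spatial offset to obtain the emotional amplitude.
        emo_values.append(x_neutral + b_i)
        prev_t = t_prime
    return emo_values
```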

3.2 Procedural Model of Eye Movement

In addition to the GWT, which describes head and torso movement during gaze shifts, we developed an integrated procedural model of eye movement [10]. This model is based on the visual neuroscience literature, specifically on research describing the different movements eyes perform during gaze and the way in which eye movement and head movement are integrated during gaze shifts [12]. It generates several classes of gaze shifts (Table 1) using the following building blocks:

• Saccades. The saccade is a very rapid, highly stereotyped eye movement which rotates the eye from its initial position directly to the target.
• Vestibulo-Ocular Reflex (VOR). Through the VOR, the eyes rotate within their orbit so that the gaze maintains the same target while the head moves. It produces the Head-Only and Head-Body movements.
• Combined Eye-Head Movement. This integrates eye movement and head-torso movement, and generates the Eye-Head and Eye-Head-Body gaze shifts.
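For illustration only, a per-frame update that combines these building blocks might look like the sketch below. It is not the authors' implementation: it treats a single yaw angle, ignores orbital limits, and the function name, saccade speed, and tolerance are assumed values.

```python
def update_eye_angle(eye, head, target, dt, saccade_speed=500.0, tol=0.5):
    """Per-frame eye yaw update (degrees), combining a saccade with the VOR.

    eye:    current eye-in-head yaw
    head:   current head yaw in world coordinates
    target: desired gaze direction in world coordinates
    dt:     frame duration in seconds
    """
    gaze_error = target - (head + eye)  # how far current gaze is from the target
    if abs(gaze_error) > tol:
        # Saccade: rotate the eye rapidly and directly toward the target.
        step = max(-saccade_speed * dt, min(saccade_speed * dt, gaze_error))
        return eye + step
    # VOR: counter-rotate the eye against head motion so gaze stays on target.
    return target - head
```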


4 Approach

We performed an empirical study to determine a preliminary mapping between a space of gaze behaviors and the emotion attributed to those behaviors by subjects. To obtain this mapping, we first selected appropriate emotion models and the space of gaze behaviors to map between. To determine the mapping between a particular gaze and the attribution of emotion to that gaze, we use a “reverse engineering” approach [7]. Specifically, we generate all the gazes allowed by our space of gaze behaviors, and collect data on subjects attributing emotion to these gaze shifts.

4.1 Structure of Model

Selected Emotion Model. There are many potential models of emotion we could have mapped to the gaze behaviors. We selected two. The first is the PAD model [13], a model that views emotion as a space described by three dimensions: pleasure / displeasure, arousal / non-arousal, and dominance / submissiveness. Categories of emotion, such as anger or happiness, are represented in this model by subregions of the space defined by the emotional dimensions. For example, anger is defined as negative valence, high arousal, and high dominance, while fear is defined as negative valence, high arousal, and low dominance. We also use a categorization of emotion to map gaze behaviors to a set of intuitive emotional descriptors. Rather than using an existing categorical model, this categorization is derived from observer responses to the animations.

Table 2. Gaze Behaviors (Hypothesized Behaviors)
Head Raised
Head Bowed
Faster Velocity
Slower Velocity
Torso Raised
Torso Bowed
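To make the dimensional representation described above concrete, emotion categories can be treated as prototype points or regions in PAD space, as in the following illustrative sketch. The numeric coordinates are assumptions; only their signs follow the qualitative description in the text.

```python
# Illustrative prototype points in PAD space (each axis in [-1, 1]).
PAD_PROTOTYPES = {
    "anger":   (-0.6, 0.7, 0.6),   # negative pleasure, high arousal, high dominance
    "fear":    (-0.6, 0.7, -0.6),  # negative pleasure, high arousal, low dominance
    "joy":     (0.7, 0.5, 0.4),
    "sadness": (-0.6, -0.4, -0.4),
}

def nearest_category(p, a, d):
    """Map a PAD rating to the closest prototype category (Euclidean distance)."""
    return min(PAD_PROTOTYPES,
               key=lambda k: sum((x - y) ** 2
                                 for x, y in zip(PAD_PROTOTYPES[k], (p, a, d))))
```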

Selected Gaze Behavior. In addition to the emotional model, we had to determine a space of gaze behaviors, due to the lack of a descriptive set of known gaze behaviors analogous to the FACS system. We identified a set of “emotional behaviors” from the psychology and arts literature that are likely to be used to reveal emotional state. This set of behavior guidelines can be seen in Table 2. These guidelines are simplifications of the actual literature [5, 8]. Our guidelines are that users will view the character as more dominant when its head is turned upwards than when its head is turned downwards [15], that the perception of arousal is strongly related to velocity [9], and that vertical posture of the body will display emotional pleasure [20]. While there are many alternative gaze behaviors that could also be modeled using the GWT, such as subtle variations in dynamics, or wider variations on posture, this limited set provides a starting point for this research.


4.2 Motion Capture Collection

For the head and torso behaviors, we asked the actor to perform "raised," "neutral," and "bowed" versions of the behavior, and collected data from the resulting movement. We also collected "fast," "neutral," and "slow" velocity movements. However, the "raised" torso posture was indistinguishable from the neutral torso posture, due to the limitations of the motion tracking system we used, resulting in the set of physical behaviors shown in Table 3. All captured gaze shifts consisted of the desired behavior being displayed in a gaze aversion that started gazing straight ahead in a neutral position and posture, and ended gazing 30 degrees to the right displaying the intended gaze behavior. From this motion data, we produced eight behavior GWTs, one for each behavior listed in Table 3.

We also collected motion capture of the different gaze types (Table 1), and produced GWTs for each gaze type as well as the gaze behaviors. The gaze types were captured as gaze aversions that began gazing straight ahead and ended gazing 30 degrees to the right, and gaze attractions that began 30 degrees to the right and ended gazing straight ahead. This resulted in 10 GWTs – one aversive and one attractive gaze shift for each of the different types of gaze in Table 1.

Table 3. Discretization of Gaze Behaviors
  Behavior Dimension    Possible Values
  Head Posture          Raised, Neutral, Bowed
  Torso Posture         Neutral, Bowed
  Movement Velocity     Fast, Neutral, Slow
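Anticipating the construction described in Section 4.3 below, the discretization in Table 3 combined with the ten gaze-type GWTs yields exactly 150 animations once the physically implausible raised-head/bowed-torso poses are excluded. The enumeration can be checked mechanically; the Python sketch below is an illustration, not the authors' pipeline.

    # Enumerating the behavior combinations of Table 3 and the gaze types of
    # Table 1; the raised-head/bowed-torso exclusion is described in Sect. 4.3.
    from itertools import product

    heads = ["Raised", "Neutral", "Bowed"]
    torsos = ["Neutral", "Bowed"]
    velocities = ["Fast", "Neutral", "Slow"]
    gaze_types = [f"{g} ({d})"
                  for g in ["Eye-Only", "Eye-Head", "Eye-Head-Body",
                            "Head-Only", "Head-Body"]
                  for d in ["aversion", "attraction"]]

    behavior_combos = [(h, t, v) for h, t, v in product(heads, torsos, velocities)
                       if not (h == "Raised" and t == "Bowed")]  # implausible pose
    assert len(behavior_combos) == 15

    animations = list(product(behavior_combos, gaze_types))
    assert len(animations) == 150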

4.3 Animation Generation

From these 8 GWTs representing the discretized physical behaviors (Table 3) and 10 GWTs representing the various gaze types (Table 1), we generated 150 animations for use in our empirical bottom-up study. We combined the gaze behaviors in Table 3 in all possible ways, leaving out combinations of a raised head with bowed torso due to the physical implausibility of the behavior, resulting in 15 total behavior combinations. Then, these combined gaze behaviors were applied to the 10 gaze type GWTs, resulting in 150 GWTs. Finally, to generate the animations, we applied these 150 GWTs to neutral gaze shifts, with the resulting output rendered using Maya. These animations can be seen at: http://www.isi.edu/~marsella/students/lance/iva08.html

4.4 Category Formation

In order to determine the categories for our primary experiment, and obtain a picture of how well the animated gaze behaviors covered the emotional space defined by the emotion models, we performed a preliminary category formation study.

Approach. 31 informally selected people each viewed 20 animations randomly selected from the set of 150 animations with no duplicates, giving us 620 views, or approximately 4 per animation, and provided an open-ended written response to the question "What emotional state is the character displaying?" We then categorized the affective responses based on the hierarchical model of emotion described in [14].


Results. We used the hierarchical model as a sorting guideline to divide the individual responses into ten categories (Table 4); for example, categorizing "expression of contempt" as Contempt, or "terrified" as Fear. However, we utilized additional categories not described by the hierarchical model. After categorizing the responses, we then selected categories where at least one video had 50% of the subjects rate it with that category. We then discarded those categories that were related to attention, discarding responses such as "change in attention," "displaying strong interest," and "distracted." Finally, we discarded the responses indicating "uncertainty," as we were concerned that it would be applied when the subject was uncertain of the character's state, not when the character was displaying uncertainty.

Table 4. Emotional Categories
  Anger, Contempt, Disbelief, Excitement, Fear, Flirtatious, Guilt, Sadness, Secretive, Surprise

4.5 Emotional Attribution Experiment

After selecting the low-level behaviors, generating the animations, and setting the emotional categories, we performed the empirical study. The animations were placed online, and subjects rated each animation in two ways: first by selecting the emotional category (Table 4) that most closely approximated the emotion that they perceived in the animation, and second by locating the animation's perceived emotion along the emotional dimensions of the PAD model. One hundred subjects selected through social networking rated fifteen unique, randomly selected animations each, resulting in ten ratings for each of the 150 animations. Subjects rated the animation's location within the PAD model by using five-point Likert scales to indicate their agreement with two statements representing each dimension, seen in Table 5. The Likert scales were 1 = Strongly Disagree, 2 = Disagree, 3 = N/A, 4 = Agree, 5 = Strongly Agree. Emotional categories and rating statements were displayed in random order.

Table 5. Emotional Dimension Rating Scales
  Emotional Dimension   Rating Statement
  High Dominance        The character is dominant.
  Low Dominance         The character is submissive.
  High Arousal          The character is agitated.
  Low Arousal           The character is relaxed.
  High Valence          The character is pleased.
  Low Valence           The character is displeased.


5 Results

We uncovered the mapping between emotion models and physical behaviors in order to answer the following questions:
1. How did the subjects rate gaze shifts containing the low-level gaze behaviors in Table 3 along the PAD dimensions?
2. Does composition of low-level gaze behaviors in Table 3 preserve the PAD dimensions? For example, if a gaze shift displays low Dominance and low Pleasure behaviors, are low Dominance and low Pleasure attributed to it?
3. How did the subjects rate gaze shifts containing the low-level gaze behaviors in Table 3 using the emotional categories in Table 4?

While we had originally intended to find which of the 150 individual animations varied across emotional state, ten ratings per animation was too few to perform a reliable statistical analysis. Instead, we combined the gazes across gaze type (Table 1), giving us 50 ratings for each of the 15 combinations of gaze behaviors.

5.1 Dimensional Results

How reliable were the dimensional rating scales? Before exploring the dimensional results, we tested how well our dimensional rating scales measured the emotional dimensions they were intended to by calculating the correlation and Cronbach's Alpha between each pair of rating scales from Table 5. The Pleased and inverted Displeased scales performed well: the correlation between the two was 0.615, and the standardized Alpha score indicating scale reliability was high, with α = 0.7610 (α > 0.7 is considered a reliable scale). Dominant and inverted Submissive also did well, with a correlation of 0.6649 and a high Alpha (α = 0.7987). Therefore, we averaged Pleased and inverted Displeased into one Pleasure scale, and combined Dominant and inverted Submissive into one Dominance scale. The correlation between the Dominance and Pleasure scales was low (0.1569), indicating no overlap. However, using the ratings of Relaxed and Agitated as a scale for Arousal was less reliable, as both the correlation (0.3745) and Alpha (α = 0.5449) were low. In addition, the correlations between Relaxed and Pleased (0.5353) and between Agitated and Displeased (0.4889) were higher than between Relaxed and Agitated. There are several possible explanations for this, and further research will be necessary to determine the actual reason, but for the remainder of this paper we will use the two scales separately as Relaxed and Agitated.

As we used 5-point Likert scales, but only animated 3-point scales of physical behavior, we condensed the collected data into 3-point scales by combining "Strongly Disagree" and "Disagree", as well as "Strongly Agree" and "Agree", leaving Neutral ratings unchanged.

How did the subjects rate gaze shifts containing the low-level gaze behaviors in Table 3 along the PAD dimensions? To answer this question, we performed a series of MANOVAs (multivariate analysis of variance) and t-tests to determine whether or not the mean emotion dimension ratings differed across the low-level behaviors found in Table 3. Results of this analysis can be seen in Table 6.
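For a two-item scale, the standardized Cronbach's Alpha used above reduces to 2r/(1 + r), which reproduces the published values (r = 0.615 gives α ≈ 0.76 and r = 0.6649 gives α ≈ 0.80). The Python sketch below shows that check together with the 5-point to 3-point collapsing just described; plain integer ratings stand in for the actual data.

    # Standardized Cronbach's Alpha for a two-item scale, and collapsing
    # 5-point Likert ratings (1..5) to 3 points as described in the text.

    def alpha_two_items(r):
        """Standardized alpha for two items with inter-item correlation r."""
        return 2 * r / (1 + r)

    def collapse_to_3_point(rating):
        """1/2 -> disagree (1), 3 -> neutral (2), 4/5 -> agree (3)."""
        return {1: 1, 2: 1, 3: 2, 4: 3, 5: 3}[rating]

    print(round(alpha_two_items(0.615), 3))   # ~0.76 (Pleased / inv. Displeased)
    print(round(alpha_two_items(0.6649), 3))  # ~0.80 (Dominant / inv. Submissive)
    print([collapse_to_3_point(r) for r in [1, 2, 3, 4, 5]])  # [1, 1, 2, 3, 3]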


Table 6. Significant Relationships between PAD Dimensions and Gaze Behaviors
  Emotional Dimension   Head          Body      Velocity
  High Dominance        Raised        Bowed     Fast
  Low Dominance         Bowed         Neutral   Non-Fast
  Relaxed               –             Bowed     –
  Agitated              Non-Bowed     –         Fast
  High Pleasure         Neutral       Bowed     –
  Low Pleasure          Non-Neutral   Neutral   –
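The factorial analyses detailed in the following paragraphs could be approximated with standard statistical tooling along the lines sketched below. This is a hedged illustration: the paper does not state which software was used, the column names and synthetic data are invented, and a single-response factorial ANOVA plus a pairwise t-test stand in for the reported MANOVA and t-test procedure.

    # Hedged sketch of one factorial analysis: dominance ratings as a function
    # of head orientation, torso orientation, velocity, and subject, with a
    # head x torso interaction, followed by a pairwise t-test. Synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({
        "head": rng.choice(["Raised", "Neutral", "Bowed"], n),
        "torso": rng.choice(["Neutral", "Bowed"], n),
        "velocity": rng.choice(["Fast", "Neutral", "Slow"], n),
        "subject": rng.integers(1, 21, n),
        "dominance": rng.integers(1, 4, n),   # collapsed 3-point ratings
    })

    model = smf.ols("dominance ~ C(head) * C(torso) + C(velocity) + C(subject)",
                    data=df).fit()
    print(anova_lm(model, typ=2))             # F and p per factor and interaction

    raised = df.loc[df["head"] == "Raised", "dominance"]
    bowed = df.loc[df["head"] == "Bowed", "dominance"]
    print(stats.ttest_ind(raised, bowed))     # follow-up group comparison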

Four MANOVAs were performed, each with one dimension (Pleasure, Agitation, Relaxation, or Dominance) as the dependent variable, and Head Orientation, Torso Orientation, Velocity, and Subject as the independent variables, while testing for second degree factorial interaction between the independent variables.

The MANOVA results for Dominance showed significant effects (N = 1500, DF = 18, F = 14.51, p < .001) for head orientation (F = 24.08, p < .001), torso orientation (F = 82.55, p < .001), and velocity (F = 7.38, p < .001), with a significant interaction between head and torso orientation (F = 6.47, p < .05). The t-tests showed clear differences between group means, with raised head corresponding to higher Dominance, and bowed head with lower. In addition, the t-tests revealed that a bowed posture was rated higher than a neutral posture, and that the Dominance rating for fast was higher than for slow or for neutral (all significant to p < .01).

The Relaxed results showed significant differences (N = 1500, DF = 18, F = 1.89, p < .05) across the torso orientation (F = 11.41, p < .001) and the velocity (F = 3.78, p < .05), with a significant interaction effect between torso and velocity (F = 3.68, p < .05); the t-tests revealed that a bowed body was rated more Relaxed than a neutral body (p < .01). However, the t-tests did not reveal useful information about the velocity, indicating that the significant difference found by the MANOVA was likely related to the interaction between torso and velocity.

The MANOVA for Agitation found significant differences (N = 1500, DF = 18, F = 4.60, p < .001) across the head orientation (F = 19.61, p < .001), the velocity (F = 6.04, p < .01), and the subject (F = 17.12, p < .001), and a significant interaction effect between the head and the velocity (F = 7.17, p < .05). The t-tests showed that raised and neutral head were rated as significantly more Agitated than bowed head, and the rating for high velocity was higher than for slow or neutral (p < .05).

Finally, the analysis revealed that Pleasure significantly differed (N = 1500, DF = 18, F = 5.93, p < .001) across both the vertical orientation of the head (F = 6.58, p < .05) and the torso (F = 77.57, p < .001), with no significant interaction effects. The ratings for Pleasure also differed significantly (F = 4.06, p < .05) across subject. T-tests (p < .01) showed that the Pleasure rating for a neutral head orientation was significantly higher than those for bowed and raised head orientations, and that a bowed posture was rated higher than a neutral posture.

Does composition of low-level gaze behaviors preserve the PAD dimensions? In order to determine whether the low-level behaviors can be combined according to the PAD model of emotion, we performed a second analysis. We performed six MANOVAs, each using an emotional dimension (High Dominance, Low Dominance, Relaxed, Agitated, High Pleasure, and Low Pleasure) as the dependent variable.


We then used the number of behaviors associated with that emotional dimension and the subject as the two independent variables. This tested whether or not gaze shifts displaying different numbers of behaviors attributed to a specific emotional dimension would have different values attributed to them. The results of this analysis showed that mean attributed ratings for an emotional dimension increased as the number of gaze behaviors associated with that emotional dimension increased, as seen in Figure 1. This indicates that physical gaze behaviors, when combined according to PAD dimensions, will be rated as predicted by the combined behaviors.

The specific results for dominance show significant differences (N = 1500, DF = 6, F = 32.24, p < .01) across the number of both low (F = 14.17, p < .001) and high (F = 26.39, p < .001) dominance behaviors displayed in a gaze shift, and a significant interaction effect (F = 6.93, p < .01) between low and high dominance. T-tests showed that as the number of dominance gaze behaviors increased, the rating of dominance significantly increased (p < .01) for high dominance behaviors, and significantly decreased (p < .05) for low dominance behaviors.

The MANOVA for Agitated revealed significant differences (N = 1500, DF = 3, F = 18.31, p < .001) across the number of behaviors displayed in a shift, and showed significant differences across subjects (F = 20.50, p < .001), with no interaction effects. T-tests demonstrated that gaze shifts with no Agitated behaviors were rated significantly less agitated than those with 1 or 2 Agitated behaviors (p < .01).

Both low and high pleasure showed significant differences across the number of behaviors (N = 1500, DF = 3, F = 22.96, p < .001), although there were also significant differences across subjects (F = 4.87, p < .05), and no interaction effects. Subsequent t-tests showed that mean ratings of pleasure significantly differed (p < .01) across all numbers of pleasure-associated behaviors, and that as low pleasure behaviors increased, pleasure decreased, and vice versa for high pleasure behaviors. As the relaxed dimension only had one behavior associated with it, no further testing was performed.

5.2 Categorical Results

How did the subjects rate gaze shifts containing the low-level gaze behaviors in Table 3 using the emotional categories in Table 4? To answer this question, we generated a cross tabulation of the 15 combinations of gaze behaviors against the emotional categories (Table 4), and used Pearson's chi-squared (χ2) test to examine relationships in the data. We then performed further tests on the residuals to determine which had significant differences.

Table 7. Emotional Categories and Significantly Related Behavior Combinations
  Contempt     Head Raised, Body Neutral, Velocity Neutral
  Excitement   Head Neutral, Body Bowed, Velocity Fast
  Fear         Head Neutral, Body Neutral, Velocity Neutral
               Head Neutral, Body Neutral, Velocity Slow
  Guilt        Head Bowed, Body Neutral, Velocity Neutral
               Head Bowed, Body Neutral, Velocity Slow
  Sadness      Head Bowed, Body Neutral, Velocity Fast
               Head Bowed, Body Neutral, Velocity Neutral


Fig. 1. Plots of Mean Ratings vs. Number of Behaviors for Dominance, Pleasure and Agitation

Results of this analysis can be seen in Table 7. The χ2 test showed that gaze combinations and emotional categories were not randomly related (N = 1500, DF = 126, χ2 = 775.817, p < .01). The table rows show behavior combinations with a significant number (p < .05) of ratings for that emotional category. While only 5 of the 15 gaze behavior combinations had significant associations to emotional categories, it was clear through examination of the residuals that further analysis of the relationship between the emotional categories and the low-level behaviors from Table 3 could be useful. For example, while no individual gaze behavior combination was rated significantly high for Flirtatious, all gaze shifts with the bowed head behavior had more Flirtatious ratings than did the gaze shifts without bowed head. To examine this, we generated crosstabs of individual gaze behaviors against emotional categories, and performed additional χ2 tests (Table 8).

Table 8. Significant Relationships between Emotional Categories and Gaze Behaviors
  Emotional Category   Head      Torso
  Contempt             Raised    Neutral
  Excitement           –         Bowed
  Fear                 –         Neutral
  Flirtatious          Bowed     –
  Guilt                Bowed     Neutral
  Sadness              Bowed     –
  Surprise             Neutral   –


We found significant interactions between head vertical orientation and emotional categories (N = 1500, DF = 18, χ2 = 329.47, p < .001). Testing the residuals showed that the Contempt category was more likely (χ2 = 70.35, p < .05) to be attributed to a gaze shift with the head raised behavior, while Flirtatious (χ2 = 73.41, p < .01), Guilt (χ2 = 81.33, p < .01), and Sadness (χ2 = 42.51, p < .01) were all more likely to be attributed to bowed head gaze shifts. Finally, Surprise was significantly less likely (χ2 = 55.30, p < .01) to be attributed to bowed head gazes. Anger, Disbelief, Excitement, Fear, and Secretive did not relate significantly to head vertical orientation.

Torso posture was not randomly related to emotional category (N = 1500, DF = 9, χ2 = 187.49, p < .001). Excitement was more likely to have a bowed torso (χ2 = 62.94, p < .01), while Contempt (χ2 = 24.24, p < .05), Fear (χ2 = 29.19, p < .01), and Guilt (χ2 = 19.88, p < .01) were attributed more often to neutral torso animations.

We also found, despite our expectations, no strong relationships between the emotional categories and the velocity of the gaze using a crosstab of emotions by velocity. While the chi-squared test showed that emotional category and velocity are not randomly related (N = 1500, DF = 18, χ2 = 42.36, p < .001), upon examination of the residuals no emotional categories significantly differed across velocities.
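An analysis of this kind (cross tabulation, chi-squared test of independence, and inspection of residuals) can be sketched as follows. The synthetic data and column names are placeholders, and standardized residuals are used here as a simple stand-in for whatever residual test the authors applied.

    # Chi-squared test of independence between one gaze behavior (head
    # orientation) and the chosen emotional category, plus standardized
    # residuals to locate the cells driving the effect. Synthetic data.
    import numpy as np
    import pandas as pd
    from scipy.stats import chi2_contingency

    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "category": rng.choice(["Contempt", "Guilt", "Sadness", "Surprise"], 400),
        "head": rng.choice(["Raised", "Neutral", "Bowed"], 400),
    })
    table = pd.crosstab(df["category"], df["head"])

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

    # Cells with large positive residuals occur more often than expected
    # under independence; large negative residuals occur less often.
    residuals = (table - expected) / np.sqrt(expected)
    print(residuals.round(2))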

6 Discussion

As a result of this reverse engineering study, we were able to demonstrate that composition of the low-level gaze behaviors in Table 3 preserved the PAD dimensions (Figure 1). In addition, this preservation of PAD dimensions through composition can even be extended to some emotional categories. For example, Guilt can be mapped into the PAD space as low Pleasure, low Arousal, and low Dominance. By combining the behaviors associated with low Dominance, low Pleasure, and Relaxation (Table 6) we generate a movement with a bowed head, neutral torso, and slow velocity. Table 7 reveals that Guilt is attributed to gaze shifts displaying a bowed head, neutral torso, and slow velocity, just as predicted by the PAD model.

We also have a partial mapping between emotional categories and gaze behaviors. For example, in Table 7, we can see that the gaze categories Contempt, Excitement, Guilt, and Sadness are all clearly associated with specific combinations of gaze behaviors, although there is some overlap between Guilt and Sadness. In addition, through examining the relationships between individual low-level gaze behaviors and emotional categories (Table 8), we obtain reinforcement of this mapping, and additional information about how Flirtatious behavior is displayed as well.

This indicates that virtual embodied agents with disparate models of emotion should be able to make use of this mapping between gaze behaviors and attributed emotional state. If the agent uses a categorical model of emotion with emotional categories beyond those used in this study, then by mapping those categories into the PAD space, appropriate gaze behavior may still be generated.

A more thorough description of the low-level behavior components for gaze, similar to FACS for facial expression, would be very valuable to this type of research. While we determined our own space of gaze behaviors for this study, there are other possible ways to structure the gaze behavior space that may provide better results.
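As the Guilt example suggests, an agent with a dimensional emotion model could pick gaze behaviors by thresholding its PAD state against the associations in Table 6. The sketch below is an illustrative simplification and not part of the study; the numeric thresholds and the example PAD value for guilt are assumptions.

    # Illustrative selection of gaze behaviors from a PAD state, following the
    # qualitative associations of Table 6 (bowed head ~ low dominance, bowed
    # torso ~ high pleasure, fast movement ~ agitation). Simplified rules.

    def select_gaze_behaviors(pleasure, arousal, dominance):
        head = "Bowed" if dominance < 0 else "Raised"
        torso = "Bowed" if pleasure > 0 else "Neutral"
        velocity = "Fast" if arousal > 0 else "Slow"
        return head, torso, velocity

    # Guilt mapped (roughly) to low pleasure, low arousal, low dominance:
    print(select_gaze_behaviors(pleasure=-0.5, arousal=-0.4, dominance=-0.6))
    # -> ('Bowed', 'Neutral', 'Slow'), matching the Guilt rows of Table 7.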


7 Conclusion

In this paper, we have provided the results of a reverse engineering study resulting in a preliminary mapping between gaze behaviors and emotional states that could be used with a variety of gaze or emotion models. In addition, we have shown that combining low-level behaviors associated with emotional dimensions in accordance with those dimensions generates a gaze shift to which subjects attribute the combined emotional state. These results, while promising, are still preliminary. However, this study demonstrates the utility of the GWT as a nonverbal behavior research tool, and points towards several directions for future research.

Many of the results of the mapping were not surprising, such as the link between increased dominance and increased vertical orientation of the head. However, there were unexpected results; for example, the link between high Pleasure and a bowed forward body. This indicates the need for a broader selection of gaze behaviors to determine why these unexpected results occurred. This work would also benefit from a more complete exploration of the way in which emotional state is attributed to different combinations of head and eye behavior, as well as a real-time implementation of the gaze mapping in a virtual embodied agent for evaluation.

Acknowledgements. Dr. Bosco Tjan, Dr. Skip Rizzo, and Tiffany Cole provided invaluable assistance, without which this paper would not have been possible. This work was sponsored by the U.S. Army Research, Development, and Engineering Command (RDECOM). The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.

References

1. Amaya, K., Bruderlin, A., Calvert, T.: Emotion From Motion. In: Proceedings of the Conference on Graphical Interface, pp. 222–229 (1996)
2. Bickmore, T., Cassell, J.: Social Dialogue with Embodied Conversational Agents. In: Bernsen, N. (ed.) Natural, Intelligent and Effective Interaction with Multimodal Dialogue Systems. Kluwer Academic Publishers, Dordrecht (2004)
3. Brand, M., Hertzmann, A.: Style Machines. In: Proceedings of SIGGRAPH 2000 (2000)
4. Deng, Z., Lewis, J.P., Neumann, U.: Automated Eye Motion Using Texture Synthesis. IEEE Computer Graphics and Applications 25(2) (2005)
5. Exline, R.: Visual Interaction: The Glances of Power and Preference. In: Weitz, S. (ed.) Nonverbal Communication: Readings with Commentary. Oxford University Press, Oxford (1974)
6. Gillies, M.F.P., Dodgson, N.A.: Eye Movements and Attention for Behavioral Animation. The Journal of Visualization and Computer Animation 13, 287–300 (2002)
7. Grammer, K., Oberzaucher, E.: Reconstruction of Facial Expressions in Embodied Systems. Mitteilungen, ZiF (2006)
8. Kleinke, C.: Gaze and Eye Contact: A Research Review. Psychological Bulletin 100(1) (1986)
9. Lance, B., Marsella, S.: Emotionally Expressive Head and Body Movement During Gaze Shifts. In: Pelachaud, C., et al. (eds.) IVA 2007. LNCS (LNAI), vol. 4722. Springer, Heidelberg (2007)


10. Lance, B., Marsella, S.: A Model of Gaze for the Purpose of Emotional Expression in Virtual Embodied Agents. In: Padgham, Parkes, Müller, Parsons (eds.) Proc. of 7th Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS 2008) (2008)
11. Lee, S., Badler, J., Badler, N.: Eyes Alive. ACM Transactions on Graphics 21(3) (2002)
12. Leigh, R.J., Zee, D.: The Neurology of Eye Movements, 4th edn. Oxford University Press (2006)
13. Mehrabian, A.: Silent Messages: Implicit Communication of Emotions and Attitudes, 2nd edn. Wadsworth Publishing Company (1981)
14. Metts, S., Bowers, J.: Emotion in Interpersonal Communication. In: Knapp, M., Miller, G. (eds.) Handbook of Interpersonal Communication, 2nd edn. Sage, Thousand Oaks (1994)
15. Mignault, A., Chaudhuri, A.: The Many Faces of a Neutral Face: Head Tilt and Perception of Dominance and Emotion. Journal of Nonverbal Behavior 27(2) (2003)
16. Pelachaud, C., Bilvi, M.: Modeling Gaze Behavior for Conversational Agents. In: Rist, T., Aylett, R., Ballin, D., Rickel, J. (eds.) IVA 2003. LNCS (LNAI), vol. 2792. Springer, Heidelberg (2003)
17. Rickel, J., Johnson, W.L.: Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control. Applied Artificial Intelligence 13(4-5) (1999)
18. Peters, C., Pelachaud, C., Bevacqua, E., Mancini, M.: A Model of Attention and Interest Using Gaze Behavior. In: Panayiotopoulos, T., Gratch, J., Aylett, R.S., Ballin, D., Olivier, P., Rist, T. (eds.) IVA 2005. LNCS (LNAI), vol. 3661. Springer, Heidelberg (2005)
19. Picot, A., Bailly, G., Elisei, F., Raidt, S.: Scrutinizing Natural Scenes: Controlling the Gaze of an Embodied Conversational Agent. In: Pelachaud, C., et al. (eds.) IVA 2007. LNCS (LNAI), vol. 4722. Springer, Heidelberg (2007)
20. Schouwstra, S., Hoogstraten, H.: Head Position and Spinal Position as Determinants of Perceived Emotional State. Perceptual and Motor Skills 81(22), 673–674 (1995)
21. Witkin, A., Popovic, Z.: Motion Warping. In: Proceedings of SIGGRAPH 1995 (1995)
22. Zhao, L., Badler, N.: Acquiring and Validating Motion Qualities from Live Limb Gestures. Graphical Models 67(1) (2005)

Affect Simulation with Primary and Secondary Emotions

Christian Becker-Asano and Ipke Wachsmuth

Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
{cbecker,ipke}@techfak.uni-bielefeld.de

Abstract. In this paper the WASABI (Affect Simulation for Agents with Believable Interactivity) architecture is introduced, in which a virtual human's cognitive reasoning capabilities are combined with simulated embodiment to achieve the simulation of primary and secondary emotions. In modeling primary emotions we follow the idea of "Core Affect" in combination with a continuous progression of bodily feeling in three-dimensional emotion space (PAD space), which is only subsequently categorized into discrete emotions. In humans, primary emotions are understood as ontogenetically earlier emotions, which directly influence facial expressions. Secondary emotions, in contrast, afford the ability to reason about current events in the light of experiences and expectations. By technically representing aspects of their connotative meaning in PAD space, we not only assure their mood-congruent elicitation, but also combine them with facial expressions that are concurrently driven by the primary emotions. An empirical study showed that human players in the Skip-Bo scenario judge our virtual human MAX significantly older when secondary emotions are simulated in addition to primary ones.

1 Introduction and Motivation

Researchers in the field of Embodied Conversational Agents (ECAs) [8,29] build anthropomorphic systems, which are employed in different interaction scenarios that afford communicative abilities of different style and complexity. As these agents comprise an increasing number of sensors as well as actuators, together with an increase in expressive capabilities, they need to be able to recognize and produce social cues in face-to-face communication. One factor in social interaction is the ability to deal with the affective dimension appropriately. Therefore, researchers in the growing field of "Affective Computing" [27] discuss ways to derive human affective states from all kinds of intrusive and non-intrusive sensors. With regard to the expressive capabilities of embodied agents, the integration of emotional factors influencing bodily expressions is argued for. These bodily expressions include, e.g., facial expression, body posture, and voice inflection, and all of them must be modulated in concert to synthesize coherent emotional behavior.


With the WASABI architecture we present our attempt to exploit different findings and conceptions of emotion psychology, neurobiology, and developmental psychology in a fully implemented computational architecture. It is based on the simulation of an emotion dynamics in three-dimensional emotion space, which has proven beneficial in increasing the lifelikeness and believability of our virtual human MAX in two different interaction scenarios [3]. With this emotion dynamics simulation, however, we are limited to a class of rather simple emotions that are similar to Damasio's conception of primary emotions. In the WASABI architecture we therefore use our agent's cognitive reasoning abilities to model the mood-congruent elicitation of secondary emotions as well.

In the following section the psychological as well as neurobiological background is described with respect to the distinction of primary and secondary emotions. Subsequently we give an overview of related work in the field of Affective Computing. The WASABI architecture is presented in Section 3. We explain how nine primary emotions together with three secondary emotions, namely the prospect-based emotions hope, fears-confirmed, and relief, were integrated in such a way that their mood-congruent elicitation can be guaranteed. We conclude by discussing results of a first empirical study on the effect of secondary emotion simulation in the Skip-Bo card game scenario.

2 Background and Related Work

We will first introduce the psychological and neurobiological background before an overview of related work in affective human-computer interaction is given.

2.1 Psychological and Neurobiological Background

According to [33] the "psychological construct" labeled emotion can be broken up into the component of cognitive appraisal, the physiological component of activation and arousal, the component of motor expression, the motivational component, and the component of subjective feeling state. We follow the distinction of a cognitive appraisal component and a physiological component in the computational simulation of affect (the motor expression component has been realized as well, but is not in the scope of this paper; see [3] for details). We also account for the motivational component, because our agent's reasoning capabilities are modeled according to the belief-desire-intention approach to modeling rational behavior [30]. We further believe that we first need to realize the dynamic interaction between a cognitive and a physiological component before we can tackle the question of how to computationally realize a subjective feeling state.

Recently, psychologists started to investigate "unconscious processes in emotions" [35], and Ortony et al. discuss levels of processing in "effective functioning" by introducing a distinction between "emotions" and "feelings" [26]. They understand feelings as "readouts of the brain's registration of bodily conditions and changes" whereas "emotions are interpreted feelings" [26, p. 174], and propose three different levels of information processing [26], which are compatible with Scherer's three modes of representation [35]. According to Ortony and colleagues, lower levels have to contribute in order to experience "hot" emotions, such that "cold, rational anger" could be solely the product of the cognitive component "without the concomitant feeling components from lower levels." [26, p. 197] A purely primitive feeling of fear, on the contrary, also lacks the necessary cognitive elaboration to become a full-blown emotion. These psychological considerations are compatible with LeDoux's distinction of a low and a high road of fear elicitation in the brain [20] and Damasio's assumption of bodily responses causing an "emotional body state" [9, p. 138] that is subsequently analyzed in the thought process.

Two classes of emotions. Damasio's neurobiological research on emotions suggests the distinction of at least two classes of emotions, namely primary and secondary emotions [9]. The term "primary emotions" [9] refers to emotions which are supposed to be innate. They developed during phylogeny to support fast and reactive response behavior in case of immediate danger, i.e. basic behavioral response tendencies like "fight-or-flight" behaviors. In humans, however, the perception of the changed bodily state is combined with the object that initiated it, resulting in a "feeling of the emotion" with respect to that particular object [9, p. 132]. Primary emotions are also understood as prototypical emotion types which can already be ascribed to one-year-old children [10]. They are comparable to the concept of "Core Affect" [32,14], which is based on the assumption that emotions cannot be identified by distinct categories from the start. "Core Affect" is represented in a two-dimensional emotion space of Pleasure/Valence and Arousal, which is sometimes extended by a third dimension [37,31,17] labeled "Control/Dominance/Power" of connotative meaning. Furthermore, Wundt claims that any emotion can be characterized as a continuous progression in such three-dimensional emotion space [37]. The degree of Dominance not only describes the experienced "control" over the emotion or the situational context, but can also belong to an agent's personality traits. The three-dimensional abstract emotion space is often referred to as PAD space.

Secondary emotions like "relief" or "hope" are assumed to arise from higher cognitive processes, based on an ability to evaluate preferences over outcomes and expectations. Accordingly, secondary emotions are acquired during ontogenesis through learning processes in the social context. Damasio uses the adjective "secondary" to refer to "adult" emotions, which utilize the machinery of primary emotions by influencing the acquisition of "dispositional representations", which are necessary for the elicitation of secondary emotions. These "acquired dispositional representations", however, are believed to be different from the "innate dispositional representations" underlying primary emotions. Furthermore, secondary emotions influence bodily expressions through the same mechanisms as primary emotions.

2.2 Related Work

El-Nasr et al. [13] present FLAME, a formalization of the dynamics of 14 emotions based on fuzzy logic rules, which includes a mood value that is continuously calculated as the average of all emotion intensities to provide a solution to the problem of conflicting emotions being activated at the same time. The idea of expectations is realized in FLAME by means of learning algorithms based on rewards and punishments. Although it was not integrated into the simulation of a virtual human, the mutual influence of emotion and mood is quite similar to the conception of emotion dynamics in the WASABI architecture.

Marsella and Gratch [23] focus with their "EMA" model of emotions on the dynamics of emotional appraisal. They also argue for a mood value as an addend in the calculation of otherwise equally activated emotional states, following the idea of mood-congruent emotions. Their framework for modeling emotions is the first fully implemented, domain-independent architecture for emotional conversational agents.

Marinier and Laird [21] aim to combine the work of Gratch and Marsella [23] with the findings of Damasio [9]. In later publications [22], however, Damasio's work is less central and they follow the ideas of Scherer [34]. Their central idea of "appraisal frames" is based on the EMA model [23], and eleven of Scherer's sixteen appraisal dimensions are modeled for integration in the Soar cognitive architecture. They distinguish an "Active Appraisal Frame", which is the result of a momentary appraisal of a given event, from a "Perceived Appraisal Frame", which results from the combination of the actual mood and emotion frames. Thereby, they claim to account for Damasio's distinction between emotion and feeling, similarly to the conception underlying the WASABI architecture.

Although André et al. [1] start with the distinction of primary and secondary emotions, it is not taken up in their later publications, e.g. [16]. Gebhard [15] recently uses the PAD space to derive a mood value from emotions resulting from OCC-based appraisal. Three-dimensional emotion spaces similar to PAD space are also used to drive the sociable robot "Kismet" [7] and the humanoid robot WE-4RII [19].

3 The WASABI Architecture

The WASABI architecture conceptualized here builds upon previous work on the simulation of emotion dynamics for the virtual human MAX [2] that has proven to support the agent’s believability in two different interaction scenarios [3,6]. It was, however, limited to the simulation of primary emotions. Accordingly, the WASABI architecture [5] combines bodily emotion dynamics with cognitive appraisal in order to simulate infant-like primary emotions as well as cognitively elaborated, more adult secondary emotions. In the following a suitable specification of the different concepts emotion and mood is derived from the theoretical background presented above.


Emotions are understood as current states with a specific quality and intensity, which are the outcome of complex neurophysiological processes for communication. These processes include neural activity of the brain as well as physiological responses of the body. One becomes aware of one's emotions in two cases: (1) if their awareness likelihood w exceeds a certain threshold (cf. Section 3.3), or (2) if one concentrates on the underlying processes by means of introspection. Emotions can be classified into primary and secondary ones, but every emotion has either positive or negative valence of a certain value and, compared to mood, an emotion is significantly shorter-lived. The differences between primary and secondary emotions are conceptualized as follows:
– Secondary emotions are based on more complex data structures than primary ones. Accordingly, only some general aspects of secondary emotions (such as their valence components) are represented in PAD space.
– The appraisal of secondary emotions depends much more on the situational and social context than that of primary emotions. Thus, secondary emotions are more dependent on the agent's cognitive reasoning abilities.
– The releasers of secondary emotions might be learned based on the history of primary emotions in connection with memories of events, agents, and objects.
– The agent's facial expressions of primary emotions (cf. Figure 1) may accompany secondary emotions.
– Secondary emotions also modulate the agent's simulated embodiment.

Mood is understood as a background state with a much simpler affective quality than emotions. It is assumed that bodily responses influence the development of mood over time and that a mood is a diffuse valenced state, i.e. the experiencing individual is likely to be unable to give a clear reason for his or her current mood. Emotions have a fortifying or alleviating effect on an individual's mood, which, in turn, influences the elicitation of emotions [24]. A mood's duration is in general longer than that of emotions.

3.1 Nine Primary Emotions

Primary emotions (PE) are inborn affective states, which are triggered by reflexes in case of potentially harmful stimuli. They result in fast, reactive behavioral responses and, thus, are quite similar to the concept of proto-affect proposed by [26]. According to developmental psychology, young children express their (primary) emotions directly, because they have not yet internalized this process as in the case of adults [18]. In our previous realization of emotion dynamics [2] this direct expression of primary emotions is achieved by implementing five of Ekman's six "basic emotions". In addition, the emotions "bored", "annoyed", and "depressed" as well as the non-emotional state "concentrated" are simulated. Each of the primary emotions is located in PAD space according to Table 1, for which the coordinates are derived from some of the values given in [31, p. 286ff]. The seven facial expressions of MAX corresponding to the eight primary emotions and the neutral state "concentrated" (cf. Table 1) are shown in Figure 1.


Fig. 1. The nine primary emotions of Table 1 as points together with the three secondary emotions of Table 2 as weighted areas in PAD space

In the case of high pleasure, Ekman's set of "basic emotions" [12] contains only one obviously positive emotion, namely happiness. Thus, in the presented implementation [5] this primary emotion covers the whole area of positive pleasure regardless of arousal or dominance, as it is located in PAD space four times altogether.

Table 1. Primary emotions in PAD space: The five "basic emotions" of [11] are assigned to the corresponding facial expressions modeled in [2] whenever such a mapping is possible (cp. Figure 1), and an individual base intensity ipe is set for each primary emotion (see also Section 3.3)
  PE               Facial expr. (Ekman)    PAD values                     base intensity ipe
  1. angry         anger (anger)           (-80, 80, 100)                 0.75
  2. annoyed       sad (sadness)           (-50, 0, 100)                  0.75
  3. bored         bored (none)            (0, -80, 100)                  0.75
  4. concentrated  neutral (none)          (0, 0, ±100)                   0.75
  5. depressed     sad (sadness)           (0, -80, -100)                 0.75
  6. fearful       fear (fear)             (-80, 80, -100)                0.25
  7. happy         happy (happiness)       (80, 80, ±100), (50, 0, ±100)  0.75
  8. sad           sad (sadness)           (-50, 0, -100)                 0.75
  9. surprised     surprised (surprise)    (10, 80, ±100)                 0.0

3.2 Three Secondary Emotions

According to Damasio, the elicitation of secondary emotions involves a "thought process", in which the actual stimulus is evaluated against previously acquired experiences and online generated expectations. The "prospect-based emotions" cluster of the OCC model of emotions [25] is considered here to belong to the class of secondary emotions, because their appraisal process includes the evaluation of events against experiences and expectations. This OCC cluster consists of six emotions, of which hope, fears-confirmed, and relief are simulated in the WASABI architecture.

Hope. Ortony et al. describe hope as resulting from the appraisal of a prospective event [25]. If the potential event is considered desirable for oneself, one is likely to be "pleased about the prospect of a desirable event" [25, p. 110]. The calculation of this emotion's awareness likelihood, however, is rather independent of these cognitive processes. The previous analysis provides the rationale for modeling hope in the following way:
– Pleasure: The awareness likelihood of hope increases the more pleasurable the agent feels.
– Arousal: With respect to an agent's arousal, hope is more likely elicited the higher the agent's arousal value.
– Dominance: The awareness likelihood of hope is modeled to be independent of the agent's general level of dominance.

To realize this distribution of awareness likelihood in the case of hope, two areas are introduced in Figure 1, one in the high dominance plane and the other in the low dominance plane. In Table 2 the exact values of the four corners of each of the two areas, together with the respective base intensity in each corner, are given for hope.

Fears-confirmed. According to Ortony et al., fears-confirmed is elicited when being "displeased about the confirmation of the prospect of an undesirable event." [25, p. 110] With respect to its representation in PAD space, the similarity to the primary emotion fearful is taken into account and the following decisions are taken:
– Pleasure: The awareness likelihood of fears-confirmed increases the less pleasurable the agent feels.
– Arousal: fears-confirmed is considered to be independent of the agent's arousal value.
– Dominance: fears-confirmed can only be perceived by the agent when he feels submissive, as in the case of fearful.

This distribution of awareness likelihood is realized in PAD space (cf. Figure 1) by introducing an area in the low dominance plane (cf. Table 2 for the exact coordinates and intensities).

Relief. The secondary emotion relief is described as being experienced whenever one is "pleased about the disconfirmation of the prospect of an undesirable event." [25, p. 110] Taking the similarity with Gehm and Scherer's "content" cluster into account [17], relief is represented in PAD space according to the following considerations:


Table 2. The parameters of the secondary emotions hope, fears-confirmed, and relief for representation in PAD space: each area is given by its four corners as (PAD values), intensity
  HOPE, high dominance:            (100, 0, 100), 0.6;  (100, 100, 100), 1.0;  (-100, 100, 100), 0.5;  (-100, 0, 100), 0.1
  HOPE, low dominance:             (100, 0, -100), 0.6; (100, 100, -100), 1.0; (-100, 100, -100), 0.5; (-100, 0, -100), 0.1
  FEARS-CONFIRMED, low dominance:  (-100, 100, -100), 1.0; (0, 100, -100), 0.0; (0, -100, -100), 0.0; (-100, -100, -100), 1.0
  RELIEF, high dominance:          (100, 0, 100), 1.0;  (100, 50, 100), 1.0;  (-100, 50, 100), 0.2;  (-100, 0, 100), 0.2
  RELIEF, low dominance:           (100, 0, -100), 1.0; (100, 50, -100), 1.0; (-100, 50, -100), 0.2; (-100, 0, -100), 0.2

– Pleasure: relief is more likely to become aware the more pleasurable the agent feels.
– Arousal: Only in case of relatively low arousal levels is the agent assumed to become aware of the emotion relief.
– Dominance: The awareness likelihood of relief is considered to be independent of the agent's state of dominance.

The awareness likelihood is represented in Figure 1 by two areas, one located in the high dominance plane and the other in the low dominance plane (cf. Table 2).

3.3 Emotion Dynamics and Awareness Likelihood

The implementation of emotion dynamics is based on the assumption that an organism's natural, homeostatic state is characterized by emotional balance, which accompanies an agent's normal level of cognitive processing [36]. Whenever an emotionally relevant internal or external stimulus is detected, however, its valence component serves as an emotional impulse, which disturbs the homeostasis, causing certain levels of pleasure and arousal in the emotion module. Furthermore, a dynamic process is started by which these values are continuously driven back to the state of balance (see [3] for details). The two valences are mathematically mapped into PAD space (cf. Figure 1) and combined with the actual level of Dominance, which is derived from the situational context in the cognition of the architecture. This process results in the course of a reference point in PAD space representing the continuously changing bodily feeling state, from which the awareness likelihood of primary and secondary emotions is derived with an update rate of 25 Hz (see also [4,6]).
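The following is a minimal sketch, in Python, of such an emotion dynamics: an emotional impulse perturbs a valence value, which is then driven back toward balance at the 25 Hz update rate mentioned above. The exponential decay law and all constants are assumptions made for illustration; the actual WASABI component implements its own dynamics (see [3]).

    # Minimal sketch of an emotion dynamics loop: emotional impulses disturb
    # the homeostatic balance and the value decays back toward zero at a fixed
    # update rate. The decay law and all constants are illustrative only.

    UPDATE_HZ = 25
    DECAY_PER_STEP = 0.98          # fraction of the displacement kept each tick

    def run(impulses_by_step, steps):
        """impulses_by_step: dict step -> emotional impulse in [-100, 100]."""
        valence = 0.0
        trace = []
        for step in range(steps):
            valence += impulses_by_step.get(step, 0.0)   # appraised stimulus
            valence *= DECAY_PER_STEP                    # drift back to balance
            valence = max(-100.0, min(100.0, valence))
            trace.append(valence)
        return trace

    # A positive stimulus at t = 0 s and a negative one two seconds later.
    trace = run({0: 60.0, 2 * UPDATE_HZ: -80.0}, steps=5 * UPDATE_HZ)
    print(round(trace[0], 1), round(trace[2 * UPDATE_HZ], 1), round(trace[-1], 1))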


Awareness likelihood of primary emotions. The awareness likelihood of any of the nine primary emotions pe (cf. Table 1) increases the smaller the distance between the actual PAD values and the primary emotion's PAD values (i.e. dpe in Equation 1). When dpe falls below Φpe units for a particular primary emotion pe, the calculation of its awareness likelihood wpe is started according to Equation 1, until dpe falls below Δpe units, in which case the likelihood wpe equals the primary emotion's base intensity ipe.

$w_{pe} = \left(1 - \frac{d_{pe} - \Delta_{pe}}{\Phi_{pe} - \Delta_{pe}}\right) \cdot i_{pe}, \quad \text{with} \quad \Phi_{pe} > \Delta_{pe} \;\; \forall pe \in \{pe_1, \ldots, pe_9\}$    (1)

In Equation 1, Φpe can be interpreted as the activation threshold and Δpe as the saturation threshold, which can be adjusted for every primary emotion pen ∈ {pe1, ..., pe9} independently (the nine primary emotions are indexed according to Table 1). By setting a primary emotion's base intensity ipe to 0.0 (as in the case of surprised, cf. Table 1), it needs to be triggered by the cognition before it can gain a non-zero awareness likelihood wpe. In the case of primary emotions that are represented in PAD space more than once (i.e. concentrated, happy, and surprised; cf. Table 1), the representation with the minimum distance to the reference point is considered in Equation 1 for the calculation of its awareness likelihood.

Awareness likelihood of secondary emotions. By representing the three secondary emotions in PAD space, their mood-congruent elicitation can be assured, because the actual PAD values are also relevant for calculating every secondary emotion's awareness likelihood. In contrast to most primary emotions, all secondary emotions' base intensities are set to zero by default (cp. the case of surprised above). Accordingly, every secondary emotion first has to be triggered by a cognitive process before it gains the potential to become aware to the agent. Furthermore, a secondary emotion's lifetime parameter (set to 10.0 by default) together with its decay function (set to linear by default) are used to decrease its intensity over time until its base intensity of zero is reached again.

In the Skip-Bo card game scenario [6], for example, MAX might believe that the opponent may play a card hindering him from fulfilling one of his goals, and the expectation of an undesirable event is generated. Later, upon perceiving and interpreting the opponent's actions, MAX might realize that the opponent fulfilled the expectation by playing that undesired card. As a result, the secondary emotion fears-confirmed is triggered and, at the same time, a negative emotional impulse is sent to the emotion dynamics. If the corresponding PAD values fall into the fears-confirmed region (cf. Figure 1), MAX will become aware that his fears are confirmed, with a likelihood that results from linear interpolation between the current intensities in the four corners of the fears-confirmed area at the location given by the actual PAD values. Currently, each non-zero likelihood of a secondary emotion lets MAX produce an appropriate verbal expression (cf. Figure 2(a) and Figure 2(b)).
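A direct reading of Equation 1, including the minimum-distance rule for emotions with several PAD locations, can be sketched in Python as follows. The Euclidean metric, the example thresholds, and the reduced emotion set are illustrative assumptions rather than the tuned values of the actual system.

    # Sketch of Eq. (1): awareness likelihood of a primary emotion from the
    # distance between the current PAD reference point and the emotion's PAD
    # location(s). Thresholds and the Euclidean metric are assumptions.
    import math

    # A few entries from Table 1; multiply-located emotions list all locations.
    PRIMARY = {
        "happy": {"locations": [(80, 80, 100), (80, 80, -100),
                                (50, 0, 100), (50, 0, -100)], "base": 0.75},
        "concentrated": {"locations": [(0, 0, 100), (0, 0, -100)], "base": 0.75},
        "surprised": {"locations": [(10, 80, 100), (10, 80, -100)], "base": 0.0},
    }

    def awareness(pad, emotion, phi=100.0, delta=20.0):
        """Awareness likelihood w_pe at the current PAD point, with phi > delta."""
        entry = PRIMARY[emotion]
        d = min(math.dist(pad, loc) for loc in entry["locations"])
        if d >= phi:
            return 0.0                      # beyond the activation threshold
        if d <= delta:
            return entry["base"]            # saturated at the base intensity
        return (1 - (d - delta) / (phi - delta)) * entry["base"]

    print(round(awareness((60, 60, 100), "happy"), 2))      # close: high likelihood
    print(round(awareness((60, 60, 100), "surprised"), 2))  # base 0.0: stays 0.0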


(a) MAX expresses his hope that the human player will play the card with the number seven next by saying “Kannst Du nicht die 7 spielen?” (Can’t you play the seven?)

(b) MAX realizes that his fears just got confirmed and utters "Das hatte ich schon befürchtet!" (I was already afraid of that!)

Fig. 2. MAX expressing his hope and realizing that his fears got confirmed

3.4 Connecting Cognition and Embodiment

In Figure 3 the conceptual distinction between an agent's simulated embodiment and its cognition is presented, and the different modules and components of the WASABI architecture are assigned to the corresponding layers. To the left of Figure 3 the virtual human MAX perceives some (internal or external) stimulus. Non-conscious appraisal is realized by directly sending a small positive emotional impulse to the Emotion dynamics component of the WASABI architecture. This establishes the "low road" [20] of primary emotion elicitation. For example, the presence of visitors in the museum [2] is interpreted as intrinsically pleasant, following the ideas of Scherer [34].

Another path resulting in emotional impulses begins with conscious appraisal of the perceived stimulus (cf. Figure 3, top left). This process resides in the Cognition, because it is based on the evaluation of the goal-conduciveness of an event [34], and can be considered the "high road" of emotion elicitation [20]. Therefore, MAX exploits his BDI-based cognitive reasoning abilities to update his memory and generate expectations. These deliberative processes not only enable MAX to derive his subjective level of Dominance from the situational and social context, but also propose cognitively plausible secondary emotions. These secondary emotions are, however, first filtered in PAD space before MAX might become aware of them (cf. Figure 3, middle). Independent of this "awareness filter", every cognitively plausible secondary emotion influences the Emotion dynamics component of the WASABI architecture, thereby modulating MAX's Pleasure and Arousal values, i.e. his simulated Embodiment. This influence is achieved by interpreting the valence component of any secondary emotion as an emotional impulse (cf. Figure 3, left).


Fig. 3. The conceptual distinction of cognition and embodiment in the WASABI architecture

This way, secondary emotions "utilize the machinery of primary emotions" [9], because they might result in the elicitation of mood-congruent primary emotions, which in the WASABI architecture drive MAX's facial expressions involuntarily. Furthermore, as the Pleasure and Arousal values also incessantly modulate MAX's involuntary behaviors (i.e. breathing and eye blinking), even "unaware" secondary emotions have an effect on MAX's bodily state and involuntary behavior. In combination with the actual level of Dominance, primary emotions are elicited by means of a distance metric in PAD space. As mentioned before, these primary emotions directly drive MAX's facial expressions. Although this automatism might be considered unnatural for an adult, it has proven applicable and believable in the situational contexts in which MAX has been integrated so far.

After the awareness filter has been applied, the resulting set of aware emotions consists of primary and secondary emotions together with their respective awareness likelihoods. They are finally subject to further deliberation and reappraisal, resulting in different coping behaviors. In the card game scenario the direct vocal and facial expression of negative emotions is sufficient to let the human players play in accordance with the rules.

4 Discussion and Conclusion

We presented the WASABI architecture for mood-congruent simulation of primary and secondary emotions as it is integrated in, and makes use of, the overall cognitive architecture of our virtual human MAX.


The simulation and direct expression of primary emotions is based on the idea of capturing an agent's bodily feeling state as a continuous progression in three-dimensional emotion space (i.e. PAD space), which is only subsequently translated into weighted primary emotions. Secondary emotions, in contrast, are understood as ontogenetically later types of emotions, which require higher cognitive reasoning abilities and a certain sense of time, in that an agent has to be able to take experiences and expectations into account to generate prospect-based emotions. To also assure mood-congruency of secondary emotions, we roughly capture aspects of their connotative meanings in PAD space as well by introducing weighted areas. Furthermore, to account for the decisive influence of cognitive processes in the elicitation of secondary emotions, they can only gain a certain awareness likelihood in the PAD space of the embodiment after having been triggered by cognitive processes.

Although our approach has not yet been subject to an extended evaluation, some empirical evidence on the effect of secondary emotion simulation could already be gathered. A total of 23 participants played Skip-Bo either against MAX only simulating and expressing primary emotions (similar to the negative empathic condition in [28], n=11) or against MAX additionally simulating and verbally expressing secondary emotions (cf. Figure 2, n=12). As a result, MAX with primary and secondary emotions "in concert" was judged significantly older (mean value 27.5 years, standard deviation 7.5) than MAX with simulated primary emotions alone (mean value 19.8 years, standard deviation 7.7); a two-tailed t-test assuming unequal variances results in p = 0.025, and no significant effect of the participants' gender could be found.

In summary, we believe that the WASABI architecture is a helpful model to understand how the dynamic interplay of a human's body and mind, together with his past experiences and future expectations, sometimes turns "cold" cognitions into "hot" affective states.
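The reported comparison can be checked from the summary statistics alone using Welch's t-test; the sketch below feeds the values given above into SciPy's ttest_ind_from_stats.

    # Re-deriving the age-judgment comparison from the reported summary
    # statistics with Welch's t-test (unequal variances), as stated in the text.
    from scipy.stats import ttest_ind_from_stats

    result = ttest_ind_from_stats(
        mean1=27.5, std1=7.5, nobs1=12,   # MAX with primary + secondary emotions
        mean2=19.8, std2=7.7, nobs2=11,   # MAX with primary emotions only
        equal_var=False,                  # Welch's t-test
    )
    print(result)  # the p-value comes out at roughly 0.025, matching the paper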

References

1. André, E., Klesen, M., Gebhard, P., Allen, S., Rist, T.: Integrating models of personality and emotions into lifelike characters. In: Proceedings International Workshop on Affect in Interactions - Towards a New Generation of Interfaces, pp. 136–149 (1999)
2. Becker, C., Kopp, S., Wachsmuth, I.: Simulating the emotion dynamics of a multimodal conversational agent. In: André, E., Dybkjær, L., Minker, W., Heisterkamp, P. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 154–165. Springer, Heidelberg (2004)
3. Becker, C., Kopp, S., Wachsmuth, I.: Why emotions should be integrated into conversational agents. In: Nishida, T. (ed.) Conversational Informatics: An Engineering Approach, November 2007, ch. 3, pp. 49–68. Wiley, Chichester (2007)
4. Becker, C., Wachsmuth, I.: Modeling primary and secondary emotions for a believable communication agent. In: Reichardt, D., Levi, P., Meyer, J.-J.C. (eds.) Proceedings of the 1st Workshop on Emotion and Computing, Bremen, pp. 31–34 (2006)


5. Becker-Asano, C.: WASABI: Affect Simulation for Agents with Believable Interactivity. PhD thesis, AI Group, University of Bielefeld (to appear, 2008)
6. Becker-Asano, C., Kopp, S., Pfeiffer-Lemann, N., Wachsmuth, I.: Virtual Humans Growing up: From Primary Toward Secondary Emotions. KI Zeitschrift (German Journal of Artificial Intelligence) 1, 23–27 (2008)
7. Breazeal, C.: Emotion and sociable humanoid robots. International Journal of Human-Computer Studies 59, 119–155 (2003)
8. Cassell, J., Sullivan, J., Prevost, S., Churchill, E.: Embodied Conversational Agents. The MIT Press, Cambridge (2000)
9. Damasio, A.: Descartes' Error: Emotion, Reason and the Human Brain. Grosset/Putnam (1994)
10. Damasio, A.: Looking for Spinoza: Joy, Sorrow, and the Feeling Brain. Harcourt (2003)
11. Ekman, P.: Facial expressions. In: Handbook of Cognition and Emotion, ch. 16, pp. 301–320. John Wiley, Chichester (1999)
12. Ekman, P., Friesen, W., Ancoli, S.: Facial signs of emotional experience. Journal of Personality and Social Psychology 29, 1125–1134 (1980)
13. El-Nasr, M.S., Yen, J., Ioerger, T.R.: FLAME - Fuzzy Logic Adaptive Model of Emotions. Autonomous Agents and Multi-Agent Systems 3(3), 219–257 (2000)
14. Feldman Barrett, L.: Feeling is perceiving: Core affect and conceptualization in the experience of emotion. In: The Unconscious in Emotion, ch. 11, pp. 255–284. Guilford Press, New York (2005)
15. Gebhard, P.: ALMA - A Layered Model of Affect. In: Autonomous Agents & Multi Agent Systems, pp. 29–36 (2005)
16. Gebhard, P., Klesen, M., Rist, T.: Coloring multi-character conversations through the expression of emotions. In: André, E., Dybkjær, L., Minker, W., Heisterkamp, P. (eds.) ADS 2004. LNCS (LNAI), vol. 3068, pp. 128–141. Springer, Heidelberg (2004)
17. Gehm, T.L., Scherer, K.R.: Factors determining the dimensions of subjective emotional space. In: Scherer, K.R. (ed.) Facets of Emotion, ch. 5. Lawrence Erlbaum Associates, Mahwah (1988)
18. Holodynski, M., Friedlmeier, W.: Development of Emotions and Emotion Regulation. Springer, Heidelberg (2005)
19. Itoh, K., Miwa, H., Nukariya, Y., Zecca, M., Takanobu, H., Roccella, S., Carrozza, M., Dario, P., Takanishi, A.: Behavior generation of humanoid robots depending on mood. In: 9th International Conference on Intelligent Autonomous Systems (IAS-9), pp. 965–972 (2006)
20. LeDoux, J.: The Emotional Brain. Touchstone, Simon & Schuster (1996)
21. Marinier, R., Laird, J.: Toward a comprehensive computational model of emotions and feelings. In: International Conference on Cognitive Modeling (2004)
22. Marinier, R.P., Laird, J.E.: Computational modeling of mood and feeling from emotion. In: CogSci, pp. 461–466 (2007)
23. Marsella, S., Gratch, J.: EMA: A computational model of appraisal dynamics. In: European Meeting on Cybernetics and Systems Research (2006)
24. Neumann, R., Seibt, B., Strack, F.: The influence of mood on the intensity of emotional responses: Disentangling feeling and knowing. Cognition & Emotion 15, 725–747 (2001)
25. Ortony, A., Clore, G.L., Collins, A.: The Cognitive Structure of Emotions. Cambridge University Press, Cambridge (1988)


26. Ortony, A., Norman, D., Revelle, W.: Affect and proto-affect in effective functioning. In: Fellous, J., Arbib, M. (eds.) Who Needs Emotions: The Brain Meets the Machine, pp. 173–202. Oxford University Press, Oxford (2005)
27. Picard, R.W.: Affective Computing. The MIT Press, Cambridge (1997)
28. Prendinger, H., Becker, C., Ishizuka, M.: A study in users' physiological response to an empathic interface agent. International Journal of Humanoid Robotics 3(3), 371–391 (2006)
29. Prendinger, H., Ishizuka, M. (eds.): Life-Like Characters. Tools, Affective Functions, and Applications. Cognitive Technologies. Springer, Heidelberg (2004)
30. Rao, A., Georgeff, M.: Modeling Rational Agents within a BDI-architecture. In: Allen, J., Fikes, R., Sandewall, E. (eds.) Proc. of the Intl. Conference on Principles of Knowledge Representation and Reasoning, pp. 473–484. Morgan Kaufmann Publishers Inc., San Mateo (1991)
31. Russell, J., Mehrabian, A.: Evidence for a three-factor theory of emotions. Journal of Research in Personality 11, 273–294 (1977)
32. Russell, J.A., Feldman Barrett, L.: Core affect, prototypical emotional episodes, and other things called emotion: Dissecting the elephant. Journal of Personality and Social Psychology 76(5), 805–819 (1999)
33. Scherer, K.R.: On the nature and function of emotion: A component process approach. In: Scherer, K., Ekman, P. (eds.) Approaches to Emotion, pp. 293–317. Lawrence Erlbaum, Mahwah (1984)
34. Scherer, K.R.: Appraisal considered as a process of multilevel sequential checking. In: Scherer, K.R., Schorr, A., Johnstone, T. (eds.) Appraisal Processes in Emotion, ch. 5. Oxford University Press, Oxford (2001)
35. Scherer, K.R.: Unconscious processes in emotion: The bulk of the iceberg. In: Niedenthal, P., Feldman Barrett, L., Winkielman, P. (eds.) The Unconscious in Emotion. Guilford Press, New York (2005)
36. Sloman, A., Chrisley, R., Scheutz, M.: The architectural basis of affective states and processes. In: Who Needs Emotions? Oxford University Press, Oxford (2005)
37. Wundt, W.: Vorlesungen über die Menschen- und Tierseele. Voss Verlag, Leipzig (1922/1863)

User Study of AffectIM, an Emotionally Intelligent Instant Messaging System

Alena Neviarouskaya¹, Helmut Prendinger², and Mitsuru Ishizuka¹

¹ University of Tokyo, Department of Information and Communication Engineering, Japan ([email protected], [email protected])
² National Institute of Informatics, Japan ([email protected])

Abstract. Our research addresses the tasks of recognition, interpretation and visualization of affect communicated through text messaging. In order to facilitate sensitive and expressive interaction in computer-mediated communication, we previously introduced a novel syntactical rule-based approach to affect recognition from text. The evaluation of the developed Affect Analysis Model showed promising results regarding its capability to accurately recognize affective information in text from an existing corpus of informal online conversations. To enrich the user's experience in online communication and make it enjoyable, exciting and fun, we implemented a web-based IM application, AffectIM, and endowed it with emotional intelligence by integrating the developed Affect Analysis Model. This paper describes the findings of a twenty-person study conducted with our AffectIM system. The results of the study indicated that an automatic emotion recognition function can bring a high level of affective intelligence to an IM application.

Keywords: Affective sensing from text, affective user interface, avatar, emotions, online communication, user study.

1 Introduction and Motivation

The essential role of emotions in social life is manifested by the rich history of theories and debates about emotions and their nature. Recently, the task of recognizing affective content conveyed through written language has been gaining increased attention from researchers interested in studying different kinds of affective phenomena, including sentiment analysis, subjectivity and emotions. In order to analyse affect communicated through written language, researchers in the area of natural language processing have proposed a variety of approaches, methodologies and techniques [2,8-11]. Advanced approaches to textual affect recognition at the sentence level are described in [1,3,4]. The lexical, grammatical approach introduced by Mulder et al. [4] focused on the propagation of affect towards an object. Boucouvalas [1] developed the Text-to-Emotion Engine based on word tagging and analysis of sentences. An approach for understanding the underlying semantics of language using large-scale real-world commonsense knowledge was proposed by Liu et al. [3], who incorporated the resulting affect sensing engine into an affectively responsive email composer called EmpathyBuddy. Peris et al. [7] argue that online chats may stimulate rather than inhibit social relations, and chat users seem to find in them a medium for rich, intense, and interesting experiences.

The motivation behind our research is to enrich the social interactivity and emotional expressiveness of real-time messaging, where a machine is used as a communication channel connecting people and transmitting human emotions. Here, a key issue is to automate the multiple expressive means so that the user does not have to worry about visual self-presentation, as in standard Instant Messaging (IM) systems, but can focus on the textual content of the conversation. While constructing our Affect Analysis Model we took into account crucial aspects of informal online conversation such as its specific style and evolving language [5].

The remainder of the paper is structured as follows. In Section 2 we briefly introduce the developed Affect Analysis Model. We describe the IM application integrated with the Affect Analysis Model and analyse the results of a user study in Section 3 and Section 4, respectively. In Section 5 we conclude the paper.

2 Rule-Based Approach to Affect Sensing from Text

We proposed a rule-based approach to affect sensing from text at the sentence level (details are given in [6]). The algorithm for the analysis of affect in text consists of five stages: (i) symbolic cue analysis, (ii) syntactical structure analysis, (iii) word-level analysis, (iv) phrase-level analysis, and (v) sentence-level analysis. The salient features of this algorithm are: (1) analysis of nine emotions and five communicative functions at the level of individual sentences; (2) the ability to handle the evolving language of online communications; (3) foundation in an affect database; (4) vector representation of affective features of words, phrases, clauses and sentences; (5) consideration of syntactic relations in a sentence; (6) analysis of negation, modality, and conditionality; (7) consideration of relations between clauses in compound, complex, or complex-compound sentences; and (8) emotion intensity estimation.

An empirical evaluation of the Affect Analysis Model algorithm [6] showed promising results regarding its capability to accurately classify affective information in text from an existing corpus of informal online communication. In a study based on blog entries, the system result agreed with at least two out of three human annotators in 70% of the cases.
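As a rough illustration of how such a staged, rule-based analysis can be organized, the sketch below implements a tiny subset of the idea: word-level affect vectors, a crude negation rule, and intensity scaling at the sentence level. The lexicon entries, emotion set, and rules are invented for this example and are far simpler than the Affect Analysis Model and its affect database.

```python
# Minimal sketch of a rule-based, sentence-level affect estimator in the spirit
# of the Affect Analysis Model; the lexicon, emotion set, and rules here are
# illustrative stand-ins, not the authors' database or grammar.
EMOTIONS = ("joy", "sadness", "anger")          # the full model uses nine

AFFECT_LEXICON = {                              # word -> (emotion vector, intensity)
    "happy":   ((1.0, 0.0, 0.0), 0.8),
    "glad":    ((1.0, 0.0, 0.0), 0.6),
    "sad":     ((0.0, 1.0, 0.0), 0.7),
    "furious": ((0.0, 0.0, 1.0), 0.9),
}
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "really": 1.3, "slightly": 0.6}

def analyze_sentence(sentence):
    """Return an emotion-intensity vector for one sentence."""
    tokens = sentence.lower().strip("!?.").split()
    scores = [0.0] * len(EMOTIONS)
    scale, negate = 1.0, False
    for tok in tokens:
        if tok in NEGATIONS:
            negate = True
        elif tok in INTENSIFIERS:
            scale *= INTENSIFIERS[tok]
        elif tok in AFFECT_LEXICON:
            vec, intensity = AFFECT_LEXICON[tok]
            weight = 0.0 if negate else intensity * scale   # crude negation rule
            scores = [s + v * weight for s, v in zip(scores, vec)]
            scale, negate = 1.0, False
    return dict(zip(EMOTIONS, scores)) if sum(scores) else {"neutral": 1.0}

print(analyze_sentence("I am really happy today"))   # joy-dominated vector
print(analyze_sentence("I am not happy"))            # neutral under this crude rule
```

The real model additionally parses syntactic structure, handles modality and conditionality, and combines clause-level vectors; the sketch only shows how word-level vectors and simple rules can be aggregated into a sentence-level result.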

3 Instant Messaging Application Integrated with the Affect Analysis Model

AffectIM, an Instant Messaging system with emotional intelligence, was developed as a web-based application running in the Internet browser. Within our research project, we could design only two avatars, one male and one female, so the graphical representation is automatically selected by the system according to the user's sex.


The main window of the AffectIM system during an online conversation is shown in Fig. 1. From the list of friends displayed in the left frame, the user selects the person (available online) whom he or she wishes to communicate with. The central frame allows the user to type and send messages. It displays the conversation flow in three modes: plain, transcribed, and with emotions. Further, it displays emotional avatars (the user's own to the left of the conversation field, and the friend's to the right). Two buttons located under the avatar animation refer to the visualization of emotion distribution (either in a color bar or a pie graph) and emotion dynamics (line graph). Since the language of online communication is constantly evolving, AffectIM also provides the functionality to add new abbreviations, acronyms, and emoticons to the Affect database (see the two buttons located to the left of the input text field).

Fig. 1. AffectIM interface

4 User Study of the AffectIM System

The purpose of the user study was to evaluate the "richness of experience" and "affective intelligence" of our AffectIM system. We hypothesized that user experience and the effectiveness of the communication of emotions may benefit from the introduction of an automatic emotion recognition function and emotionally expressive avatars to an IM application, as opposed to manual selection of emotional behavior or an uninformed, random display of affect. Our hypotheses are tested by considering the following dimensions of the users' experience: (1) interactivity, (2) involvement (engagement), (3) sense of copresence, (4) enjoyment, (5) affective intelligence, and (6) overall satisfaction. In addition to these main criteria, we asked participants to give us feedback on some general questions.

4.1 Experimental Design, Subjects and Procedure

The experiment was designed as a within-subjects experiment in pairs. In particular, we compared three AffectIM interfaces using different configuration conditions. For the user study, we prepared three versions of the system:

1. Automatic (A-condition). In this interface, affect sensing from text is performed based on the developed Affect Analysis Model, and the recognized emotions are conveyed by the avatar expressions.
2. Manual (M-condition). During this condition, no automatic emotion recognition from text is performed; however, users may select the emotion (and its intensity) to be shown by the avatars using "select pop-up menus".
3. Random (R-condition). Here, the avatars show a 'quasi-random' reaction. First, we process each sentence using the Affect Analysis Model, and then we apply two rules: (1) if the output is emotional, we run two functions that randomly select an emotion out of the nine available emotions and its intensity, respectively; (2) in the case of "neutral" output, a function generates the "neutral" emotion with a probability of 60% or a "random" emotion with a probability of 40% (see the code sketch below).
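The following sketch spells out the quasi-random rule of the R-condition. The nine emotion labels follow the ones used elsewhere in the paper; the function itself is an illustrative reconstruction, not the AffectIM code.

```python
# Sketch of the quasi-random (R-condition) avatar reaction, assuming the
# Affect Analysis Model returns either "neutral" or one of nine emotion labels;
# function and label names are illustrative, not the AffectIM implementation.
import random

EMOTIONS = ["anger", "disgust", "fear", "guilt", "interest",
            "joy", "sadness", "shame", "surprise"]

def quasi_random_reaction(recognized_emotion):
    """Map the Affect Analysis Model output to the emotion shown by the avatar."""
    if recognized_emotion != "neutral":
        # Rule 1: replace an emotional output by a random emotion and intensity.
        return random.choice(EMOTIONS), random.uniform(0.0, 1.0)
    # Rule 2: keep "neutral" with probability 0.6, otherwise show a random emotion.
    if random.random() < 0.6:
        return "neutral", 0.0
    return random.choice(EMOTIONS), random.uniform(0.0, 1.0)

print(quasi_random_reaction("joy"))      # e.g. ('fear', 0.42)
print(quasi_random_reaction("neutral"))  # mostly ('neutral', 0.0)
```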

It is important to note that in each of the three interfaces the five communicative functions are automatically recognized and shown by the avatars. In other words, the occurrence of communicative behavior is not varied across the three experimental conditions.

Twenty university students and staff (10 males, 10 females) took part in our study. All of them were computer literate, and 19 had prior experience with a computer-based chat or Instant Messaging system. Each pair of participants was composed of a male and a female subject. Before the IM session, all participants were given instructions and their AffectIM IDs and passwords. Each pair of participants was asked to have online conversations through the three interfaces, given in random order. After each interface condition, users filled in the corresponding page of the questionnaire and commented on their experience. After the participants had completed the IM communication about the three topics and the corresponding questionnaire, they were asked to answer some general questions about their experience with the IM system.

4.2 Analysis of Results

The average duration of the sessions on each interface was 10.1 minutes (minimum 8 and maximum 12.5 minutes), excluding the time needed to fill out the questionnaires. The 11 questions on the main criteria were answered on a 7-point Likert agreement scale. Since our study involved each subject being measured under each of the three conditions, we analyzed the data using a two-factor ANOVA without replication at a significance level of p < 0.05 (a computational sketch of this test is given below).

The interactivity was measured using the statement "The system was interactive". Subjects tended to consider the condition which allowed them to manipulate the expressed emotion manually as the most interactive. However, the ANOVA showed no significant difference in interactivity among the three interfaces.

The involvement was evaluated using two questionnaire items: "I felt it was important for my conversation partner that I responded after each his/her statement" and "I was awaiting the replies of my conversation partner with true interest". The ANOVA results showed that the reported involvement for the three systems does not differ significantly, indicating that the level of engagement was almost the same.

The following two questionnaire items, covering the aspects of space and togetherness, are intended to evaluate the sense of copresence, or social presence: "I felt if I were communicating with my conversation partner in the shared virtual space" and "The system gave me the sense that the physical gap between us was narrowed". The ANOVA results for the first questionnaire item indicated a significant difference in the sense of copresence felt in the A-condition and the R-condition (p (R-A) < 0.05), and showed that the A-condition gave a stronger feeling of communicating in a shared virtual space than the R-condition. No significant difference among the three interfaces was found for the second statement.
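For completeness, here is one way to compute the two-factor ANOVA without replication used for these comparisons (rows are subjects, columns are the A-, M-, and R-conditions). The score matrix below is invented for illustration and is not the study's data.

```python
# Sketch of a two-factor ANOVA without replication (rows = subjects,
# columns = interface conditions), reporting F and p for the condition factor.
import numpy as np
from scipy.stats import f as f_dist

def anova_two_factor_no_rep(scores):
    """scores: (subjects x conditions) array of questionnaire ratings."""
    scores = np.asarray(scores, dtype=float)
    r, c = scores.shape
    grand = scores.mean()
    ss_rows = c * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_cols = r * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    df_cols, df_err = c - 1, (r - 1) * (c - 1)
    f_value = (ss_cols / df_cols) / (ss_err / df_err)
    p_value = f_dist.sf(f_value, df_cols, df_err)
    return f_value, p_value

# Hypothetical ratings of 5 subjects for the A-, M-, and R-conditions.
ratings = [[5, 5, 3], [6, 5, 4], [4, 5, 3], [6, 6, 4], [5, 4, 3]]
print(anova_two_factor_no_rep(ratings))   # F and p for the interface factor
```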


The level of enjoyment was evaluated using the statement "I enjoyed the communication using this IM system". High levels of enjoyment were reported for the A-condition and the M-condition. However, the ANOVA showed no significant differences among the three IM interfaces.

To evaluate affective intelligence, four statements (three directly related to the system and one indirectly related) were proposed to the subjects in the questionnaire: "The system was successful at conveying my feelings", "The system was successful at conveying my partner's feelings", "The emotional behavior of the avatars was appropriate", and "I understood the emotions of my communication partner". Fig. 2 shows bar graphs of the mean questionnaire results for these statements.

[Figure 2: four bar charts of mean scores (1–7) per interface (Automatic, Manual, Random) for the statements Q1 "The system was successful at conveying my feelings.", Q2 "The system was successful at conveying my partner's feelings.", Q3 "The emotional behavior of the avatars was appropriate.", and Q4 "I understood the emotions of my communication partner."]

Fig. 2. Questionnaire results on affective intelligence (Q1, Q2, Q3, Q4)

As seen from the bar graph for Q1 (Fig. 2), the systems in the A-condition and the M-condition (with slightly higher means in the M-condition) were both more successful at conveying the users' own feelings than the system in the R-condition. Since the M-condition is considered a "gold standard" for communicating a person's emotions, and the ANOVA showed no significant difference between the M-condition and the A-condition, we might say that the automatic emotion recognition system performed well enough to bring a high level of affective intelligence to the IM application. As expected, significant differences were found between the R-condition and the M-condition (p (R-M) < 0.05), and between the R-condition and the A-condition (p (R-A) < 0.01).

When evaluating how successful the interfaces were at conveying the conversation partner's feelings (see Q2 in Fig. 2), subjects gave the highest rating to the M-condition and the lowest to the R-condition. However, the ANOVA for this criterion resulted in no significant difference among the interfaces. One user's comment regarding the emotional reactions of the partner's avatar was: "I concentrated too much on the reactions of my avatar and not enough on that of my partner. Reading and thinking about the answer took away the concentration on the avatar".

Interesting results were observed for the evaluation of the appropriateness of the emotional behavior of the avatars. As seen from the graph (Q3 in Fig. 2) and the ANOVA statistics, the results for the A-condition and the M-condition significantly exceeded those for the R-condition (p (R-A) < 0.01 and p (R-M) < 0.01). Users' comments confirmed that during the R-condition subjects sometimes could not understand why the avatars did not correspond to their words and reacted in "wrong" ways. Although the A-condition was rated slightly higher than the M-condition, no significant difference was detected between these interfaces.

The statement "I understood the emotions of my communication partner" measured the affective intelligence of the system indirectly, since people typically derive emotional content from text based on semantic information and their empathetic abilities. Emotional expressions of avatars may help to understand the partner's emotions more clearly. As expected, the highest rating was reported in the M-condition, and the lowest in the R-condition, where participants might be confused, since the emotions shown by the avatar sometimes contradicted the actual emotional content (see Q4 in Fig. 2). However, no significant difference was found in the comprehension of the partner's emotions among the three interfaces. A possible explanation for these results might be that a person typically relies on his/her own affective intelligence rather than on the results of artificial affective intelligence. That is why the mean for the R-condition appeared relatively high.

The overall satisfaction from using the three AffectIM interfaces was evaluated using the statement "I am satisfied with the experience of communicating via this system". The average scores for the A-condition and the M-condition were equal (4.6), whereas less satisfaction was reported for the R-condition (4.25). The ANOVA showed no significant difference in overall satisfaction among the interfaces.

In addition to the main questionnaire items, participants were given general questions. Subjects were asked to associate nine emotion states with nine avatar expressions shown in still figures. The female avatar was shown to male subjects, while the male avatar was shown to female subjects. The percentages of reported correct associations within males and females are shown in Fig. 3. As seen from the graph, all 10 female subjects correctly associated the 'anger', 'joy', and 'shame' emotions, while all 10 male subjects completely agreed only on the 'joy' emotion. The most often confused pairs of emotions were 'fear' – 'surprise' and 'guilt' – 'sadness'; less often confused were 'guilt' – 'fear' and 'sadness' – 'fear'. Some participants confused the emotions in the 'interest' – 'joy' and 'surprise' – 'guilt' pairs. These results suggest that during the experiment some participants had difficulty correctly interpreting the emotional behavior of the avatars.

[Figure 3: bar chart of correct associations of emotion states with avatar expressions ("correct answer, %") per emotion – anger, disgust, fear, guilt, interest, joy, sadness, shame, surprise – for male and female subjects]
Fig. 3. Questionnaire results on emotions associated with avatar expressions


To the question "While online, do you use emoticons or abbreviations?", 19 subjects answered positively. We examined all automatically recorded dialogs and found that the majority of participants used abbreviated language to some degree. The participants' comments and the answers to the question "To what degree do you think is necessary to look at a graphical representation of the other communicating person?" suggest that there are two types of IM users: (1) some are open to new features of IM and find an animated graphical representation of a person helpful for understanding the partner's emotions and for giving some sense of physical presence; (2) others tend to concentrate their attention on content and prefer small emotional symbolic cues, like emoticons, to avatar expressions.

Participants were also asked to indicate whether the manual selection of emotion state and intensity was helpful or not during the M-condition. Only 30% of males and 60% of females answered positively. The answers to the question "How often did you use this function, when you wanted?" are represented as a bar graph in Fig. 4. As seen from these data, female subjects used the emotion selection function more ardently than male subjects.

[Figure 4: bar chart of reported frequency of usage of the emotion "select menu" (never, rarely, sometimes, often, always; percent), for male and female subjects]
Fig. 4. Questionnaire results on frequency of usage of emotion "select menu"

The user opinions regarding the emotion "select menu" were very diverse. Some users criticized the type of pop-up menu, commenting that it was difficult to use and took a long time to select from. For more convenience, they proposed replacing the pop-up menus with icons and spreading them out. One of the subjects complained that the emotion select menu disturbed the flow of the chat. Another reported problem is that, since there is no preview of what the emotion expression looks like, it is unclear whether it matches the user's intention. Some subjects felt that basic emotions are too general and not sufficient to convey emotion in many cases. They also suggested providing the possibility of showing more differentiated or even mixed emotions (some state between sadness and joy). However, we think that displaying mixed emotional expressions would add more confusion and misinterpretation to the conversations. Some subjects underlined positive aspects of the manual selection of emotion states. They found this function helpful because (1) it offered the possibility to visually express feelings and to better understand them, (2) it allowed preventing inappropriate emotional reactions of the avatar, and (3) it guaranteed the accuracy of the communicated emotion. We can conclude that for sensitive conversations users would prefer manual control, to avoid system mistakes that could sometimes harm the conversation.

5 Discussion and Conclusions

We implemented a web-based IM application, AffectIM, and endowed it with emotional intelligence by integrating our Affect Analysis Model. The user study conducted on AffectIM showed that the IM system with the automatic emotion recognition function was successful at conveying users' emotional states during online communication, thus enriching the expressivity and social interactivity of online communications. From the experiment we learned that the IM application might benefit from integrating automatic emotion sensing with manual control of the avatars' emotional behavior in one interface, which would allow users to select between the two modes depending on the type and sensitivity of the conversation.

While analyzing the recorded conversations from our study, we detected some misspelled emotion-related words: "feiled" instead of "failed", "promissing" instead of "promising", etc. In our future work, we plan to add correction of misspelled words to the system. Moreover, we aim to study cultural differences in perceiving and expressing emotions, and to integrate a text-to-speech engine with emotional intonations into the developed IM application.

Acknowledgments. We acknowledge and thank Dr. Ulrich Apel and Dr. Boris Brandherm for their efforts and help in organizing the user study on AffectIM. We also wish to express our gratitude to all the participants of the experiment.

References

1. Boucouvalas, A.C.: Real Time Text-to-Emotion Engine for Expressive Internet Communications. In: Being There: Concepts, Effects and Measurement of User Presence in Synthetic Environments, pp. 306–318. IOS Press, Amsterdam (2003)
2. Kim, S.-M., Hovy, E.: Automatic Detection of Opinion Bearing Words and Sentences. In: Proceedings of IJCNLP 2005 (2005)
3. Liu, H., Lieberman, H., Selker, T.: A Model of Textual Affect Sensing using Real-World Knowledge. In: Proceedings of IUI 2003, pp. 125–132 (2003)
4. Mulder, M., Nijholt, A., den Uyl, M., Terpstra, P.: A Lexical Grammatical Implementation of Affect. In: Proceedings of the 7th International Conference on Text, Speech and Dialogue, pp. 171–178. Springer, Berlin (2004)
5. Neviarouskaya, A., Prendinger, H., Ishizuka, M.: Analysis of Affect Expressed through the Evolving Language of Online Communication. In: Proceedings of IUI 2007, pp. 278–281. ACM Press, New York (2007)
6. Neviarouskaya, A., Prendinger, H., Ishizuka, M.: Textual Affect Sensing for Sociable and Expressive Online Communication. In: Paiva, A.C.R., Prada, R., Picard, R.W. (eds.) ACII 2007. LNCS, vol. 4738, pp. 220–231. Springer, Heidelberg (2007)
7. Peris, R., Gimeno, M.A., Pinazo, D., et al.: Online Chat Rooms: Virtual Spaces of Interaction for Socially Oriented People. CyberPsychology and Behavior 5(1), 43–51 (2002)
8. Strapparava, C., Valitutti, A., Stock, O.: Dances with Words. In: Proceedings of IJCAI 2007, Hyderabad, India, pp. 1719–1724 (2007)
9. Subasic, P., Huettner, A.: Affect Analysis of Text Using Fuzzy Semantic Typing. IEEE Transactions on Fuzzy Systems 9(4), 483–496 (2001)
10. Turney, P.D.: Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. In: Proceedings of ACL 2002, USA (2002)
11. Wilson, T., Wiebe, J., Hoffmann, P.: Recognizing Contextual Polarity in Phrase-level Sentiment Analysis. In: Proceedings of HLT/EMNLP 2005, Vancouver, Canada (2005)

Expressions of Empathy in ECAs

Radoslaw Niewiadomski¹, Magalie Ochs², and Catherine Pelachaud¹,³

¹ IUT de Montreuil, Université Paris 8, France ([email protected])
² Université Paris 6, France ([email protected])
³ INRIA Paris-Rocquencourt, France ([email protected])

Abstract. Recent research has shown that empathic virtual agents can improve human-machine interaction. Virtual agents' expressions of empathy are generally fixed intuitively and are not evaluated. In this paper, we propose a novel approach to the expression of empathy using complex facial expressions such as superposition and masking. An evaluation study has been conducted in order to identify the most appropriate way to express empathy. According to the evaluation results, people find facial expressions that contain elements of the empathic emotion more suitable. In particular, complex facial expressions seem to be a good approach to expressing empathy.

Keywords: ECA, empathy, facial expressions.

1 Introduction

Recent research has shown that virtual agents expressing empathic emotions toward users have the potential to enhance human-machine interaction [1,2,3]. Empathy is commonly defined as the capacity to "put yourself in someone else's shoes to understand her emotions" [4]. Several empathic virtual agents have already been developed, and they express empathy in different ways. For instance, in [5], the virtual agent uses specific sentences (such as "I am sorry that you seem to feel a bit bad about that question"). In [6], facial expression is used, under the assumption that empathic facial expressions of emotion are similar to expressions of a felt emotion of the same type. However, virtual agents' expressions of empathy are generally fixed intuitively and are not evaluated.

In this paper, we evaluate our model that enables an ECA to display empathic emotions. Compared to other expressive empathic agents (e.g. [6,7]), our agent uses two types of facial expressions: simple and complex ones. By simple facial expressions we mean spontaneous facial displays of emotional states (which can be described by a one-word label), e.g. a display of anger or contempt. The term complex facial expressions [8,9] describes expressions that are combinations of several simple facial displays (e.g. the superposition of two emotions) or that are modified voluntarily by the displayer (e.g. the masking of one emotion by another one). Aiming at finding the appropriate facial expression for an empathic ECA, we examine both types of expressions in empathic situations. In Section 2 we describe our architecture that computes the agent's empathic emotion and the expression that it will display, while Section 3 reports the results of the evaluation study we conducted.

2 Architecture of Empathic Agent

The process of generating facial expressions in our empathic agent is the following. It takes as input a user's sentence. The dialog engine parses it and computes an answer for the agent. After the evaluation of what the user said, the Emotion Module computes which emotions are triggered for the agent. Finally, the Facial Expressions Module computes the expression to be displayed by the agent, called Greta [10]. The architecture is composed of three main modules:

1. The Dialog Engine is based on the JSA (Jade Semantics Agent) framework [11], which is open-source [12]. The JSA framework enables one to implement BDI-like dialog agents. Such agents are able to interpret the received message and to respond to it automatically. The dialog engine contains a representation of the goals and beliefs of the agent and those of its interlocutor. Depending on the received messages and on the agent's responses, the dialog engine updates the goals and beliefs.

2. Based on the goals and beliefs of both interlocutors (the agent and its interlocutor), the Emotion Module computes the agent's emotional state [6]. Two kinds of emotions are computed: egocentric emotions and empathic emotions. The former correspond to the emotions of the agent, given its own goals (except altruistic goals) and beliefs. The empathic emotions are elicited according to the goals and beliefs of the agent's interlocutor. For instance, an agent may be happy for its interlocutor because one of the interlocutor's goals was achieved (empathic emotion) and sad because one of its own goals just failed (egocentric emotion). The egocentric and empathic emotions are determined based on a model of emotion elicitation in which the conditions of emotion elicitation are described in terms of beliefs and goals [6].

3. The Facial Expressions Module computes the resulting facial expression, which can be either simple or complex. The simple expressions are defined manually [13]. Complex facial expressions are generated based on a face partitioning approach and on Paul Ekman's studies [13,14,15,16]. Each facial expression is defined by a set of eight facial areas. An expression is a composition of these facial areas, each of which can display signs of emotion. For complex facial expressions, different emotions can be expressed on different areas of the face; e.g., in sadness masked by happiness, sadness is shown on the eyebrows area while happiness is displayed on the mouth area (for details see [8,9,17]).
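The face-partitioning idea behind the Facial Expressions Module can be sketched as follows. The area names and the particular assignment of areas to emotions are assumptions made for this example; the actual model derives them from Ekman's work and the cited studies.

```python
# Illustrative sketch (not the Greta implementation) of face partitioning:
# a complex expression assigns each facial area the signal of one of the two
# input emotions. Area names and assignment rules are assumptions.
FACE_AREAS = ("brows", "upper_eyelids", "lower_eyelids", "eyes",
              "nose", "cheeks", "mouth", "chin")

def compose(emotion_a, emotion_b, areas_for_a):
    """Areas in `areas_for_a` show emotion_a; the remaining areas show emotion_b."""
    return {area: (emotion_a if area in areas_for_a else emotion_b)
            for area in FACE_AREAS}

# Superposition of sadness and happiness (assumed area split):
superposed = compose("sadness", "happiness", {"brows", "upper_eyelids", "eyes"})

# Sadness masked by happiness: sadness leaks on the eyebrows area while
# happiness is displayed on the mouth area, as in the example in the text.
masked = compose("sadness", "happiness", {"brows"})
print(masked)
```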


3 Evaluation

The main aim of this evaluation study is to verify how empathy can be expressed by an ECA. Our hypothesis is that subjects would evaluate an agent that expresses empathic emotions as more appropriate than one that expresses its egocentric emotion. Moreover, we want to check which type of expression is more suitable: the simple expression of the empathic emotion, or a complex facial expression combining the egocentric and empathic emotions. We are also interested in whether these results depend on the pair of emotions used.

3.1 Scenario Set-Up

Our evaluation study consists in presenting to subjects a set of scenarios and a set of animations of facial expressions of our agent. The subjects have to choose the facial expressions that are the most appropriate to the situations described in the scenarios.

Emotions. For the purpose of the experiment we wrote (in French) nine very short stories (scenarios SC1 - SC9) that were displayed to the subjects during the experiment. Each of them describes events or situations which are intended to elicit unambiguous emotional states. In the experiment we focused on three emotional states. Our intention was to use emotions whose expressions are very different from one another, so that they could not be confused by the subjects. At the same time we wanted to evaluate the appropriateness of expressions of negative and positive empathic emotions. For that purpose we chose one positive emotion, joy, and two very different negative emotions, anger and sadness.

Scenarios. The first three scenarios (SC1 - SC3) served only to show some facial expressions of Greta to subjects who probably had not seen any expressive ECA previously. These scenarios were not considered in the final result. The evaluation part consists of the remaining six scenarios (SC4 - SC9). They are short dialogues between two agents, Greta and Lucie, who are presented to the subjects as close friends. In each scenario they meet and tell each other about events that have happened to them recently. Each scenario concerns a different pair of emotions. Thus (at least at the beginning of the scenario) the emotions elicited in Greta are different from the emotions elicited in Lucie. However, the scenarios were designed to encompass situations where empathic emotions (can) occur. For example, scenario SC4 concerns the emotions of joy (Greta) and sadness (Lucie). In this scenario the following dialog is presented to the participants of the experiment:

Greta et Lucie se rencontrent dans un parc. (Greta and Lucie meet in a park.)
Greta: Salut Lucie ! Je reviens de l'Université : j'ai eu l'examen ! Et toi ? (Hi Lucie! I have just come back from the university: I passed the exam! And you?)
Lucie: Salut Greta ! Oh, et bien moi je viens d'apprendre que je ne l'ai pas eu... (Hi Greta! Oh, well, I have just learned that I did not pass it...)
Greta: Oh!


The six scenarios SC4 - SC9 correspond to all combinations of pairs of the three emotions: anger, joy, and sadness. For each scenario SCi there is a complementary scenario SCj in which the emotions of the two protagonists (Greta and Lucie) are exchanged.

Facial expressions. The output of the Emotion Module is composed of two labels of emotional states that correspond to the egocentric emotion and the empathic emotion. In order to evaluate the facial expressions of our agent we use different types of facial expressions. First of all, the simple facial expressions of both emotions are used. We also use all plausible combinations of these emotions. Two types of complex facial expressions concern two emotional states at once: superposition and masking. Thus:

– the egocentric emotion and the empathic emotion can be superposed,
– the egocentric emotion can be masked by a fake expression of the empathic emotion,
– the empathic emotion can be masked by a fake expression of the egocentric emotion.

The last combination is impossible in the context of our evaluation study (given how the scenarios are built, the egocentric emotion cannot be fake). For this reason we use only the first two complex expressions in this study. As a consequence, for each scenario SC4 - SC9 we generated four animations, each of which shows Greta displaying one facial expression. They correspond to:

– EXP1 - the simple expression of the emotion Ei elicited by Greta's event (e.g. the expression of joy in SC4),
– EXP2 - the simple expression of Greta's empathic emotion Ej, i.e. the emotion that corresponds to the emotion elicited by Lucie's event (e.g. the expression of sadness in SC4),
– EXP3 - the expression of the superposition of emotions Ei and Ej (e.g. the superposition of joy and sadness in SC4),
– EXP4 - the expression of the masking of Ei by Ej (e.g. joy masked by sadness in SC4).

The expressions of type EXP3 and EXP4 were generated with the algorithm presented in Section 2.
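A compact way to enumerate these stimuli is sketched below; the constructors are placeholders standing in for the Greta animation pipeline, not the actual generation code.

```python
# Sketch: enumerate the four stimuli per scenario from the egocentric emotion
# Ei and the empathic emotion Ej (e.g. joy and sadness in SC4).
from itertools import permutations

def build_stimuli(ego_emotion, empathic_emotion):
    return {
        "EXP1": ("simple", ego_emotion),                        # egocentric only
        "EXP2": ("simple", empathic_emotion),                   # empathic only
        "EXP3": ("superposition", ego_emotion, empathic_emotion),
        "EXP4": ("masking", ego_emotion, empathic_emotion),     # Ei masked by Ej
    }

SCENARIO_PAIRS = list(permutations(("anger", "joy", "sadness"), 2))  # SC4 - SC9
stimuli = {pair: build_stimuli(*pair) for pair in SCENARIO_PAIRS}
print(stimuli[("joy", "sadness")])   # the four animations for scenario SC4
```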

3.2 Procedure

In our evaluation study we asked participants to choose the appropriate facial expressions of Greta for each scenario. To have access to a greater number of participants, we set up our experiment as a web application. One experiment session consists of going through 9 different web pages. Each of them presents one scenario and four videos. After reading the scenario on a web page, subjects can play the videos (see Figure 1). Participants view the videos in the order they wish and can re-view the animations at their convenience. In the six scenarios SC4 - SC9 they have to order the animations from the most appropriate to the least appropriate one. The animations, as well as the scenarios SCi, are displayed in a random order. Participation in the experiment was anonymous.


Fig. 1. The four animations displayed to the subjects (scenario SC4)

24 persons participated in the experiment. 44% of the participants were between 20 and 29 years old and 36% were between 30 and 39 years old. 11 participants were women and the other 13 were men.

3.3 Results

We analyzed 24 sets of answers for the scenarios SC4 - SC9. Two different criteria were applied. With the first criterion, we analyzed only how frequently each type of expression was chosen as the most adequate facial expression. With the second criterion, we considered the subjects' complete answers, i.e. the ordering of the animations. The results of both criteria are presented in Figure 2.

Fig. 2. The results for scenarios SC4-SC9 using both criteria


Fig. 3. The number of the most suitable expressions for each scenario

Under the criterion "the most adequate facial expression", the expression of type EXP2 (i.e. the simple expression of the empathic emotion) was most often chosen as the most appropriate display (33%). The expression of type EXP3 (superposition) was chosen nearly as often as EXP2 (31%). The expression of masking (EXP4) received 26%. The simple expression of the egocentric emotional state (EXP1) was least often evaluated as the most appropriate one (10%). The Friedman statistical test revealed a significant difference between expressions (p