The face is the key element for conveying emotion and plays an important role in both verbal and non-verbal communication. Many efforts have been made to teach people with Autism Spectrum Disorders (ASD) to recognize facial expressions, with varying results, but none has focused on real-time facial synthesis. Most methodologies follow Paul Ekman's approach based on photographs of facial expressions. Besides offering severely limited interactivity, they fail to reproduce the dynamics of a facial expression: far from being a still image, a facial expression is produced by the voluntary and involuntary contraction of muscles that generate different facial movements. These movements convey emotions from one individual to another, enabling non-verbal communication. Thus, an additional teaching method that incorporates facial motion is needed. To this end, it is necessary to study techniques that allow real-time facial synthesis. T-LIFE is designed to help people with ASD recognize facial expressions in a playful way. The key technological contributions of this project are: a real-time markerless facial motion capture system, a new facial expression analysis and classification method, and an immersive interaction model.
- Verónica Orvalho, Science Faculty of the University of Porto, Portugal
- Project introduction
- Autism Spectrum Disorder (ASD) overview
- How technology can help people with ASD
- Hands-on session: experiment with participants on recognizing facial expressions
- Project discussion