About The Event
Novel applications of affective computing have emerged in recent years in domains ranging from health care to 5G mobile networks. Many of these have achieved improved emotion classification performance by fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, and text). Multimodal affect recognition has the potential to revolutionize the way various industries and sectors use information gained from recognition of a person's emotional state, particularly given the flexibility in the choice of modalities and measurement tools (e.g., surveillance versus mobile device cameras). Multimodal classification methods have proven highly effective at reducing misclassification error in practice and under dynamic conditions. Further, multimodal classification models tend to be more stable over time than those relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition. To continue the field's lab-to-practice trend and encourage new applications of affective computing, this workshop provides a forum for the exchange of ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices.
Dr. Jeffrey Cohn is Professor of Psychology, Psychiatry, and Intelligent Systems at the University of Pittsburgh and Adjunct Professor of Computer Science at the Robotics Institute at Carnegie Mellon University. He leads interdisciplinary and inter-institutional efforts to develop advanced methods of automatic analysis and synthesis of facial expression and prosody and applies those tools to research in human emotion, social development, nonverbal communication, psychopathology, and biomedicine. His research has been supported by grants from the U.S. National Institutes of Health, National Science Foundation, Autism Foundation, Office of Naval Research, and Defense Advanced Research Projects Agency among other sponsors.
Dr. Arun Ross is the John and Eva Cillag Endowed Chair in Science and Engineering, and a Professor in the Department of Computer Science and Engineering at Michigan State University. He also serves as the Site Director of the NSF Center for Identification Technology Research (CITeR). He received the B.E. (Hons.) degree in Computer Science from BITS Pilani, India, and the M.S. and Ph.D. degrees in Computer Science and Engineering from Michigan State University. Ross was on the faculty of West Virginia University from 2003 to 2012, where he received the Benedum Distinguished Scholar Award for excellence in creative research and the WVU Foundation Outstanding Teaching Award. Ross is a recipient of the NSF CAREER Award and was designated a Kavli Fellow by the US National Academy of Sciences in 2006. He received the JK Aggarwal Prize in 2014 and the Young Biometrics Investigator Award in 2013 from the International Association for Pattern Recognition (IAPR). Ross's research interests include biometrics, computer vision, and machine learning. He has advocated for the responsible use of biometrics in multiple forums, including the NATO Advanced Research Workshop on Identity and Security in Switzerland in 2018. He is currently working with his research team on areas related to biometrics security and privacy, image forensics, cross-spectral biometrics, autonomous vehicles, and interpretable machine learning models.
|12:00-12:05||Opening Remarks||Canavan et al.|
|12:05-12:50||Keynote 1||Dr. Jeff Cohn|
|12:50-1:00||Nonverbal Behavioral Patterns Predict Social Rejection Elicited Aggression||Quarmley et al.|
|1:00-1:10||Learning a Privacy-Preserving Global Feature Set for Mood Classification Using Smartphone Activity and Sensor Data||King et al.|
|1:10-1:20||A Computational View of the Emotional Regulation of Disgust using Multimodal Sensors||Kota et al.|
|1:30-2:15||Keynote 2: Privacy of Face Images Using Semi-Adversarial AutoEncoders||Dr. Arun Ross|

This talk will focus on enhancing the privacy of individuals whose face images are acquired for a specific purpose such as biometric recognition or affective computing. The goal is to alter the face images such that the performance of certain attribute classifiers is negatively impacted while other classifiers are minimally impacted. For example, by strategically perturbing face images, the performance of automated gender classifiers can be confounded (so the gender attribute is subdued), while the perturbed images can still be used for biometric recognition by face matchers (so biometric utility is retained). We implement this type of "semi-adversarial" task by training an autoencoder that is judiciously supervised by attribute classifiers. The ensuing face autoencoder is referred to as "PrivacyNet" due to its ability to impart controllable privacy to face images. The talk will also present results on multiple face datasets indicating the pros and cons of the proposed approach.
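The semi-adversarial objective described in the abstract balances competing terms: stay close to the original image, confound the attribute classifier, and preserve matcher utility. As a toy illustration only (not the actual PrivacyNet formulation), one way such a composite loss could be written is sketched below; the squared-error terms, the 0.5-uncertainty target for the gender classifier, and the weight names are all illustrative assumptions.

```python
import numpy as np

def semi_adversarial_loss(x, x_perturbed, gender_prob, match_score,
                          w_recon=1.0, w_gender=1.0, w_match=1.0):
    """Toy composite loss for a semi-adversarial perturbation (illustrative only).

    x, x_perturbed : original and perturbed images as flat arrays
    gender_prob    : attribute classifier's probability on the perturbed image
    match_score    : face matcher's similarity score on the perturbed image, in [0, 1]
    """
    # Reconstruction term: the perturbed image should stay close to the original.
    l_recon = np.mean((np.asarray(x) - np.asarray(x_perturbed)) ** 2)
    # Attribute-suppression term: push the gender classifier toward maximal
    # uncertainty (probability 0.5), confounding the attribute.
    l_gender = (gender_prob - 0.5) ** 2
    # Matching term: keep the face matcher's score on the perturbed image high.
    l_match = (1.0 - match_score) ** 2
    return w_recon * l_recon + w_gender * l_gender + w_match * l_match

# A perturbation that confuses the gender classifier while preserving the
# match score incurs a lower loss than one leaving the attribute exposed.
x = np.ones(4)
confused = semi_adversarial_loss(x, x, gender_prob=0.5, match_score=0.95)
exposed = semi_adversarial_loss(x, x, gender_prob=0.95, match_score=0.95)
```

In the actual approach, an autoencoder would be trained to produce `x_perturbed`, with the attribute classifier and matcher acting as fixed auxiliary networks supervising the loss.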
|2:15-2:25||Multimodal Fusion of Physiological Signals and Facial Action Units for Pain Recognition||Hinduja et al.|
|2:25-2:35||Summarizing Driving Behavior to Support Driver Stress Analysis||Ortoncelli et al.|
|2:35-2:45||Mood Versus Identity: Studying the Influence of Affective States on Mobile Biometrics||Neal et al.|
|2:45-2:55||iSAM: Interactively Fine-Tuning an Affective Model of Immersive Media to Player Nuances||Elor et al.|
|2:55-3:00||Closing Remarks||Canavan et al.|
Call for Papers
AMAR 2020 gathers researchers working in the areas of affective computing, human-computer interaction, brain-computer interfaces, mental and digital health, behavioral sciences, cybersecurity, and other disciplines that have leveraged or can leverage automated emotion recognition through the fusion of multiple modalities. This workshop will discuss various ubiquitous sensing devices (e.g., brain, face, thermal, physiological, environmental, and positional) for decoding emotions in ways relevant to specific applications and domains. The workshop aims to expose current use cases for affective computing and emerging applications of affective computing to spark future work. Topics of interest include but are not limited to:
- Health applications with a focus on multimodal affect
- Multimodal affective computing for cybersecurity applications (e.g., biometrics and IoT security)
- Inter-correlations and fusion of ubiquitous multimodal data as they relate to applied emotion recognition (e.g., face and EEG data)
- Leveraging ubiquitous devices to create reliable multimodal applications for emotion recognition
- Applications of in-the-wild data vs. lab-controlled data
- Facilitation and collection of multimodal data (e.g., ubiquitous data) for applied emotion recognition
- Engineering applications of multimodal affect (e.g., robotics, social engineering, domain inspired hardware / sensing technologies, etc.)
Beyond the topics listed above, additional applications include but are not limited to:
- Quantified-Self and Self-regulation
- Engagement Measurement
- Lie Detection
- Smart human-machine interfaces
- Intelligent transportation systems
- Video games
- Immersed Virtual Experiences
Prospective authors are invited to submit papers of up to four pages, plus one additional page for references. Submissions to AMAR 2020 should have no substantial overlap with any other paper submitted to FG2020 or already published. All persons who have made a substantial contribution to the work should be listed as authors, and all listed authors should have made a substantial contribution to the work. Papers presented at AMAR 2020 will appear in the IEEE Xplore digital library. Papers should follow the FG conference format (anonymous). Paper submissions will be handled through EasyChair.
Decision to Authors:
Camera-ready papers due: