About The Workshop
Novel applications of affective computing have emerged in recent years in domains ranging from health care to 5G mobile networks. Many of these applications achieve improved emotion classification performance by fusing multiple sources of data (e.g., audio, video, brain, face, thermal, physiological, environmental, positional, and text). Multimodal affect recognition has the potential to revolutionize the way various industries and sectors use information about a person's emotional state, particularly given the flexibility in the choice of modalities and measurement tools (e.g., surveillance cameras versus mobile device cameras). Multimodal classification methods have proven highly effective at minimizing misclassification error in practice and under dynamic conditions. Further, multimodal classification models tend to be more stable over time than models relying on a single modality, increasing their reliability in sensitive applications such as mental health monitoring and automobile driver state recognition.

To continue the field's lab-to-practice trend and encourage new applications of affective computing, this workshop provides a forum for researchers to exchange ideas on future directions, including novel fusion methods and databases, innovations through interdisciplinary research, and emerging emotion sensing devices. The workshop also places a focus on the ethical use of novel applications of affective computing in real-world scenarios. More specifically, it welcomes discussion of topics including, but not limited to, privacy, manipulation of users, and public fears and misconceptions regarding affective computing. The affective computing market is expected to grow from $28.6 billion to $140 billion by 2025.
This significant growth will enable new applications of affective computing that include, but are not limited to, health monitoring systems, diagnosis and treatment of disorders such as Autism Spectrum Disorder, and home entertainment (e.g., video games). As these affective systems are developed, many ethical concerns must be considered. This workshop seeks to explore the intersection between theory and ethical applications of affective computing, with a specific focus on multimodal data for affect recognition (e.g., expression and physiological signals).
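To make the fusion idea above concrete, the following is a minimal, hypothetical sketch of decision-level (late) fusion for emotion classification: independent per-modality classifiers each output class probabilities, which are combined by a weighted average. The modality names, class labels, probabilities, and weights are all illustrative assumptions, not taken from any particular system or dataset discussed at the workshop.

```python
# Hypothetical late-fusion sketch. Class order: [neutral, happy, sad, angry].
# Probabilities stand in for the outputs of per-modality classifiers.
modality_probs = {
    "face":          [0.10, 0.60, 0.10, 0.20],
    "audio":         [0.20, 0.40, 0.30, 0.10],
    "physiological": [0.25, 0.35, 0.25, 0.15],
}

# Assumed per-modality reliability weights; in practice these could be learned.
weights = {"face": 0.5, "audio": 0.3, "physiological": 0.2}

def late_fusion(probs, weights):
    """Weighted average of per-modality class probability vectors."""
    total = sum(weights.values())
    n_classes = len(next(iter(probs.values())))
    fused = [0.0] * n_classes
    for modality, p in probs.items():
        for i, v in enumerate(p):
            fused[i] += weights[modality] * v / total
    return fused

CLASSES = ["neutral", "happy", "sad", "angry"]
fused = late_fusion(modality_probs, weights)
prediction = CLASSES[max(range(len(fused)), key=fused.__getitem__)]
print(prediction)  # "happy": the decision after combining all modalities
```

Late fusion is only one option; feature-level (early) fusion and hybrid schemes are equally common in the literature, and choosing among them is exactly the kind of question the workshop invites.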
Dr. Ehsan Hoque is an Associate Professor of Computer Science at the University of Rochester. From January 2018 to June 2019, he was the Interim Director of the Goergen Institute for Data Science. He co-leads the Rochester Human-Computer Interaction (ROC HCI) Group. He received his PhD from the Massachusetts Institute of Technology in 2013.
More details coming soon.
Dr. Mohamed Daoudi is a Full Professor of Computer Science at IMT Lille Douai and head of the Image group at the CRIStAL Laboratory. He received his Ph.D. in Computer Engineering from the University of Lille (France) in 1993. His research interests include computer vision, pattern recognition, face and facial expression recognition, and action recognition. He is an Associate Editor of the Elsevier journal Image and Vision Computing (IVC) and a Co-General Chair of IEEE FG 2019. He is an IAPR Fellow and an IEEE Senior Member.
More details coming soon.
Dr. Michel Valstar is a Professor of Computer Science at the University of Nottingham. His research focuses on automatic visual understanding of human behaviour, which encompasses machine learning, computer vision, and an understanding of how people behave in the world.
More details coming soon.
Tentative Workshop Schedule
| Time        | Session                                  | Presenter          |
|-------------|------------------------------------------|--------------------|
| 9:00–9:10   | Welcome and Opening Remarks              | Organizers         |
| 9:10–9:30   | Keynote 1                                | Dr. Ehsan Hoque    |
| 9:30–10:00  | Workshop Paper Presentations (Session 1) | Authors            |
| 10:00–10:20 | Keynote 2                                | Dr. Mohamed Daoudi |
| 10:20–10:50 | Workshop Paper Presentations (Session 2) | Authors            |
| 10:50–11:10 | Keynote 3                                | Dr. Michel Valstar |
| 11:10–11:20 | Defining Topics for Breakout Groups      | All                |
| 11:20–12:05 | Breakout Group Discussions               | All                |
| 12:05–12:40 | Group Reporting of Breakout Discussions  | All                |
| 12:40–1:00  | Summary and Closing Remarks              | Organizers         |
Call for Papers
To investigate ethical, applied affect recognition, this workshop will leverage multimodal data that includes, but is not limited to, 2D, 3D, thermal, brain, physiological, and mobile sensor signals. The workshop aims to expose current use cases for affective computing, as well as emerging applications that can spark future work. In addition, the workshop has a specific focus on the ethical considerations of such work, including how to mitigate ethical concerns. Accordingly, the workshop will focus on questions including, but not limited to:
- What inter-correlations exist between facial affect (e.g., expression) and other modalities (e.g., EEG)?
- How can multimodal data be leveraged to create real-world applications of affect recognition, such as prediction of stress, real-time ubiquitous emotion recognition, and the impact of mood on ubiquitous subject identification?
- How can we facilitate the collection of multimodal data for applied affect recognition?
- What are the ethical implications of working on such questions?
- How can we mitigate the ethical concerns that such work produces?
- Can we positively address public fears and misconceptions regarding applied affective computing?
Topics of interest include, but are not limited to:
- Health applications with a focus on multimodal affect
- Multimodal affective computing for cybersecurity applications (e.g., biometrics and IoT security)
- Inter-correlations and fusion of ubiquitous multimodal data as they relate to applied emotion recognition (e.g., face and EEG data)
- Leveraging ubiquitous devices to create reliable multimodal applications for emotion recognition
- Applications using in-the-wild vs. lab-controlled data
- Facilitation and collection of multimodal data (e.g., ubiquitous data) for applied emotion recognition
- Engineering applications of multimodal affect (e.g., robotics, social engineering, domain inspired hardware / sensing technologies, etc.)
- Privacy and security
- Institutionalized bias
- Trustworthy applications of affective computing
- Equal access to ethical applications of affective computing (e.g., medical applications inaccessible due to wealth inequality)
Prospective participants are invited to submit papers of up to 4 pages, plus one page for references, following the ACII conference format (anonymous). Submissions to AMAR 2021 should have no substantial overlap with any other paper submitted to ACII 2021 or already published. All persons who have made a substantial contribution to the work should be listed as authors (in the accepted version), and all listed authors should have made a substantial contribution to the work. Papers presented at AMAR 2021 will appear in the IEEE Xplore digital library.
How to Submit:
Paper submissions will be handled through EasyChair; select the "ACII 2021 Workshop - Applied Multimodal Affect Recognition" track. The reviewing process will be double-blind: authors should remove author and institutional identities from the title and header areas of the paper, and there should be no acknowledgments. Authors may leave citations to their previous work unanonymized so that reviewers can verify that all previous research has been taken into account; however, they should cite their own work in the third person (e.g., "[Author] found that…"). At least one author of each accepted paper will be required to attend the workshop to present the work.
Paper submission: June 30, 2021
Decision to Authors: July 14, 2021
Camera-ready papers due: July 28, 2021
Workshop: September 28, 2021