Unconsciously, humans evaluate situations based on environmental and social parameters when recognizing emotions in social interactions. Without context, even humans may misinterpret observed facial, vocal, or bodily behavior. Contextual information, such as the ongoing task (e.g., human-computer vs. human-robot interaction), the identity (male vs. female) and natural expressiveness of the individual (e.g., introvert vs. extrovert), as well as intra- and inter-personal contexts, helps us better interpret and respond to the environment around us. These considerations suggest that attention to contextual information can deepen our understanding of affect communication (e.g., discrete emotions, affective dimensions such as valence and arousal, and different types of moods and sentiment) and is essential for building reliable real-world affect-sensitive applications.
Building upon the success of the previous CBAR workshops, this 4th edition aims to investigate how to efficiently exploit and model context using cutting-edge computer vision and machine learning approaches in order to advance automatic affect recognition. Specifically, the goal is to explore the following questions:
- Can we leverage contextual information such as the subject's age and gender to improve the performance of affect recognition systems? Is such meta-data needed, or can these 'contextual' variables be estimated automatically and jointly with the target affect?
- How can we successfully exploit domain adaptation methods to achieve personalized affect recognition? (A minimal sketch follows this list.)
- Can we go beyond standard domain adaptation to accomplish context adaptation, so as to improve the interpretation and recognition of human affect across different contexts (e.g., not only subjects but also their cultures, tasks, etc., and do so simultaneously)?
- How can we combine multiple modalities of human affect (e.g., audio, visual, and/or physiological signals) using contextual information to successfully handle:
 - Asynchrony and discordance of different modalities such as face, head/body, voice, and heart rate;
 - Innate priority among the modalities;
 - Temporal variations in the relative importance of the modalities according to the context.
- Affect recognition in relation to social contexts (e.g., home, work, party): How can we integrate the influence of social contexts (e.g., individual or dyadic interactions) into the automatic recognition of human affect?
- Applications:
 - Context-aware clinical applications such as depression severity measurement, pain monitoring, and autism screening (e.g., the influence of age, gender, intimate vs. stranger interaction, physician-patient relationship, home vs. hospital environment);
 - Context-based and affect-aware intelligent tutors (e.g., learning profile and personality assessments).
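To make the personalization question above concrete, here is a minimal Python sketch with synthetic data; the personalize helper is a hypothetical illustration of per-subject feature normalization, a simple baseline often applied before heavier domain-adaptation methods, not a method endorsed by the workshop:

import numpy as np

def personalize(features, subject_ids):
    """Z-normalize each subject's features with that subject's own statistics,
    removing stable inter-personal offsets (e.g., neutral-face appearance)."""
    adapted = np.empty_like(features, dtype=float)
    for sid in np.unique(subject_ids):
        mask = subject_ids == sid
        mu = features[mask].mean(axis=0)
        sigma = features[mask].std(axis=0) + 1e-8  # avoid division by zero
        adapted[mask] = (features[mask] - mu) / sigma
    return adapted

# Example: 100 frames of 64-D appearance features from 5 subjects (synthetic).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
subjects = rng.integers(0, 5, size=100)
X_personalized = personalize(X, subjects)

More elaborate domain-adaptation methods (e.g., subspace alignment) would replace the per-subject standardization step while keeping the same interface.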
There is growing research interest within the computer vision and machine learning community in modeling context in various vision-based domains. While significant advances in this direction have been made for object detection and recognition, there has been little progress in leveraging context to improve computer vision and machine learning algorithms for automatic affect recognition. This is despite a large body of cognitive evidence emphasizing the importance of context for the successful interpretation of human affect. To this end, CVPR provides an ideal environment to gather researchers working in different domains (from low-level image modeling for object detection to high-level modeling of complex spatio-temporal dependencies in human-interaction data) to share their vision of, and propose novel approaches to, modeling context in affect recognition, such as: modeling human affect in group activities, temporal reasoning about affect, different levels of hierarchy (from low-level image descriptors to high-level interpretation of human affect), as well as context-sensitive fusion of multiple modalities (sketched below).
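As one illustration of such context-sensitive fusion, the following minimal Python sketch (synthetic data; context_gated_fusion and all shapes are assumptions for illustration only) re-weights per-modality affect scores frame by frame with a softmax gate computed from context features, so the relative importance of each modality can vary with the context:

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def context_gated_fusion(scores, context, W):
    """scores:  (T, M) per-modality affect scores over T frames, M modalities.
    context: (T, C) context features (e.g., task or scene descriptors).
    W:       (C, M) gating parameters, learned elsewhere (random here)."""
    gates = softmax(context @ W)         # (T, M) modality weights per frame
    return (gates * scores).sum(axis=1)  # (T,) fused affect estimate

# Example: face, voice, and body scores over 50 frames (synthetic).
rng = np.random.default_rng(1)
T, M, C = 50, 3, 8
fused = context_gated_fusion(rng.normal(size=(T, M)),
                             rng.normal(size=(T, C)),
                             rng.normal(size=(C, M)))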
2. Topics of Interest
Topics of interest include, but are by no means limited to:
- Context-sensitive affect recognition from still images or videos
- Audio and/or physiological data modeling for context-sensitive affect recognition
- Affect tagging in images, videos or speech using context
- Modeling human-object interactions for affect recognition
- Modeling scene context for affect recognition
- Modeling social contexts for affect recognition
- Domain adaptation for context-aware affect recognition
- Deep networks for context-aware affect recognition
- Multi-modal context-aware fusion for affect recognition
- Context-based corpora recording and annotation
- Theoretical and empirical analysis of influence of context on affect recognition
- Context-based and affect-aware applications
We also invite both application-driven and theoretical submissions from other related domains focusing on the modeling of context for human behavior analysis, including action recognition and human-robot interaction, among others. Works evaluating existing context-sensitive models from other domains (such as object detection and recognition) are also encouraged.
3. Keynote
Title: Consensus Bayesian Models for Analysis of Distributed Spatio-Temporal Processes: Human Affect, Crowds, and Beyond
Abstract: Statistical methods rely on compact summaries of large amounts of data to address problems that may be difficult to tackle with geometric reasoning or physical modeling alone. A basic premise in those settings is that one central model (however complex it may be) is estimated from a body of data. However, the problems one seeks to address frequently have a distributed nature: networks of cameras (placed, e.g., on mobile phones) may observe an event distributed in space and time. Moreover, such sensors are frequently carried and controlled by human users. The sensors offer a record of events around the user, affected or unaffected by the user herself, her affective state, and the social context. Therefore, we are faced with two critical questions: 1) Is it possible to learn a set of decentralized probabilistic models, each dedicated to one sensor (or a small cluster of sensors), and yet guarantee that those models will agree in their view of the world, making them effectively equivalent to one centralized model? 2) How do we take the affective state of the user into account in those models, and can we efficiently estimate it from sensory (mostly visual) data? In this talk I will answer these two questions by reviewing the work on distributed Bayesian learning for large data and human affect modeling in my group. This will be demonstrated on problems such as distributed 3D structure-from-motion, distributed matrix completion, human emotion and pain intensity modeling, and others.
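To make question 1 above concrete, here is a minimal Python sketch (synthetic data; the chain topology and the consensus helper are illustrative assumptions, not the speaker's algorithm) of the basic consensus-averaging primitive behind such decentralized estimation: each node repeatedly averages its local parameter estimate with those of its neighbors until all nodes agree.

import numpy as np

def consensus(local_params, adjacency, steps=100):
    """local_params: (N, D) one local parameter estimate per sensor node.
    adjacency:    (N, N) symmetric 0/1 connectivity, with self-loops."""
    # Row-normalize so each node takes a convex combination of its neighbors.
    weights = adjacency / adjacency.sum(axis=1, keepdims=True)
    theta = local_params.astype(float)
    for _ in range(steps):
        theta = weights @ theta  # one round of neighbor averaging
    return theta  # rows converge to a common (weighted) average

# Example: 6 nodes on a chain, each holding a 4-D local estimate.
rng = np.random.default_rng(2)
N, D = 6, 4
A = np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
agreed = consensus(rng.normal(size=(N, D)), A)

With doubly stochastic weights (e.g., Metropolis weights) the common limit would be the exact arithmetic mean of the initial estimates, matching a centralized average.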
4. Submission Policy
We call for submissions of high-quality papers. Submitted manuscripts must not be under submission to another conference or workshop. Each paper will receive at least two reviews. Acceptance will be based on relevance to the workshop, novelty, and technical quality.
At least one author of each paper must register for and attend the workshop to present the paper.
We welcome regular, position, and application papers. Papers must be submitted at the following link (EasyChairCBAR2016).
Accepted papers will be included in the Proceedings of IEEE CVPR 2016 & Workshops.
There will be an award for the best CBAR paper.
5. Tentative Deadlines
Submission deadline: April 10, 2016
Notification of acceptance: TBD
Camera ready: May 1, 2016
Workshop: July 1, 2016
6. Organizers
Zakia Hammal, Ph.D.
The Robotics Institute, Carnegie Mellon University (http://www.ri.cmu.edu/)
Pittsburgh, PA, USA
Merlin Teodosia Suarez, Ph.D.
Center for Empathic Human-Computer Interactions
De La Salle University, Manila, Philippines
Ognjen Rudovic, Ph.D.
Intelligent Behaviour Understanding Group
Imperial College London, UK
7. Program Committee
Busso Carlos, UT-Dallas, USA
Bianchi-Berthouze Nadia, University College London, UK
Heylen Dirk, University of Twente, The Netherlands
Hess Ursula, Humboldt University of Berlin, Germany
Martinez Aleix, The Ohio State University, USA
Mahoor Mohammad, University of Denver, USA
Narayanan Shrikanth, University of Southern California, USA
Pavlovic Vladimir, Rutgers University, USA
Rodrigo Ma. Mercedes, Ateneo de Manila University, Philippines
Schuller Bjoern, Imperial College London, UK
Truong Khiet, University of Twente, The Netherlands
Whitehill Jacob, Harvard University, USA
Yin Lijun, Binghamton University, USA

Sponsors: SEWA