Context De-confounded Emotion Recognition
- URL: http://arxiv.org/abs/2303.11921v2
- Date: Sun, 26 Mar 2023 07:18:05 GMT
- Title: Context De-confounded Emotion Recognition
- Authors: Dingkang Yang, Zhaoyu Chen, Yuzheng Wang, Shunli Wang, Mingcheng Li,
Siao Liu, Xiao Zhao, Shuai Huang, Zhiyan Dong, Peng Zhai, Lihua Zhang
- Abstract summary: Context-Aware Emotion Recognition (CAER) aims to perceive the emotional states of the target person with contextual information.
A long-overlooked issue is that a context bias in existing datasets leads to a significantly unbalanced distribution of emotional states.
This paper provides a causality-based perspective to disentangle the models from the impact of such bias, and formulate the causalities among variables in the CAER task.
- Score: 12.037240778629346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context-Aware Emotion Recognition (CAER) is a crucial and challenging task
that aims to perceive the emotional states of the target person with contextual
information. Recent approaches invariably focus on designing sophisticated
architectures or mechanisms to extract seemingly meaningful representations
from subjects and contexts. However, a long-overlooked issue is that a context
bias in existing datasets leads to a significantly unbalanced distribution of
emotional states among different context scenarios. Concretely, the harmful
bias is a confounder that misleads existing models to learn spurious
correlations based on conventional likelihood estimation, significantly
limiting the models' performance. To tackle the issue, this paper provides a
causality-based perspective to disentangle the models from the impact of such
bias, and formulate the causalities among variables in the CAER task via a
tailored causal graph. Then, we propose a Contextual Causal Intervention Module
(CCIM) based on the backdoor adjustment to de-confound the confounder and
exploit the true causal effect for model training. CCIM is plug-in and
model-agnostic, which improves diverse state-of-the-art approaches by
considerable margins. Extensive experiments on three benchmark datasets
demonstrate the effectiveness of our CCIM and the significance of causal
insight.
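For reference, the backdoor adjustment invoked in the abstract is conventionally written (this is the generic textbook form; the paper's specific instantiation for CAER may parameterize it differently) as

  P(Y \mid do(X)) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z),

where X denotes the input (subject and context features), Y the predicted emotion, and Z the confounder (here, the context prior). Summing over the stratified confounder values blocks the spurious path X <- Z -> Y, so training targets the causal effect of X on Y rather than context-induced correlations.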
Related papers
- DAG-aware Transformer for Causal Effect Estimation [0.8192907805418583]
Causal inference is a critical task across fields such as healthcare, economics, and the social sciences.
In this paper, we present a novel transformer-based method for causal inference that overcomes these challenges.
The core innovation of our model lies in its integration of causal Directed Acyclic Graphs (DAGs) directly into the attention mechanism.
arXiv Detail & Related papers (2024-10-13T23:17:58Z) - Towards Context-Aware Emotion Recognition Debiasing from a Causal Demystification Perspective via De-confounded Training [14.450673163785094]
Context-Aware Emotion Recognition (CAER) provides valuable semantic cues for recognizing the emotions of target persons.
Current approaches invariably focus on designing sophisticated structures to extract perceptually critical representations from contexts.
We present a Contextual Causal Intervention Module (CCIM) to de-confound the confounder.
arXiv Detail & Related papers (2024-07-06T05:29:02Z) - Revisiting Spurious Correlation in Domain Generalization [12.745076668687748]
We build a structural causal model (SCM) to describe the causality within data generation process.
We further conduct a thorough analysis of the mechanisms underlying spurious correlation.
In this regard, we propose to control confounding bias in OOD generalization by introducing a propensity score weighted estimator.
arXiv Detail & Related papers (2024-06-17T13:22:00Z) - Robust Emotion Recognition in Context Debiasing [12.487614699507793]
Context-aware emotion recognition (CAER) has recently boosted the practical applications of affective computing techniques in unconstrained environments.
Despite these advances, the biggest remaining challenge is interference from context bias.
We propose a counterfactual emotion inference (CLEF) framework to address the above issue.
arXiv Detail & Related papers (2024-03-09T17:05:43Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning [98.78136504619539]
Causal Triplet is a causal representation learning benchmark featuring visually more complex scenes.
We show that models built with the knowledge of disentangled or object-centric representations significantly outperform their distributed counterparts.
arXiv Detail & Related papers (2023-01-12T17:43:38Z) - CausalDialogue: Modeling Utterance-level Causality in Conversations [83.03604651485327]
We have compiled and expanded upon a new dataset called CausalDialogue through crowd-sourcing.
This dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure.
We propose a causality-enhanced method called Exponential Average Treatment Effect (ExMATE) to enhance the impact of causality at the utterance level in training neural conversation models.
arXiv Detail & Related papers (2022-12-20T18:31:50Z) - Counterfactual Reasoning for Out-of-distribution Multimodal Sentiment Analysis [56.84237932819403]
This paper aims to estimate and mitigate the adverse effect of the textual modality for strong OOD generalization.
Inspired by this, we devise a model-agnostic counterfactual framework for multimodal sentiment analysis.
arXiv Detail & Related papers (2022-07-24T03:57:40Z) - Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, based on which we deduce a theoretical guarantee that the causality-inspired learning is with reduced sample complexity and better generalization ability.
arXiv Detail & Related papers (2022-02-17T00:38:35Z) - Confounder Identification-free Causal Visual Feature Learning [84.28462256571822]
We propose a novel Confounder Identification-free Causal Visual Feature Learning (CICF) method, which obviates the need for identifying confounders.
CICF models the interventions among different samples based on front-door criterion, and then approximates the global-scope intervening effect upon the instance-level interventions.
We uncover the relation between CICF and the popular meta-learning strategy MAML, and provide an interpretation of why MAML works from the theoretical perspective.
arXiv Detail & Related papers (2021-11-26T10:57:47Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.