General Debiasing for Multimodal Sentiment Analysis
- URL: http://arxiv.org/abs/2307.10511v2
- Date: Mon, 7 Aug 2023 09:08:23 GMT
- Title: General Debiasing for Multimodal Sentiment Analysis
- Authors: Teng Sun, Juntong Ni, Wenjie Wang, Liqiang Jing, Yinwei Wei, and Liqiang Nie
- Abstract summary: We propose a general debiasing MSA task, which aims to enhance the Out-Of-Distribution (OOD) generalization ability of MSA models.
We employ IPW to reduce the effects of large-biased samples, facilitating robust feature learning for sentiment prediction.
The empirical results demonstrate the superior generalization ability of our proposed framework.
- Score: 47.05329012210878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing work on Multimodal Sentiment Analysis (MSA) utilizes multimodal
information for prediction yet unavoidably suffers from fitting the spurious
correlations between multimodal features and sentiment labels. For example, if
most videos with a blue background have positive labels in a dataset, the model
will rely on such correlations for prediction, while "blue background" is not a
sentiment-related feature. To address this problem, we define a general
debiasing MSA task, which aims to enhance the Out-Of-Distribution (OOD)
generalization ability of MSA models by reducing their reliance on spurious
correlations. To this end, we propose a general debiasing framework based on
Inverse Probability Weighting (IPW), which adaptively assigns small weights to
the samples with larger bias (i.e., those with more severe spurious correlations). The key
to this debiasing framework is to estimate the bias of each sample, which is
achieved by two steps: 1) disentangling the robust features and biased features
in each modality, and 2) utilizing the biased features to estimate the bias.
Finally, we employ IPW to reduce the effects of large-biased samples,
facilitating robust feature learning for sentiment prediction. To examine the
model's generalization ability, we keep the original testing sets on two
benchmarks and additionally construct multiple unimodal and multimodal OOD
testing sets. The empirical results demonstrate the superior generalization
ability of our proposed framework. We have released the code and data to
facilitate reproduction: https://github.com/Teng-Sun/GEAR.
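The weighting scheme described in the abstract can be sketched as follows. This is a minimal illustration of the IPW idea only: the loss form, the normalization, and the `bias_scores` input (per-sample bias estimates assumed to come from the disentangled biased features) are assumptions, not the paper's exact formulation.

```python
import math

def ipw_weighted_loss(probs, labels, bias_scores, eps=1e-6):
    """Sketch of IPW-style debiasing: samples with larger estimated
    bias receive smaller training weights.

    probs:       per-sample predicted class-probability lists
    labels:      per-sample true class indices
    bias_scores: per-sample bias estimates in (0, 1]; higher = more biased
    """
    # Inverse-probability weights: large bias -> small weight.
    raw = [1.0 / (b + eps) for b in bias_scores]
    total = sum(raw)
    weights = [w / total for w in raw]  # normalize to sum to 1
    # Per-sample negative log-likelihood of the true class.
    nll = [-math.log(p[y] + eps) for p, y in zip(probs, labels)]
    return sum(w * l for w, l in zip(weights, nll))
```

With uniform bias scores this reduces to the ordinary mean loss; as one sample's bias score grows, its contribution to the total loss shrinks.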
Related papers
- Revisiting the Dataset Bias Problem from a Statistical Perspective [72.94990819287551]
We study the "dataset bias" problem from a statistical standpoint.
We identify the main cause of the problem as the strong correlation between a class attribute u and a non-class attribute b.
We propose to mitigate dataset bias via either weighting the objective of each sample n by 1/p(u_n|b_n) or sampling that sample with a weight proportional to 1/p(u_n|b_n).
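The 1/p(u_n|b_n) weights above can be estimated from co-occurrence counts; the plug-in count-based estimator below is a toy illustration, not necessarily the estimator used in the paper.

```python
from collections import Counter

def inverse_conditional_weights(class_attrs, bias_attrs):
    """Weight each sample n by 1 / p(u_n | b_n), with p estimated
    from empirical counts over the dataset.

    class_attrs: class attributes u_n (e.g., sentiment labels)
    bias_attrs:  non-class attributes b_n (e.g., background color)
    """
    joint = Counter(zip(bias_attrs, class_attrs))   # count(b, u)
    marginal = Counter(bias_attrs)                  # count(b)
    # p(u | b) = count(b, u) / count(b), so 1/p(u|b) = count(b) / count(b, u)
    return [marginal[b] / joint[(b, u)]
            for u, b in zip(class_attrs, bias_attrs)]
```

Majority (bias-aligned) combinations get weights near 1, while rare (bias-conflicting) combinations are up-weighted.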
arXiv Detail & Related papers (2024-02-05T22:58:06Z)
- Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We use these features as confounder representations and use them via methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- IBADR: an Iterative Bias-Aware Dataset Refinement Framework for Debiasing NLU models [52.03761198830643]
We propose IBADR, an Iterative Bias-Aware dataset Refinement framework.
We first train a shallow model to quantify the bias degree of samples in the pool.
Then, we pair each sample with a bias indicator representing its bias degree, and use these extended samples to train a sample generator.
In this way, the generator can effectively learn the correspondence between bias indicators and samples.
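The pairing step described above can be sketched as follows; the binning scheme and the `<bias_k>` token format are hypothetical choices for illustration, not IBADR's actual indicator design.

```python
def pair_with_bias_indicators(samples, bias_scores, num_bins=3):
    """Tag each sample with a coarse bias-degree indicator token so a
    conditional generator can learn to produce samples of a requested
    bias level. (Binning into num_bins levels is an assumption.)
    """
    lo, hi = min(bias_scores), max(bias_scores)
    span = (hi - lo) or 1.0  # avoid division by zero for constant scores
    paired = []
    for text, score in zip(samples, bias_scores):
        # Map the score into one of num_bins discrete bias levels.
        bin_id = min(int((score - lo) / span * num_bins), num_bins - 1)
        paired.append((f"<bias_{bin_id}> {text}", score))
    return paired
```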
arXiv Detail & Related papers (2023-11-01T04:50:38Z)
- SMoA: Sparse Mixture of Adapters to Mitigate Multiple Dataset Biases [27.56143777363971]
We propose a new debiasing method Sparse Mixture-of-Adapters (SMoA), which can mitigate multiple dataset biases effectively and efficiently.
Experiments on Natural Language Inference and Paraphrase Identification tasks demonstrate that SMoA outperforms full-finetuning, adapter tuning baselines, and prior strong debiasing methods.
arXiv Detail & Related papers (2023-02-28T08:47:20Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Learning Debiased Representation via Disentangled Feature Augmentation [19.348340314001756]
This paper presents an empirical analysis revealing that training with "diverse" bias-conflicting samples is crucial for debiasing.
We propose a novel feature-level data augmentation technique in order to synthesize diverse bias-conflicting samples.
arXiv Detail & Related papers (2021-07-03T08:03:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.