Mitigating Shortcut Learning with InterpoLated Learning
- URL: http://arxiv.org/abs/2507.05527v1
- Date: Mon, 07 Jul 2025 22:49:46 GMT
- Title: Mitigating Shortcut Learning with InterpoLated Learning
- Authors: Michalis Korakakis, Andreas Vlachos, Adrian Weller
- Abstract summary: Empirical risk minimization incentivizes models to exploit shortcuts. Existing shortcut mitigation approaches are model-specific, difficult to tune, computationally expensive, and fail to improve learned representations. We propose InterpoLated Learning, which interpolates the representations of majority examples to include features from intra-class minority examples with shortcut-mitigating patterns.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Empirical risk minimization (ERM) incentivizes models to exploit shortcuts, i.e., spurious correlations between input attributes and labels that are prevalent in the majority of the training data but unrelated to the task at hand. This reliance hinders generalization on minority examples, where such correlations do not hold. Existing shortcut mitigation approaches are model-specific, difficult to tune, computationally expensive, and fail to improve learned representations. To address these issues, we propose InterpoLated Learning (InterpoLL) which interpolates the representations of majority examples to include features from intra-class minority examples with shortcut-mitigating patterns. This weakens shortcut influence, enabling models to acquire features predictive across both minority and majority examples. Experimental results on multiple natural language understanding tasks demonstrate that InterpoLL improves minority generalization over both ERM and state-of-the-art shortcut mitigation methods, without compromising accuracy on majority examples. Notably, these gains persist across encoder, encoder-decoder, and decoder-only architectures, demonstrating the method's broad applicability.
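As a rough sketch of the interpolation step described in the abstract, the function below mixes each majority example's hidden representation with that of a randomly chosen same-class minority example. The mixing weight `lam`, the uniform sampling of minority partners, and all names here are our assumptions rather than details from the paper.

```python
import torch

def interpolate_majority(h, labels, is_minority, lam=0.7):
    """Pull majority-example representations toward intra-class minority
    examples (a sketch of the InterpoLL idea; `lam` and the uniform
    minority sampling are our assumptions, not the paper's).

    h:           (B, D) hidden representations from the encoder
    labels:      (B,)   class labels
    is_minority: (B,)   bool mask marking shortcut-free (minority) examples
    """
    h_mixed = h.clone()
    for i in torch.nonzero(~is_minority, as_tuple=False).flatten():
        # candidate minority examples sharing the same label
        same_class = (labels == labels[i]) & is_minority
        idx = torch.nonzero(same_class, as_tuple=False).flatten()
        if len(idx) == 0:
            continue  # no intra-class minority example in this batch
        j = idx[torch.randint(len(idx), (1,)).item()]
        # weaken shortcut features while keeping the label consistent
        h_mixed[i] = lam * h[i] + (1.0 - lam) * h[j]
    return h_mixed
```

The interpolated representations would then be fed to the classification head in place of the originals, so the model is rewarded for features that predict the label on both majority and minority examples.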
Related papers
- DBR: Divergence-Based Regularization for Debiasing Natural Language Understanding Models [50.54264918467997]
Pre-trained language models (PLMs) have achieved impressive results on various natural language processing tasks. Recent research has revealed that these models often rely on superficial features and shortcuts instead of developing a genuine understanding of language. We propose Divergence-Based Regularization (DBR) to mitigate this shortcut learning behavior.
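The summary does not say which divergence DBR regularizes or between which distributions. Purely as an illustration of the general pattern, the sketch below penalizes the KL divergence between the model's predictions on the original input and on a variant with the suspected shortcut feature masked; the masking step and the weight `beta` mentioned in the comment are our assumptions.

```python
import torch.nn.functional as F

def divergence_regularizer(logits_full, logits_masked):
    """KL term between predictions on the original input and on a
    shortcut-masked variant. Illustrative only: DBR's actual divergence
    and masking scheme may differ from this sketch.
    """
    log_p = F.log_softmax(logits_full, dim=-1)
    log_q = F.log_softmax(logits_masked, dim=-1)
    # batchmean KL(p || q); typically added to the task loss as
    # loss = cross_entropy + beta * divergence_regularizer(...)
    return F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
```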
arXiv Detail & Related papers (2025-02-25T16:44:10Z)
- Navigating the Shortcut Maze: A Comprehensive Analysis of Shortcut Learning in Text Classification by Language Models [20.70050968223901]
This study addresses the overlooked impact of subtler, more complex shortcuts, which compromise model reliability beyond the oversimplified shortcuts studied in prior work.
We introduce a comprehensive benchmark that categorizes shortcuts into occurrence, style, and concept.
Our research systematically investigates models' resilience and susceptibility to sophisticated shortcuts.
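For intuition, here are toy sentiment examples of the three shortcut families the benchmark names; the concrete instances are our own illustrations, not items from the benchmark.

```python
# Toy sentiment examples illustrating the three shortcut families;
# the instances below are our own, not drawn from the benchmark.
shortcut_examples = {
    # occurrence: a spurious token co-occurs with one label
    "occurrence": ("The movie was great :) <user_123>", "positive"),
    # style: register or formatting, not content, predicts the label
    "style": ("THE MOVIE WAS GREAT!!!", "positive"),
    # concept: a correlated but task-irrelevant concept (e.g., genre)
    "concept": ("A documentary, so naturally it was great.", "positive"),
}
```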
arXiv Detail & Related papers (2024-09-26T01:17:42Z)
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach outperforms state-of-the-art competitors.
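As an illustration of scoring outliers "by the degree of consistency", the sketch below treats low cross-view agreement as evidence of an outlier. RCPMOD's contrastive losses and its handling of missing (partial) views are not reproduced here, and the cosine-similarity choice is our assumption.

```python
import torch.nn.functional as F

def consistency_outlier_scores(view_a, view_b):
    """Score outliers by cross-view inconsistency (a sketch of the idea
    in the summary, not RCPMOD's full method).

    view_a, view_b: (N, D) embeddings of the same N samples from two views
    """
    sim = F.cosine_similarity(view_a, view_b, dim=-1)  # (N,)
    # low cross-view agreement -> high outlier score
    return 1.0 - sim
```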
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning [53.766434746801366]
Multimodal contrastive learning (MCL) has shown remarkable advances in zero-shot classification by learning from millions of image-caption pairs crawled from the Internet.
Hackers may exploit image-text data for model training without authorization, potentially including personal and privacy-sensitive information.
Recent works propose generating unlearnable examples by adding imperceptible perturbations to training images to build shortcuts for protection.
We propose Multi-step Error Minimization (MEM), a novel optimization process for generating multimodal unlearnable examples.
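MEM's full procedure is multi-step and multimodal; as a rough single-modality sketch of the error-minimizing idea, the loop below steps an image perturbation *down* the loss surface within an L-infinity ball, so the protected images become "too easy" shortcuts for contrastive training. The step size `alpha`, bound `eps`, and the choice to perturb only images are our assumptions.

```python
import torch

def error_minimizing_noise(images, loss_fn, steps=10, alpha=2/255, eps=8/255):
    """Generate error-minimizing perturbations in the spirit of MEM
    (a hedged sketch; the paper's optimization also covers the text side).
    """
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(images + delta)   # e.g., an image-text contrastive loss
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign() # descend: minimize the loss
            delta.clamp_(-eps, eps)      # keep the noise imperceptible
    return (images + delta).detach()
```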
arXiv Detail & Related papers (2024-07-23T09:00:52Z)
- Not Eliminate but Aggregate: Post-Hoc Control over Mixture-of-Experts to Address Shortcut Shifts in Natural Language Understanding [5.4480125359160265]
We propose pessimistically aggregating the predictions of a mixture-of-experts, assuming each expert captures relatively different latent features.
The experimental results demonstrate that our post-hoc control over the experts significantly enhances the model's robustness to the distribution shift in shortcuts.
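One natural reading of "pessimistic aggregation" is a worst-case combination of the experts' predictions; the sketch below takes the per-class minimum probability across experts and renormalizes. The paper's exact aggregation rule may differ.

```python
def pessimistic_aggregate(expert_logits):
    """Pessimistic post-hoc aggregation over a mixture of experts:
    for each class, keep the *worst-case* (minimum) probability any
    expert assigns it. A sketch, not the paper's exact rule.

    expert_logits: (E, B, C) logits from E experts
    """
    probs = expert_logits.softmax(dim=-1)   # (E, B, C)
    worst_case = probs.min(dim=0).values    # (B, C)
    return worst_case / worst_case.sum(dim=-1, keepdim=True)
```

A class only receives high aggregated probability when every expert agrees on it, which is what makes the rule robust when some experts latch onto shifting shortcuts.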
arXiv Detail & Related papers (2024-06-17T20:00:04Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data can come from biases in data acquisition rather than from the task itself.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
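As a loose sketch of the hybrid discriminative-generative objective described above: a task code is trained to predict the label, while a separate nuisance code must, together with it, reconstruct the input. The module names, the two-encoder factorization, and the additive weight `beta` are our assumptions, not the paper's exact objective.

```python
import torch.nn.functional as F

def hybrid_loss(x, y, encoder, nuisance_enc, decoder, classifier, beta=1.0):
    """Hybrid discriminative-generative training in the spirit of the
    nuisance-extended bottleneck (a hedged sketch with assumed modules).
    """
    z = encoder(x)        # task-relevant code
    n = nuisance_enc(x)   # nuisance code (everything else about x)
    ce = F.cross_entropy(classifier(z), y)
    recon = F.mse_loss(decoder(z, n), x)  # both codes needed to rebuild x
    return ce + beta * recon
```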
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Chroma-VAE: Mitigating Shortcut Learning with Generative Classifiers [44.97660597940641]
We show that generative models alone are not sufficient to prevent shortcut learning.
In particular, we propose Chroma-VAE, a two-pronged approach in which a VAE is initially trained to isolate the shortcut in a small latent subspace, and a secondary classifier is then trained on the complementary, shortcut-free subspace.
In addition to demonstrating the efficacy of Chroma-VAE on benchmark and real-world shortcut learning tasks, our work highlights the potential for manipulating the latent space of generative classifiers to isolate or interpret specific correlations.
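A minimal sketch of the latent-subspace split implied by the summary: a small slice of the VAE latent is reserved for the shortcut, and the final classifier sees only the complement. The subspace size and the simple slicing scheme are our assumptions.

```python
def split_latent(z, k_shortcut=2):
    """Chroma-VAE-style latent split (a sketch; sizes are assumptions).

    z: (B, D) VAE latent codes
    """
    z_shortcut, z_core = z[:, :k_shortcut], z[:, k_shortcut:]
    # stage 1 classifies from z_shortcut to trap the shortcut there;
    # stage 2 trains the final classifier on z_core only
    return z_shortcut, z_core
```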
arXiv Detail & Related papers (2022-11-28T11:27:50Z)
- An Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models [13.891423075375512]
Recent work has shown that pre-trained language models such as BERT improve robustness to spurious correlations in the dataset.
We find that the key to their success is generalization from a small number of counterexamples where the spurious correlations do not hold.
Our results highlight the importance of data diversity for overcoming spurious correlations.
arXiv Detail & Related papers (2020-07-14T02:34:59Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
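As a hedged sketch of "pressuring each model to solve the task in a distinct way": each model fits the labels while a penalty discourages pairwise agreement between their predictive distributions. The agreement measure and the weight `gamma` are our assumptions, not the paper's objective.

```python
import torch.nn.functional as F

def diverse_ensemble_loss(logits_list, y, gamma=0.1):
    """Task loss plus a pairwise agreement penalty across models
    (a sketch of a diversity objective, not the paper's exact one).
    """
    task = sum(F.cross_entropy(l, y) for l in logits_list)
    probs = [l.softmax(dim=-1) for l in logits_list]
    agree = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            # inner product of predictive distributions, averaged over batch
            agree = agree + (probs[i] * probs[j]).sum(dim=-1).mean()
    return task + gamma * agree
```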
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.