Invariant Language Modeling
- URL: http://arxiv.org/abs/2110.08413v1
- Date: Sat, 16 Oct 2021 00:03:19 GMT
- Title: Invariant Language Modeling
- Authors: Maxime Peyrard, Sarvjeet Singh Ghotra, Martin Josifoski, Vidhan
Agarwal, Barun Patra, Dean Carignan, Emre Kiciman, Robert West
- Abstract summary: We propose a framework for learning invariant representations that generalize better across multiple environments.
In particular, we adapt a game-theoretic implementation of IRM (IRM-games) to language models.
We demonstrate the ability of our method to (i) remove structured noise, (ii) ignore specific spurious correlations without affecting global performance, and (iii) achieve better out-of-domain generalization.
- Score: 23.096265183487034
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern pretrained language models are critical components of NLP pipelines.
Yet, they suffer from spurious correlations, poor out-of-domain generalization,
and biases. Inspired by recent progress in causal machine learning, in
particular the invariant risk minimization (IRM) paradigm, we propose invariant
language modeling, a framework for learning invariant representations that
generalize better across multiple environments. In particular, we adapt a
game-theoretic implementation of IRM (IRM-games) to language models, where the
invariance emerges from a specific training schedule in which all the
environments compete to optimize their own environment-specific loss by
updating subsets of the model in a round-robin fashion. In a series of
controlled experiments, we demonstrate the ability of our method to (i) remove
structured noise, (ii) ignore specific spurious correlations without affecting
global performance, and (iii) achieve better out-of-domain generalization.
These benefits come with a negligible computational overhead compared to
standard training, do not require changing the local loss, and can be applied
to any language model architecture. We believe this framework holds promise for
mitigating spurious correlations and biases in language models.
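To make the round-robin schedule concrete, here is a minimal sketch of an IRM-games-style update, assuming a shared featurizer, one linear head per environment, and per-environment batches. Names, sizes, and the frozen featurizer are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of an IRM-games-style round-robin schedule (assumptions:
# shared featurizer, one linear head per environment, per-environment data).
import torch
import torch.nn as nn

n_envs, feat_dim, n_classes = 3, 128, 10
featurizer = nn.Sequential(nn.Linear(300, feat_dim), nn.ReLU())
heads = nn.ModuleList([nn.Linear(feat_dim, n_classes) for _ in range(n_envs)])
optimizers = [torch.optim.Adam(h.parameters(), lr=1e-3) for h in heads]

def ensemble_logits(x):
    # The played predictor is the average of all environment heads.
    z = featurizer(x)
    return torch.stack([h(z) for h in heads]).mean(dim=0)

def round_robin_step(env_batches, step):
    # One environment per step updates its own head on its own loss;
    # the others stay fixed. Invariance emerges from this competition
    # rather than from an explicit penalty term.
    e = step % n_envs
    x, y = env_batches[e]
    loss = nn.functional.cross_entropy(ensemble_logits(x), y)
    optimizers[e].zero_grad()
    loss.backward()
    optimizers[e].step()
    return loss.item()
```

Because every environment plays against the same shared ensemble, a head can only lower its own loss through features that stay predictive in its environment, which is what pushes the ensemble toward an invariant predictor.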
Related papers
- SALAD: Improving Robustness and Generalization through Contrastive Learning with Structure-Aware and LLM-Driven Augmented Data [15.366930934639838]
We propose SALAD, a novel approach to enhance model robustness and generalization.
Our method generates structure-aware and counterfactually augmented data for contrastive learning.
We validate our approach through experiments on three tasks: Sentiment Classification, Sexism Detection, and Natural Language Inference.
arXiv Detail & Related papers (2025-04-16T15:40:10Z)
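The SALAD summary does not spell out its loss, but a standard InfoNCE-style contrastive objective over such pairs would look roughly like this; the encoder outputs, temperature, and pairing scheme are assumptions, and the paper's exact loss may differ.

```python
# Hedged sketch of a contrastive objective over SALAD-style pairs: pull an
# example toward its structure-aware augmentation, push it away from
# counterfactually augmented negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    # anchor, positive: (d,) embeddings; negatives: (k, d) embeddings.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor @ positive) / temperature   # scalar
    neg_sim = (negatives @ anchor) / temperature  # (k,)
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim]).unsqueeze(0)
    # InfoNCE: the positive pair must win against all negatives.
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```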
- EqualizeIR: Mitigating Linguistic Biases in Retrieval Models [14.755831733659699]
Existing information retrieval (IR) models show significant biases based on the linguistic complexity of input queries.
We propose EqualizeIR, a framework to mitigate linguistic biases in IR models.
arXiv Detail & Related papers (2025-03-22T03:24:34Z)
- DBR: Divergence-Based Regularization for Debiasing Natural Language Understanding Models [50.54264918467997]
Pre-trained language models (PLMs) have achieved impressive results on various natural language processing tasks.
Recent research has revealed that these models often rely on superficial features and shortcuts instead of developing a genuine understanding of language.
We propose Divergence Based Regularization (DBR) to mitigate this shortcut learning behavior.
arXiv Detail & Related papers (2025-02-25T16:44:10Z)
- Mitigating Catastrophic Forgetting in Language Transfer via Model Merging [16.845734486667226]
Branch-and-Merge (BaM) is a new adaptation method based on iteratively merging multiple models.
BaM builds on the insight that iterative merging yields lower-magnitude but higher-quality weight changes.
We demonstrate in an empirical study on Bulgarian and German that BaM can significantly reduce forgetting while matching or even improving target domain performance.
arXiv Detail & Related papers (2024-07-11T17:32:40Z)
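A rough sketch of a branch-and-merge loop in the spirit of BaM follows; the uniform weight averaging, data sharding, and `train_branch` callback are simplifying assumptions, and the paper's exact schedule may differ.

```python
# Rough branch-and-merge loop (illustrative, not the paper's exact method).
import copy
import torch

def merge_state_dicts(state_dicts):
    # Uniform averaging (assumes floating-point parameters throughout).
    merged = copy.deepcopy(state_dicts[0])
    for key in merged:
        merged[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return merged

def branch_and_merge(model, train_branch, data_shards, n_iters=3):
    # Each iteration: branch the current model, train each branch on its
    # shard, then merge. Averaging keeps individual weight changes small
    # in magnitude, which is the insight BaM builds on.
    for _ in range(n_iters):
        branches = [train_branch(copy.deepcopy(model), shard)
                    for shard in data_shards]
        model.load_state_dict(merge_state_dicts([b.state_dict() for b in branches]))
    return model
```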
- Effective internal language model training and fusion for factorized transducer model [26.371223360905557]
The internal language model (ILM) of the neural transducer has been widely studied.
We propose a novel ILM training and decoding strategy for factorized transducer models.
arXiv Detail & Related papers (2024-04-02T08:01:05Z)
- A Simple Recipe for Language-guided Domain Generalized Segmentation [45.93202559299953]
Generalization to new domains not seen during training is one of the long-standing challenges in deploying neural networks in real-world applications.
We introduce a simple framework for generalizing semantic segmentation networks by employing language as the source of randomization.
Our recipe comprises three key ingredients: (i) the preservation of the intrinsic CLIP robustness through minimal fine-tuning, (ii) language-driven local style augmentation, and (iii) randomization by locally mixing the source and augmented styles during training.
arXiv Detail & Related papers (2023-11-29T18:59:59Z)
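The third ingredient of that recipe, locally mixing source and augmented styles, can be sketched MixStyle-style by treating per-channel feature statistics as "style"; this reading, and all names here, are assumptions, and the paper's local mixing may differ.

```python
# Hedged sketch of mixing source and augmented feature "styles".
import torch

def mix_styles(feat_src, feat_aug, alpha=0.5, eps=1e-6):
    # feat_*: (B, C, H, W) feature maps from the source and augmented views.
    mu_s = feat_src.mean(dim=(2, 3), keepdim=True)
    sd_s = feat_src.std(dim=(2, 3), keepdim=True)
    mu_a = feat_aug.mean(dim=(2, 3), keepdim=True)
    sd_a = feat_aug.std(dim=(2, 3), keepdim=True)
    lam = torch.rand(feat_src.size(0), 1, 1, 1) * alpha  # per-sample mix ratio
    mu = (1 - lam) * mu_s + lam * mu_a
    sd = (1 - lam) * sd_s + lam * sd_a
    # Re-normalize the source features, then apply the mixed style.
    return (feat_src - mu_s) / (sd_s + eps) * sd + mu
```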
- Rethinking Masked Language Modeling for Chinese Spelling Correction [70.85829000570203]
We study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model.
We find that fine-tuning BERT tends to over-fit the error model while under-fitting the language model, resulting in poor generalization to out-of-distribution error patterns.
We demonstrate that a very simple strategy, randomly masking 20% of the non-error tokens in the input sequence during fine-tuning, is sufficient for learning a much better language model without sacrificing the error model.
arXiv Detail & Related papers (2023-05-28T13:19:12Z)
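That masking trick takes only a few lines to sketch; `error_positions` (the tokens that differ from the corrected target) and `mask_id` are assumed inputs, and the helper name is hypothetical.

```python
# Sketch of the 20% non-error masking trick from the CSC paper.
import random

def mask_non_error_tokens(input_ids, error_positions, mask_id, mask_rate=0.2):
    # Randomly replace non-error tokens with [MASK] so the model must rely
    # on its language model instead of copying the input character.
    error_positions = set(error_positions)
    return [mask_id if i not in error_positions and random.random() < mask_rate
            else tok
            for i, tok in enumerate(input_ids)]
```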
- Learning Optimal Features via Partial Invariance [18.552839725370383]
Invariant Risk Minimization (IRM) is a popular framework that aims to learn robust models from multiple environments.
We show that IRM can over-constrain the predictor; to remedy this, we propose a relaxation via partial invariance.
Experiments conducted both in linear settings and with deep neural networks, on language as well as image tasks, verify our conclusions.
arXiv Detail & Related papers (2023-01-28T02:48:14Z)
- Meta-Causal Feature Learning for Out-of-Distribution Generalization [71.38239243414091]
This paper presents a balanced meta-causal learner (BMCL), which includes a balanced task generation module (BTG) and a meta-causal feature learning module (MCFL).
BMCL effectively identifies the class-invariant visual regions for classification and may serve as a general framework to improve the performance of the state-of-the-art methods.
arXiv Detail & Related papers (2022-08-22T09:07:02Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework named Layerwise Noise Stability Regularization (LNSR).
Specifically, we inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model.
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms including L2-SP, Mixout and SMART.
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
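A minimal sketch of the noise-stability idea described for LNSR follows; the noise scale and the squared-error form of the regularizer are assumptions.

```python
# Perturb a hidden representation with Gaussian noise and penalize how
# much the output moves (illustrative form of the LNSR regularizer).
import torch

def lnsr_penalty(hidden, output_fn, sigma=0.1):
    # output_fn maps a hidden state to model outputs, e.g. the remaining
    # transformer layers plus the classification head.
    clean_out = output_fn(hidden)
    noisy_out = output_fn(hidden + sigma * torch.randn_like(hidden))
    # Small output change under hidden-state noise => stable fine-tuning.
    return (clean_out - noisy_out).pow(2).mean()
```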
- Distributionally Robust Recurrent Decoders with Random Network Distillation [93.10261573696788]
We propose a method based on OOD detection with Random Network Distillation to allow an autoregressive language model to disregard OOD context during inference.
We apply our method to a GRU architecture, demonstrating improvements on multiple language modeling (LM) datasets.
arXiv Detail & Related papers (2021-10-25T19:26:29Z)
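Random Network Distillation itself is simple to sketch: a predictor learns to match a frozen random target on in-distribution data, and a large prediction error flags OOD inputs. Sizes and the mean-squared error here are illustrative; how the decoder then disregards OOD context is specific to the paper.

```python
# RND as an OOD score (illustrative sizes and loss).
import torch
import torch.nn as nn

d_in, d_out = 256, 64
target = nn.Linear(d_in, d_out)
for p in target.parameters():
    p.requires_grad_(False)  # fixed random network, never trained
predictor = nn.Linear(d_in, d_out)
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def rnd_train_step(x):
    # Fit the predictor to the random target on in-distribution contexts.
    loss = (predictor(x) - target(x)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def novelty_score(x):
    # High score => the context looks unlike the training distribution.
    with torch.no_grad():
        return (predictor(x) - target(x)).pow(2).mean(dim=-1)
```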
- Distributionally Robust Multilingual Machine Translation [94.51866646879337]
We propose a new learning objective for multilingual neural machine translation (MNMT) based on distributionally robust optimization.
We show how to practically optimize this objective for large translation corpora using an iterated best response scheme.
Our method consistently outperforms strong baseline methods in terms of average and per-language performance under both many-to-one and one-to-many translation settings.
arXiv Detail & Related papers (2021-09-09T03:48:35Z)
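The iterated best response can be sketched as alternating between model updates on a weighted loss and an adversarial reweighting of languages; the exponentiated reweighting and temperature below are assumptions, and the paper's exact scheme may differ.

```python
# Best-response reweighting step for distributionally robust training.
import numpy as np

def best_response_weights(per_lang_losses, prior, temperature=1.0):
    # The adversary shifts weight toward high-loss languages while staying
    # anchored to the prior data distribution.
    w = prior * np.exp(np.asarray(per_lang_losses) / temperature)
    return w / w.sum()

# Usage: alternate (i) training the MT model on the weighted loss and
# (ii) recomputing weights from current per-language dev losses.
prior = np.ones(4) / 4
weights = best_response_weights([1.2, 0.8, 2.0, 1.0], prior)
```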
- Evaluating the Robustness of Neural Language Models to Input Perturbations [7.064032374579076]
In this study, we design and implement various types of character-level and word-level perturbation methods to simulate noisy input texts.
We investigate the ability of high-performance language models such as BERT, XLNet, RoBERTa, and ELMo in handling different types of input perturbations.
The results suggest that language models are sensitive to input perturbations and their performance can decrease even when small changes are introduced.
arXiv Detail & Related papers (2021-08-27T12:31:17Z)
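Character-level perturbations of the kind this study uses can be sketched in a few lines; the operations and rates here are illustrative, not the paper's exact settings.

```python
# Minimal character-level noise: random deletions and substitutions.
import random
import string

def perturb_chars(text, rate=0.05):
    out = []
    for ch in text:
        r = random.random()
        if r < rate / 2:
            continue  # character deletion
        if r < rate:
            out.append(random.choice(string.ascii_lowercase))  # substitution
            continue
        out.append(ch)
    return "".join(out)

print(perturb_chars("language models are sensitive to noisy input"))
```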
- Invariant Causal Prediction for Block MDPs [106.63346115341862]
Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.
We propose a method of invariant prediction to learn model-irrelevance state abstractions (MISA) that generalize to novel observations in the multi-environment setting.
arXiv Detail & Related papers (2020-03-12T21:03:01Z)