Spurious Feature Diversification Improves Out-of-distribution Generalization
- URL: http://arxiv.org/abs/2309.17230v2
- Date: Sun, 14 Jul 2024 08:02:49 GMT
- Title: Spurious Feature Diversification Improves Out-of-distribution Generalization
- Authors: Yong Lin, Lu Tan, Yifan Hao, Honam Wong, Hanze Dong, Weizhong Zhang, Yujiu Yang, Tong Zhang
- Abstract summary: Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning.
We study WiSE-FT, a popular weight space ensemble method that interpolates between a pre-trained and a fine-tuned model.
We observe an unexpected "FalseFalseTrue" phenomenon, in which WiSE-FT successfully corrects many cases where each individual model makes incorrect predictions.
- Score: 43.84284578270031
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning. Ensemble-based methods, like weight space ensembles that interpolate model parameters, have been shown to achieve superior OOD performance. However, the underlying mechanism of their effectiveness remains unclear. In this study, we closely examine WiSE-FT, a popular weight space ensemble method that interpolates between a pre-trained and a fine-tuned model. We observe an unexpected "FalseFalseTrue" phenomenon, in which WiSE-FT successfully corrects many cases where each individual model makes incorrect predictions, and this contributes significantly to its OOD effectiveness. To gain further insight, we conduct a theoretical analysis in a multi-class setting with a large number of spurious features. Our analysis predicts the above phenomenon and further shows that ensemble-based models reduce prediction errors in OOD settings by utilizing a more diverse set of spurious features. Contrary to the conventional wisdom that focuses on learning invariant features for better OOD performance, our findings suggest that incorporating a large number of diverse spurious features weakens their individual contributions, leading to improved overall OOD generalization. Additionally, our findings provide the first explanation for the mysterious phenomenon of weight space ensembles outperforming output space ensembles in OOD settings. Empirically, we demonstrate the effectiveness of utilizing diverse spurious features on a MultiColorMNIST dataset, and our experimental results are consistent with the theoretical analysis. Building on these new theoretical insights into the efficacy of ensemble methods, we further propose a novel averaging method called BAlaNced averaGing (BANG), which significantly enhances the OOD performance of WiSE-FT.
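The two ensembling operations contrasted in the abstract are simple enough to sketch. Below is a minimal, hedged illustration in PyTorch of WiSE-FT-style weight space interpolation between a pre-trained and a fine-tuned checkpoint, alongside an output space ensemble that averages predictions instead; the state-dict representation and the mixing coefficient `alpha` are generic conventions, not the authors' exact code.

```python
import torch

def wise_ft(pretrained_sd, finetuned_sd, alpha=0.5):
    """Weight space ensemble: linearly interpolate every parameter tensor.

    alpha=0 recovers the pre-trained model, alpha=1 the fine-tuned one.
    """
    return {k: (1 - alpha) * pretrained_sd[k] + alpha * finetuned_sd[k]
            for k in pretrained_sd}

def output_space_ensemble(logits_pre, logits_ft):
    """Output space ensemble: average predicted probabilities instead."""
    return 0.5 * (torch.softmax(logits_pre, dim=-1)
                  + torch.softmax(logits_ft, dim=-1))
```

In these terms, a "FalseFalseTrue" case is one where the argmax of each individual model's output is wrong, yet the argmax of the interpolated model's output is correct.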
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps while simultaneously increasing model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes model learning by paying closer attention to training samples with a high difference in explanations.
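The summary does not give the exact metric, but the reweighting idea can be made concrete. A minimal sketch, assuming two explanation heatmaps per sample (hypothetical inputs `expl_a`, `expl_b`) whose disagreement drives the sample weight:

```python
import torch
import torch.nn.functional as F

def explanation_disagreement_weights(expl_a, expl_b, temperature=1.0):
    # expl_a, expl_b: (N, D) flattened heatmaps for N samples from two
    # explanation sources (hypothetical; the paper defines its own
    # explanation-consistency metric).
    disagreement = 1.0 - F.cosine_similarity(expl_a, expl_b, dim=1)
    # Higher disagreement -> higher weight; rescale to mean 1 per batch.
    w = torch.softmax(disagreement / temperature, dim=0)
    return w * len(w)
```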
arXiv Detail & Related papers (2024-08-08T17:20:08Z) - See Further for Parameter Efficient Fine-tuning by Standing on the Shoulders of Decomposition [56.87609859444084]
Parameter-efficient fine-tuning (PEFT) focuses on optimizing a select subset of parameters while keeping the rest fixed, significantly lowering computational and storage overheads.
We take the first step to unify all approaches by dissecting them from a decomposition perspective.
We introduce two novel PEFT methods alongside a simple yet effective framework designed to enhance the performance of PEFT techniques across various applications.
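The summary names no specific method, but LoRA is the canonical decomposition-style PEFT baseline, so a minimal sketch of it may help make "optimizing a select subset of parameters" concrete; this is illustrative, not the paper's proposed method.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base layer plus a trainable low-rank update: y = Wx + s(BA)x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # only A and B are trained
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T
```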
arXiv Detail & Related papers (2024-07-07T15:44:42Z) - Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that simultaneously improves both OOD accuracy and confidence calibration in vision-language models.
We show that both the OOD classification error and the OOD calibration error have a shared upper bound consisting of two terms computed on ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
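The named mechanism, enforcing a larger smallest singular value of (presumably) the batch feature matrix, can be sketched as a penalty term; the exact constrained loss in the paper may differ.

```python
import torch

def min_singular_value_penalty(features, eps=1e-6):
    # features: (N, D) batch of embeddings (assumed target of the constraint).
    # torch.linalg.svdvals returns singular values in descending order, so
    # s[-1] is the smallest; penalizing its inverse pushes it to grow.
    s = torch.linalg.svdvals(features)
    return 1.0 / (s[-1] + eps)
```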
arXiv Detail & Related papers (2023-11-03T05:41:25Z) - Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness [5.976013616522926]
We propose a framework that encourages the model to use a more diverse set of features to make predictions.
We first train a simple model, and then regularize the conditional mutual information with respect to it to obtain the final model.
We demonstrate the effectiveness of this framework in various problem settings and real-world applications.
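To make the regularizer concrete: a plug-in estimate of the conditional mutual information I(f(X); g(X) | Y) between the final model f and the simple model g, computed from discrete predictions, is sketched below. This is a rough evaluation-time proxy; the paper's training-time estimator will differ.

```python
import numpy as np

def plugin_cmi(pred_f, pred_g, y):
    """Plug-in estimate of I(f(X); g(X) | Y) from discrete predictions."""
    pred_f, pred_g, y = map(np.asarray, (pred_f, pred_g, y))
    cmi = 0.0
    for c in np.unique(y):
        mask = y == c
        p_c = mask.mean()
        joint = np.zeros((pred_f.max() + 1, pred_g.max() + 1))
        for a, b in zip(pred_f[mask], pred_g[mask]):
            joint[a, b] += 1.0
        joint /= mask.sum()
        pf = joint.sum(axis=1, keepdims=True)  # marginal of f given Y=c
        pg = joint.sum(axis=0, keepdims=True)  # marginal of g given Y=c
        nz = joint > 0
        cmi += p_c * np.sum(joint[nz] * np.log(joint[nz] / (pf * pg)[nz]))
    return cmi
```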
arXiv Detail & Related papers (2023-10-09T21:19:39Z) - Adaptive Contextual Perception: How to Generalize to New Backgrounds and Ambiguous Objects [75.15563723169234]
We investigate how vision models adaptively use context for out-of-distribution generalization.
We show that models that excel in one setting tend to struggle in the other.
To replicate the generalization abilities of biological vision, computer vision models must have factorized object vs. background representations.
arXiv Detail & Related papers (2023-06-09T15:29:54Z) - Understanding and Improving Feature Learning for Out-of-Distribution Generalization [41.06375309780553]
We propose Feature Augmented Training (FeAT), which pushes the model to learn richer features that are ready for OOD generalization.
FeAT iteratively augments the model to learn new features while retaining the already learned features.
Experiments show that FeAT effectively learns richer features thus boosting the performance of various OOD objectives.
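Only the high-level loop is described here; the skeleton below captures it, freeze previously learned feature blocks, fit a new one, repeat, with `make_block` and `fit_block` left as hypothetical callables rather than FeAT's actual components.

```python
import torch.nn as nn

def feature_augmented_rounds(make_block, fit_block, data, rounds=3):
    """Skeleton of round-based feature learning in the spirit of FeAT.

    make_block() -> nn.Module; fit_block(new, frozen, data) trains the new
    block given the frozen ones (both hypothetical callables).
    """
    frozen = []
    for _ in range(rounds):
        block = make_block()
        fit_block(block, frozen, data)
        for p in block.parameters():  # retain already-learned features
            p.requires_grad = False
        frozen.append(block)
    return frozen
```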
arXiv Detail & Related papers (2023-04-22T05:57:00Z) - Joint Training of Deep Ensembles Fails Due to Learner Collusion [61.557412796012535]
Ensembles of machine learning models have been well established as a powerful method of improving performance over a single model.
Traditionally, ensembling algorithms train their base learners independently or sequentially with the goal of optimizing their joint performance.
Surprisingly, directly minimizing the loss of the ensemble as a whole appears to rarely be applied in practice; we show that this form of joint training fails due to collusion among the base learners.
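The distinction at issue, training base learners on their own losses versus minimizing the loss of the ensemble's combined prediction, reduces to a one-line change, sketched below (averaging logits is a simplification; probabilities could be averaged instead).

```python
import torch
import torch.nn.functional as F

def independent_loss(logits_list, y):
    # Traditional recipe: each base learner minimizes its own loss.
    return sum(F.cross_entropy(z, y) for z in logits_list)

def joint_ensemble_loss(logits_list, y):
    # Direct joint training: minimize the loss of the averaged prediction,
    # the objective the paper shows degrades due to learner collusion.
    return F.cross_entropy(torch.stack(logits_list).mean(dim=0), y)
```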
arXiv Detail & Related papers (2023-01-26T18:58:07Z) - Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors [17.936426699670864]
We show that sample-wise Adversarial Training (AT) has limited improvement on Out-of-Distribution (OOD) generalization.
We propose two AT variants with low-rank structures to train OOD-robust models.
Our proposed approaches outperform Empirical Risk Minimization (ERM) and sample-wise AT.
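The summary only says "low-rank structures"; one simple instantiation is constraining the adversarial perturbation of an input to rank one, as sketched below (hypothetical; the paper's variants may parameterize the structure differently).

```python
import torch

def rank_one_perturbation(u, v, epsilon):
    # Build a rank-1 perturbation delta = u v^T, scaled onto an epsilon ball.
    # u: (H,), v: (W,) trainable direction vectors (hypothetical setup).
    delta = torch.outer(u, v)
    return epsilon * delta / (delta.norm() + 1e-12)
```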
arXiv Detail & Related papers (2022-10-13T07:37:42Z) - Models Out of Line: A Fourier Lens on Distribution Shift Robustness [29.12208822285158]
Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical to the acceptance of deep learning (DL) in real-world applications.
Recently, some promising approaches have been developed to improve OOD robustness.
However, there is still no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.
arXiv Detail & Related papers (2022-07-08T18:05:58Z) - Demarcating Endogenous and Exogenous Opinion Dynamics: An Experimental Design Approach [27.975266406080152]
In this paper, we design a suite of unsupervised classification methods based on experimental design approaches.
We aim to select the subsets of events which minimize different measures of mean estimation error.
Our experiments range from validating prediction performance on unsanitized and sanitized events to checking the effect of selecting optimal subsets of various sizes.
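Selecting a subset that minimizes a mean-estimation error is a classical experimental-design problem; the greedy A-optimal sketch below is one generic way to do it and is not taken from the paper.

```python
import numpy as np

def greedy_a_optimal(X, k, ridge=1e-3):
    """Greedily pick k rows of X minimizing trace((X_S^T X_S + ridge*I)^-1),
    an A-optimal design proxy for mean-estimation error."""
    n, d = X.shape
    chosen = []
    for _ in range(k):
        best_i, best_val = None, np.inf
        for i in range(n):
            if i in chosen:
                continue
            S = X[chosen + [i]]
            val = np.trace(np.linalg.inv(S.T @ S + ridge * np.eye(d)))
            if val < best_val:
                best_i, best_val = i, val
        chosen.append(best_i)
    return chosen
```

For example, `greedy_a_optimal(np.random.randn(100, 5), k=10)` returns the indices of ten rows chosen to keep the design matrix well conditioned.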
arXiv Detail & Related papers (2021-02-11T11:38:15Z)