Not Only the Last-Layer Features for Spurious Correlations: All Layer Deep Feature Reweighting
- URL: http://arxiv.org/abs/2409.14637v1
- Date: Mon, 23 Sep 2024 00:31:39 GMT
- Title: Not Only the Last-Layer Features for Spurious Correlations: All Layer Deep Feature Reweighting
- Authors: Humza Wajid Hameed, Geraldin Nanfack, Eugene Belilovsky,
- Abstract summary: A powerful approach to combat spurious correlations is to re-train the last layer on a balanced validation dataset.
Key attributes can sometimes be discarded by neural networks towards the last layer.
In this work, we consider retraining a classifier on a set of features derived from all layers.
- Score: 9.141594510823799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spurious correlations are a major source of errors for machine learning models, in particular when aiming for group-level fairness. It has been recently shown that a powerful approach to combat spurious correlations is to re-train the last layer on a balanced validation dataset, isolating robust features for the predictor. However, key attributes can sometimes be discarded by neural networks towards the last layer. In this work, we thus consider retraining a classifier on a set of features derived from all layers. We utilize a recently proposed feature selection strategy to select unbiased features from all the layers. We observe this approach gives significant improvements in worst-group accuracy on several standard benchmarks.
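Below is a minimal sketch of the idea described in the abstract, not the authors' released code: pooled activations are collected from every layer of a frozen backbone, a subset of features is selected, and a linear classifier is retrained on a group-balanced validation set. The mutual-information selector and the helper names (collect_all_layer_features, retrain_on_all_layers, k) are illustrative stand-ins for the feature-selection strategy the paper actually uses.

```python
# Sketch only: frozen backbone, features from all layers, stand-in selector,
# linear head refit on a balanced validation set.
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

def collect_all_layer_features(backbone, loader, device="cpu"):
    """Run the frozen backbone and concatenate pooled activations of all layers."""
    feats, labels, hooks, buffer = [], [], [], []

    def hook(_, __, output):
        # Global-average-pool spatial maps so every layer yields a flat vector.
        if output.dim() == 4:
            output = output.mean(dim=(2, 3))
        buffer.append(output.flatten(1).detach().cpu())

    for module in backbone.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            hooks.append(module.register_forward_hook(hook))

    backbone.eval()
    with torch.no_grad():
        for x, y in loader:
            buffer.clear()
            backbone(x.to(device))
            feats.append(torch.cat(buffer, dim=1))
            labels.append(y)
    for h in hooks:
        h.remove()
    return torch.cat(feats).numpy(), torch.cat(labels).numpy()

def retrain_on_all_layers(backbone, balanced_val_loader, k=512):
    X, y = collect_all_layer_features(backbone, balanced_val_loader)
    # Keep the k features most informative about the label (stand-in selector).
    scores = mutual_info_classif(X, y)
    keep = scores.argsort()[-k:]
    clf = LogisticRegression(max_iter=2000).fit(X[:, keep], y)
    return keep, clf
```

In this sketch the backbone stays frozen and only the small logistic-regression head sees the balanced data, mirroring the last-layer-retraining recipe but with a much wider feature pool to draw from.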
Related papers
- Deep learning from strongly mixing observations: Sparse-penalized regularization and minimax optimality [0.0]
We consider sparse-penalized regularization for a deep neural network predictor.
We deal with the squared loss and a broad class of other loss functions.
arXiv Detail & Related papers (2024-06-12T15:21:51Z)
- Minimizing Chebyshev Prototype Risk Magically Mitigates the Perils of Overfitting [1.6574413179773757]
We develop multicomponent loss functions that reduce intra-class feature correlation and maximize inter-class feature distance.
We implement the terms of the Chebyshev Prototype Risk (CPR) bound into our Explicit CPR loss function.
Our training algorithm reduces overfitting and improves upon previous approaches in many settings.
arXiv Detail & Related papers (2024-04-10T15:16:04Z)
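As a rough illustration of the two ingredients named in the summary above (lower intra-class feature correlation, larger inter-class feature distance), here is a generic multicomponent loss sketch; it is not the paper's Explicit CPR formulation, and the weights lam_corr and lam_dist are made-up knobs.

```python
# Generic sketch: penalize off-diagonal intra-class feature covariance and
# reward distance between class prototypes (mean feature vectors).
import torch

def prototype_style_loss(features, labels, lam_corr=1.0, lam_dist=1.0):
    losses, prototypes = [], []
    for c in labels.unique():
        fc = features[labels == c]
        prototypes.append(fc.mean(dim=0))
        if fc.size(0) > 1:
            centered = fc - fc.mean(dim=0, keepdim=True)
            cov = centered.t() @ centered / (fc.size(0) - 1)
            off_diag = cov - torch.diag(torch.diag(cov))
            losses.append(lam_corr * off_diag.pow(2).mean())
    protos = torch.stack(prototypes)
    n = protos.size(0)
    if n > 1:
        # Negative mean pairwise prototype distance: minimizing it pushes classes apart.
        losses.append(-lam_dist * torch.cdist(protos, protos).sum() / (n * (n - 1)))
    if not losses:
        return features.sum() * 0.0
    return torch.stack(losses).sum()
```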
- Universal Consistency of Wide and Deep ReLU Neural Networks and Minimax Optimal Convergence Rates for Kolmogorov-Donoho Optimal Function Classes [7.433327915285969]
We prove the universal consistency of wide and deep ReLU neural network classifiers trained on the logistic loss.
We also give sufficient conditions for a class of probability measures for which classifiers based on neural networks achieve minimax optimal rates of convergence.
arXiv Detail & Related papers (2024-01-08T23:54:46Z)
- Hierarchical Simplicity Bias of Neural Networks [0.0]
We introduce a novel method called imbalanced label coupling to explore and extend this simplicity bias across hierarchical levels.
Our approach demonstrates that trained networks sequentially consider features of increasing complexity based on their correlation with labels in the training set.
arXiv Detail & Related papers (2023-11-05T11:27:03Z)
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often reformulated as classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
- Layer-wise Linear Mode Connectivity [52.6945036534469]
Averaging neural network parameters is an intuitive method for fusing the knowledge of two independent models.
It is most prominently used in federated learning.
We analyse the performance of the models that result from averaging single layers, or groups of layers.
arXiv Detail & Related papers (2023-07-13T09:39:10Z)
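A minimal sketch of the layer-wise averaging setup summarized above: interpolate the parameters of two independently trained models, either across all layers or only for a chosen subset. The layer_filter argument and the ResNet example are illustrative assumptions, not the paper's interface.

```python
# Sketch: interpolate parameters of two trained models, optionally per layer.
import copy
import torch

def average_models(model_a, model_b, alpha=0.5, layer_filter=None):
    """Return a copy of model_a whose selected parameters are interpolated with model_b."""
    merged = copy.deepcopy(model_a)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    merged_state = {}
    for name, param_a in state_a.items():
        if param_a.is_floating_point() and (layer_filter is None or layer_filter(name)):
            merged_state[name] = alpha * param_a + (1 - alpha) * state_b[name]
        else:
            # Leave integer buffers (e.g. BatchNorm counters) and filtered-out layers untouched.
            merged_state[name] = param_a
    merged.load_state_dict(merged_state)
    return merged

# Example: average only the last block of a torchvision-style ResNet.
# merged = average_models(resnet_a, resnet_b, alpha=0.5,
#                         layer_filter=lambda n: n.startswith("layer4"))
```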
- Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks [66.76034024335833]
We find that diverse and complex features are indeed learned by the backbone, and that their brittleness arises because the linear classification head relies primarily on the simplest features.
We propose Feature Reconstruction Regularizer (FRR) to ensure that the learned features can be reconstructed back from the logits.
We demonstrate up to 15% gains in OOD accuracy on the recently introduced semi-synthetic datasets with extreme distribution shifts.
arXiv Detail & Related papers (2022-10-04T04:01:15Z)
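A rough sketch of the feature-reconstruction idea summarized above, assuming a simple linear decoder from logits back to penultimate features; the class name, the rec_weight knob, and the use of an MSE penalty are assumptions rather than the authors' exact FRR implementation.

```python
# Sketch: add a reconstruction penalty so the linear head cannot discard
# feature information when producing logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureReconstructionHead(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.decoder = nn.Linear(num_classes, feat_dim)  # logits -> features

    def forward(self, features, labels, rec_weight=0.1):
        logits = self.classifier(features)
        recon = self.decoder(logits)
        cls_loss = F.cross_entropy(logits, labels)
        # Target detached so the penalty shapes the head/decoder, not the backbone, in this sketch.
        rec_loss = F.mse_loss(recon, features.detach())
        return logits, cls_loss + rec_weight * rec_loss
```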
- Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations [51.552870594221865]
We show that last layer retraining can match or outperform state-of-the-art approaches on spurious correlation benchmarks.
We also show that last layer retraining on large ImageNet-trained models can significantly reduce reliance on background and texture information.
arXiv Detail & Related papers (2022-04-06T16:55:41Z)
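A minimal sketch of last-layer retraining as summarized above: freeze the backbone and refit only the final linear head on a small group-balanced held-out set. It assumes a torchvision-style model exposing its head as model.fc and a balanced_loader that already yields equal numbers of samples per (label, group) pair.

```python
# Sketch: freeze everything except the final linear layer and refit it
# on a group-balanced held-out set.
import torch
import torch.nn.functional as F

def retrain_last_layer(model, balanced_loader, epochs=20, lr=1e-3, device="cpu"):
    for p in model.parameters():
        p.requires_grad_(False)
    head = model.fc  # assumes a torchvision-style model with a .fc head
    for p in head.parameters():
        p.requires_grad_(True)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)

    model.eval()  # keep the frozen backbone's BatchNorm statistics fixed
    for _ in range(epochs):
        for x, y in balanced_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model
```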
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training.
Our experimental results show that our method is able to achieve performance similar to a learnable classifier on image classification with balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
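A small sketch of fixing the classifier as a simplex equiangular tight frame (ETF), as summarized above; the construction follows the standard simplex-ETF formula with a random orthonormal rotation, and the class and function names here are illustrative.

```python
# Sketch: build a simplex-ETF weight matrix and keep it fixed during training.
import torch
import torch.nn as nn

def simplex_etf(feat_dim, num_classes):
    assert feat_dim >= num_classes
    # Random orthonormal columns U (feat_dim x num_classes) via QR.
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    eye = torch.eye(num_classes)
    ones = torch.ones(num_classes, num_classes) / num_classes
    scale = (num_classes / (num_classes - 1)) ** 0.5
    return scale * u @ (eye - ones)  # columns are the fixed class vectors

class FixedETFClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        # Registered as a buffer so it is saved with the model but never updated by the optimizer.
        self.register_buffer("weight", simplex_etf(feat_dim, num_classes).t())

    def forward(self, features):
        return features @ self.weight.t()
```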
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The ability of the proposed feature map distortion to produce deep neural networks with higher test performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.