Towards Last-layer Retraining for Group Robustness with Fewer Annotations
- URL: http://arxiv.org/abs/2309.08534v3
- Date: Wed, 15 Nov 2023 04:18:39 GMT
- Title: Towards Last-layer Retraining for Group Robustness with Fewer Annotations
- Authors: Tyler LaBonte, Vidya Muthukumar, Abhishek Kumar
- Abstract summary: Empirical risk minimization (ERM) of neural networks is prone to over-reliance on spurious correlations.
The recent deep feature reweighting (DFR) technique achieves state-of-the-art group robustness via simple last-layer retraining.
We show that last-layer retraining can greatly improve worst-group accuracy even when the reweighting dataset has only a small proportion of worst-group data.
- Score: 11.650659637480112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Empirical risk minimization (ERM) of neural networks is prone to
over-reliance on spurious correlations and poor generalization on minority
groups. The recent deep feature reweighting (DFR) technique achieves
state-of-the-art group robustness via simple last-layer retraining, but it
requires held-out group and class annotations to construct a group-balanced
reweighting dataset. In this work, we examine this impractical requirement and
find that last-layer retraining can be surprisingly effective with no group
annotations (other than for model selection) and only a handful of class
annotations. We first show that last-layer retraining can greatly improve
worst-group accuracy even when the reweighting dataset has only a small
proportion of worst-group data. This implies a "free lunch" where holding out a
subset of training data to retrain the last layer can substantially outperform
ERM on the entire dataset with no additional data or annotations. To further
improve group robustness, we introduce a lightweight method called selective
last-layer finetuning (SELF), which constructs the reweighting dataset using
misclassifications or disagreements. Our empirical and theoretical results
present the first evidence that model disagreement upsamples worst-group data,
enabling SELF to nearly match DFR on four well-established benchmarks across
vision and language tasks with no group annotations and less than 3% of the
held-out class annotations. Our code is available at
https://github.com/tmlabonte/last-layer-retraining.
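The mechanics are simple enough to sketch. Below is a minimal PyTorch illustration of disagreement-based last-layer retraining in the spirit of SELF; the model variants, data loader, and hyperparameters are placeholders, not the authors' released implementation (see the repository above for that).

```python
import torch
import torch.nn as nn

def select_disagreements(model_a, model_b, loader, device="cpu"):
    """Keep held-out points where two model variants predict different labels."""
    model_a.eval()
    model_b.eval()
    keep_x, keep_y = [], []
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device)
            mask = (model_a(x).argmax(dim=1) != model_b(x).argmax(dim=1)).cpu()
            keep_x.append(x.cpu()[mask])
            keep_y.append(y[mask])
    return torch.cat(keep_x), torch.cat(keep_y)

def retrain_last_layer(model, last_layer, x, y, epochs=100, lr=1e-3):
    """Freeze the backbone and finetune only the final linear layer."""
    for p in model.parameters():
        p.requires_grad = False
    for p in last_layer.parameters():
        p.requires_grad = True
    opt = torch.optim.SGD(last_layer.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```

Per the abstract, disagreement points tend to upsample worst-group data, which is why retraining only the classifier head on them can approach group-balanced DFR without group annotations.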
Related papers
- Trained Models Tell Us How to Make Them Robust to Spurious Correlation without Group Annotation [3.894771553698554]
Empirical Risk Minimization (ERM) models tend to rely on attributes that have high spurious correlation with the target.
This can degrade performance on underrepresented (or "minority") groups that lack these attributes.
We propose Environment-based Validation and Loss-based Sampling (EVaLS) to enhance robustness to spurious correlation.
arXiv Detail & Related papers (2024-10-07T08:17:44Z)
- Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection [80.85902083005237]
We introduce Data Debiasing with Datamodels (D3M), a debiasing approach which isolates and removes specific training examples that drive the model's failures on minority groups.
arXiv Detail & Related papers (2024-06-24T17:51:01Z)
- Annotation-Free Group Robustness via Loss-Based Resampling [3.355491272942994]
Training neural networks for image classification with empirical risk minimization makes them vulnerable to relying on spurious attributes instead of causal ones for prediction.
We propose a new method, called loss-based feature re-weighting (LFR), in which we infer a grouping of the data by evaluating an ERM-pre-trained model on a small left-out split of the training data.
For a complete assessment, we evaluate LFR on various versions of Waterbirds and CelebA datasets with different spurious correlations.
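As a hedged illustration of the loss-based grouping idea (not the paper's code), one can score a small left-out split with the ERM model, treat high-loss points as a minority-like pseudo-group, and upweight them when retraining; the median cutoff below is an assumed choice.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import WeightedRandomSampler

def loss_based_groups(model, x, y):
    """Split a left-out set into low-loss / high-loss pseudo-groups."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    minority = losses > losses.median()  # high loss: likely spurious-feature failures
    return minority

def reweighted_sampler(minority: torch.Tensor) -> WeightedRandomSampler:
    """Sample minority-like points more often to balance the pseudo-groups."""
    frac = minority.float().mean()
    weights = torch.where(minority, 1.0 / frac, 1.0 / (1.0 - frac))
    return WeightedRandomSampler(weights, num_samples=len(weights))
```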
arXiv Detail & Related papers (2023-12-08T08:22:02Z)
- Ranking & Reweighting Improves Group Distributional Robustness [14.021069321266516]
We propose a ranking-based training method called Discounted Rank Upweighting (DRU) to learn models that exhibit strong OOD performance on the test data.
Results on several synthetic and real-world datasets highlight the superior ability of our group-ranking-based (akin to soft-minimax) approach in selecting and learning models that are robust to group distributional shifts.
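One plausible reading of the rank-then-reweight scheme, sketched under assumptions: groups are sorted by current loss and weighted with a geometric discount so that the worst-off groups dominate the objective, a soft version of the minimax in group DRO. The discount factor and exact weighting are illustrative, not taken from the paper.

```python
import torch

def discounted_rank_weights(group_losses: torch.Tensor, gamma: float = 0.5) -> torch.Tensor:
    """Weight groups by the rank of their loss: the worst group gets the largest weight."""
    order = torch.argsort(group_losses, descending=True)  # worst group first
    ranks = torch.empty_like(order)
    ranks[order] = torch.arange(len(order))
    weights = gamma ** ranks.float()
    return weights / weights.sum()

# Usage: fold per-group losses into a single scalar objective.
group_losses = torch.tensor([0.9, 0.2, 0.5])
objective = (discounted_rank_weights(group_losses) * group_losses).sum()
```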
arXiv Detail & Related papers (2023-05-09T20:37:16Z)
- Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts [122.08782633878788]
Some robust training algorithms (e.g., Group DRO) specialize to group shifts and require group information on all training points.
Other methods (e.g., CVaR DRO) that do not need group annotations can be overly conservative.
We learn a model that maintains high accuracy on simple group functions realized by low-bitrate features.
arXiv Detail & Related papers (2023-02-06T17:07:16Z)
- Outlier-Robust Group Inference via Gradient Space Clustering [50.87474101594732]
Existing methods can improve the worst-group performance, but they require group annotations, which are often expensive and sometimes infeasible to obtain.
We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters.
We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN.
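A minimal sketch of clustering in gradient space as described above; restricting to last-layer gradients and the DBSCAN settings are placeholder assumptions.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

def last_layer_gradients(model, last_layer, x, y):
    """Per-example loss gradients w.r.t. the last layer's weight matrix."""
    grads = []
    for xi, yi in zip(x, y):
        loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
        g = torch.autograd.grad(loss, last_layer.weight)[0]
        grads.append(g.flatten().detach().numpy())
    return np.stack(grads)

# Cluster in gradient space; DBSCAN also marks outliers (label -1).
# grads = last_layer_gradients(model, model.fc, x, y)
# pseudo_groups = DBSCAN(eps=0.5, min_samples=10).fit_predict(grads)
```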
arXiv Detail & Related papers (2022-10-13T06:04:43Z)
- Take One Gram of Neural Features, Get Enhanced Group Robustness [23.541213868620837]
Predictive performance of machine learning models trained with empirical risk minimization can degrade considerably under distribution shifts.
We propose to partition the training dataset into groups based on Gram matrices of features extracted by an "identification" model.
Our approach not only improves group robustness over ERM but also outperforms all recent baselines.
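To make the Gram-matrix idea concrete, here is a hedged sketch in which each example is summarized by the Gram matrix of its feature maps and the summaries are clustered into pseudo-groups; the layer choice, normalization, and use of KMeans are assumptions rather than the paper's pipeline.

```python
import torch
from sklearn.cluster import KMeans

def gram_descriptor(feature_map: torch.Tensor) -> torch.Tensor:
    """Flattened Gram matrix of a (C, H, W) feature map."""
    c, h, w = feature_map.shape
    f = feature_map.reshape(c, h * w)
    return (f @ f.T / (h * w)).flatten()

# Usage sketch: descriptors from an "identification" model's intermediate features.
# descriptors = torch.stack([gram_descriptor(f) for f in feature_maps])
# pseudo_groups = KMeans(n_clusters=2).fit_predict(descriptors.numpy())
```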
arXiv Detail & Related papers (2022-08-26T12:34:55Z)
- Towards Group Robustness in the presence of Partial Group Labels [61.33713547766866]
Spurious correlations between input samples and the target labels can wrongly direct neural network predictions.
We propose an algorithm that optimizes for the worst-off group assignments from a constraint set.
We show improvements in the minority group's performance while preserving overall aggregate accuracy across groups.
arXiv Detail & Related papers (2022-01-10T22:04:48Z)
- Just Train Twice: Improving Group Robustness without Training Group Information [101.84574184298006]
Standard training via empirical risk minimization can produce models that achieve high accuracy on average but low accuracy on certain groups.
Prior approaches that achieve high worst-group accuracy, like group distributionally robust optimization (group DRO) require expensive group annotations for each training point.
We propose a simple two-stage approach, JTT, that first trains a standard ERM model for several epochs, and then trains a second model that upweights the training examples that the first model misclassified.
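The two-stage recipe is easy to sketch: collect the early-stopped ERM model's training errors, then retrain with those examples upweighted by a constant factor. The factor of 20 below is a placeholder hyperparameter.

```python
import torch
import torch.nn.functional as F

def jtt_weights(model, x, y, upweight: float = 20.0) -> torch.Tensor:
    """Stage 1: upweight training examples the early-stopped ERM model missed."""
    model.eval()
    with torch.no_grad():
        wrong = model(x).argmax(dim=1) != y
    return torch.where(wrong, upweight, 1.0)

def weighted_erm_step(model, opt, x, y, weights):
    """Stage 2: one ERM step with per-example weights on the loss."""
    model.train()
    opt.zero_grad()
    losses = F.cross_entropy(model(x), y, reduction="none")
    (weights * losses).mean().backward()
    opt.step()
```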
arXiv Detail & Related papers (2021-07-19T17:52:32Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when the bias lies only in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)