Discover and Mitigate Multiple Biased Subgroups in Image Classifiers
- URL: http://arxiv.org/abs/2403.12777v2
- Date: Wed, 20 Mar 2024 19:18:27 GMT
- Title: Discover and Mitigate Multiple Biased Subgroups in Image Classifiers
- Authors: Zeliang Zhang, Mingqian Feng, Zhiheng Li, Chenliang Xu
- Abstract summary: Machine learning models can perform well on in-distribution data but often fail on biased subgroups that are underrepresented in the training data.
We propose Decomposition, Interpretation, and Mitigation (DIM) to address this problem.
Our approach decomposes the image features into multiple components that represent multiple subgroups.
- Score: 45.96784278814168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models can perform well on in-distribution data but often fail on biased subgroups that are underrepresented in the training data, which limits their reliability in real-world applications. Such subgroups are typically unknown because subgroup labels are absent. Discovering biased subgroups is the key to understanding a model's failure modes and to further improving its robustness. Most previous work on subgroup discovery implicitly assumes that models underperform on only a single biased subgroup, which does not hold for in-the-wild data where multiple biased subgroups exist. In this work, we propose Decomposition, Interpretation, and Mitigation (DIM), a novel method that addresses the more challenging but also more practical problem of discovering multiple biased subgroups in image classifiers. Our approach decomposes the image features into multiple components that represent multiple subgroups. This decomposition is achieved via a bilinear dimension reduction method, Partial Least Squares (PLS), guided by supervision from the image classifier. We further interpret the semantic meaning of each subgroup component by generating natural language descriptions with vision-language foundation models. Finally, DIM mitigates multiple biased subgroups simultaneously via two complementary strategies, one data-centric and one model-centric. Extensive experiments on the CIFAR-100 and Breeds datasets demonstrate the effectiveness of DIM in discovering and mitigating multiple biased subgroups. Furthermore, DIM uncovers the failure modes of the classifier on Hard ImageNet, showcasing its broader applicability to understanding model bias in image classifiers. The code is available at https://github.com/ZhangAIPI/DIM.
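For concreteness, below is a minimal sketch of the decomposition step, assuming the image features come from a frozen encoder and the PLS targets are the classifier's logits; the exact supervision signal and post-processing used by DIM are specified in the paper, and all data here are random placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 512))  # frozen-encoder image features (placeholder)
logits = rng.normal(size=(1000, 10))     # classifier outputs guiding the PLS (placeholder)

n_subgroups = 5  # hypothetical number of subgroup components to extract
pls = PLSRegression(n_components=n_subgroups).fit(features, logits)

# Each column of x_weights_ is a direction in feature space tied to the
# classifier's behavior; transform() scores every image on each component.
scores = pls.transform(features)  # shape (1000, n_subgroups)

# Images scoring highest on a component are candidate members of one
# (still unlabeled) subgroup, to be named in the interpretation step
# using a vision-language model.
top_members = np.argsort(-scores[:, 0])[:25]
```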
Related papers
- Data Debiasing with Datamodels (D3M): Improving Subgroup Robustness via Data Selection [80.85902083005237]
We introduce Data Debiasing with Datamodels (D3M), a debiasing approach which isolates and removes specific training examples that drive the model's failures on minority groups.
arXiv Detail & Related papers (2024-06-24T17:51:01Z)
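A hedged sketch of the D3M-style selection step, assuming per-example attribution scores are already available (the paper estimates them with datamodels; the matrix below is a random placeholder): rank training points by their aggregate harm to minority-group examples and drop the worst offenders before retraining.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_minority = 5000, 200
# attribution[i, j] ~ how much training example j raises the loss on
# held-out minority-group example i (placeholder values).
attribution = rng.normal(size=(n_minority, n_train))

harm = attribution.mean(axis=0)      # aggregate harm per training example
k = 100                              # removal budget (a tunable choice)
keep = np.ones(n_train, dtype=bool)
keep[np.argsort(-harm)[:k]] = False  # drop the k most harmful examples
# Retrain on the debiased subset selected by `keep`.
```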
- Identification of Systematic Errors of Image Classifiers on Rare Subgroups [12.064692111429494]
Systematic errors can impact both fairness for demographic minority groups and robustness and safety under domain shift.
We leverage recent advances in text-to-image models and search in the space of textual descriptions of subgroups ("prompts") for subgroups where the target model has low performance.
We study subgroup coverage and identifiability with PromptAttack in a controlled setting and find that it identifies systematic errors with high accuracy.
arXiv Detail & Related papers (2023-03-09T07:08:25Z)
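The search loop might look like the sketch below, where `generate_images` (a text-to-image pipeline) and `target_accuracy` (an evaluation loop for the classifier under test) are placeholder stubs, and the exhaustive scan over attribute prompts stands in for PromptAttack's actual search strategy.

```python
def generate_images(prompt: str, n: int):
    raise NotImplementedError  # e.g., call a diffusion-model pipeline

def target_accuracy(images, class_label: str) -> float:
    raise NotImplementedError  # accuracy of the model under test on `images`

def find_weak_subgroups(class_label, attributes, n_images=50, k=5):
    # Describe candidate subgroups as prompts, then rank them by how
    # poorly the target model performs on images generated from them.
    prompts = [f"a photo of a {a} {class_label}" for a in attributes]
    scored = [(target_accuracy(generate_images(p, n_images), class_label), p)
              for p in prompts]
    return sorted(scored)[:k]  # the k lowest-accuracy (weakest) subgroups
```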
- Mutual Exclusive Modulator for Long-Tailed Recognition [12.706961256329572]
Long-tailed recognition is the task of learning high-performance classifiers from training data that is extremely imbalanced across categories.
We introduce a mutual exclusive modulator that estimates the probability of an image belonging to each group.
Our method achieves performance competitive with the state of the art on standard benchmarks.
arXiv Detail & Related papers (2023-02-19T07:31:49Z)
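One plausible reading of the modulator, sketched below under the assumption that it is a softmax head over K groups gating K group-specific classifiers (the softmax makes the soft group assignment mutually exclusive); the actual architecture in the paper may differ.

```python
import torch
import torch.nn as nn

class MutualExclusiveModulator(nn.Module):
    def __init__(self, feat_dim: int, n_groups: int, n_classes: int):
        super().__init__()
        self.group_head = nn.Linear(feat_dim, n_groups)
        # One classifier per group, e.g., head/medium/tail of the label distribution.
        self.classifiers = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes) for _ in range(n_groups)]
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Softmax makes the group assignments compete (mutually exclusive).
        group_prob = torch.softmax(self.group_head(feats), dim=-1)     # (B, K)
        logits = torch.stack([c(feats) for c in self.classifiers], 1)  # (B, K, C)
        return (group_prob.unsqueeze(-1) * logits).sum(dim=1)          # (B, C)

model = MutualExclusiveModulator(feat_dim=512, n_groups=3, n_classes=100)
out = model(torch.randn(4, 512))  # (4, 100) class logits
```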
- Outlier-Robust Group Inference via Gradient Space Clustering [50.87474101594732]
Existing methods can improve the worst-group performance, but they require group annotations, which are often expensive and sometimes infeasible to obtain.
We address the problem of learning group annotations in the presence of outliers by clustering the data in the space of gradients of the model parameters.
We show that data in the gradient space has a simpler structure while preserving information about minority groups and outliers, making it suitable for standard clustering methods like DBSCAN.
arXiv Detail & Related papers (2022-10-13T06:04:43Z)
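A minimal sketch: represent each example by the gradient of its loss with respect to the last-layer weights (restricting to the last layer is an assumption that keeps the vectors small, not necessarily the paper's exact choice), then cluster with DBSCAN, which labels outliers -1.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN

def last_layer_gradients(model, last_layer, xs, ys):
    # One gradient vector per example, w.r.t. the final linear layer's weights.
    grads = []
    for x, y in zip(xs, ys):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        g = torch.autograd.grad(loss, last_layer.weight)[0]
        grads.append(g.flatten())
    return torch.stack(grads).numpy()

# grads = last_layer_gradients(model, model.fc, xs, ys)
# labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(grads)
# `labels` are the inferred group annotations; -1 marks outliers.
```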
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias in the input distribution (i.e., covariate shift), models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
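For context, a minimal sketch of the group DRO objective being critiqued, which minimizes the worst per-group average loss; the paper's point is that this helps only when the annotated groups actually align with the spurious features.

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, group_ids):
    # Per-example losses, averaged within each annotated group.
    losses = F.cross_entropy(logits, labels, reduction="none")
    per_group = [losses[group_ids == g].mean() for g in torch.unique(group_ids)]
    # Optimize the hardest group rather than the overall average.
    return torch.stack(per_group).max()

logits = torch.randn(8, 5, requires_grad=True)
labels = torch.randint(0, 5, (8,))
groups = torch.tensor([0, 0, 1, 1, 1, 2, 2, 2])
worst_group_loss(logits, labels, groups).backward()
```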
- Robust subgroup discovery [0.2578242050187029]
We formalize the problem of optimal robust subgroup discovery using the Minimum Description Length principle.
We propose RSD, a greedy heuristic that finds good subgroup lists and guarantees that the most significant subgroup is added in each iteration.
We empirically show on 54 datasets that RSD outperforms previous subgroup set discovery methods in terms of quality and subgroup list size.
arXiv Detail & Related papers (2021-03-25T09:04:13Z)
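A hedged sketch of the greedy loop: at each iteration add the candidate subgroup with the largest compression gain, stopping when nothing compresses further; `mdl_gain` is a placeholder for RSD's actual MDL score and significance test.

```python
def mdl_gain(subgroup, data, current_list) -> float:
    raise NotImplementedError  # compression gain of appending `subgroup`

def greedy_subgroup_list(candidates, data):
    selected = []
    while candidates:
        # The candidate that most compresses the data given the list so far.
        best = max(candidates, key=lambda s: mdl_gain(s, data, selected))
        if mdl_gain(best, data, selected) <= 0:
            break  # no candidate helps: the subgroup list is complete
        selected.append(best)
        candidates = [s for s in candidates if s is not best]
    return selected
```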
- LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
arXiv Detail & Related papers (2020-10-06T16:42:51Z)
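A minimal sketch in the spirit of LOGAN, with KMeans as a stand-in for its clustering step (LOGAN's own objective additionally encourages clusters with large local bias): cluster examples in embedding space, then compare each cluster's accuracy gap between two demographic groups.

```python
import numpy as np
from sklearn.cluster import KMeans

def local_bias_report(embeddings, correct, group, n_clusters=10):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    report = []
    for c in range(n_clusters):
        m = labels == c
        # Local accuracy gap between groups 0 and 1 inside this cluster
        # (small clusters missing a group would need a size filter).
        gap = correct[m & (group == 0)].mean() - correct[m & (group == 1)].mean()
        report.append((c, int(m.sum()), gap))
    # Clusters with the largest |gap| localize where the model is biased.
    return sorted(report, key=lambda r: -abs(r[2]))
```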
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation [50.35010342284508]
We introduce model patching, a framework for improving the robustness of machine learning models.
Model patching encourages the model to be invariant to subgroup differences and to focus on class information shared across subgroups.
We instantiate model patching with CAMEL, which (1) uses a CycleGAN to learn the intra-class, inter-subgroup augmentations, and (2) balances subgroup performance using a theoretically-motivated consistency regularizer.
We demonstrate CAMEL's effectiveness on 3 benchmark datasets, with reductions in robust error up to 33% relative to the best baseline.
arXiv Detail & Related papers (2020-08-15T20:01:23Z)
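A hedged sketch of the consistency idea: predictions on an image and on its CycleGAN translation into another subgroup of the same class should agree. The symmetric KL-to-the-average form below is one common instantiation, not necessarily the paper's exact regularizer, and `translate` is a placeholder for the trained CycleGAN.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x, x_translated):
    p = F.softmax(model(x), dim=-1)             # prediction on the original
    q = F.softmax(model(x_translated), dim=-1)  # prediction on the translated view
    m = 0.5 * (p + q)                           # average over the two subgroup views
    # Symmetric KL to the average: near zero only when the views agree.
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

# Sketch of the training objective:
# total = task_loss + lam * consistency_loss(model, x, translate(x))
```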
- Discovering outstanding subgroup lists for numeric targets using MDL [0.34410212782758054]
We propose an algorithm for subgroup set discovery that is based on the minimum description length (MDL) principle and subgroup lists.
We show that our formalization coincides with an existing quality measure when finding a single subgroup.
We next propose SSD++, an algorithm for which we empirically demonstrate that it returns outstanding subgroup lists.
arXiv Detail & Related papers (2020-06-16T14:29:52Z)
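A sketch of the MDL intuition for numeric targets: a subgroup is worth reporting if encoding its targets with its own Gaussian, and the rest of the data with another, compresses the data more than a single dataset-wide Gaussian; the exact code-length terms (including model cost) used by SSD++ are given in the paper.

```python
import numpy as np

def gaussian_code_length(y):
    # Bits to encode y under a Gaussian fit to y (MLE variance).
    var = y.var() + 1e-12
    return 0.5 * len(y) * (np.log2(2 * np.pi * var) + 1 / np.log(2))

def compression_gain(mask, y):
    # Encode subgroup and complement separately vs. one global model.
    baseline = gaussian_code_length(y)
    refined = gaussian_code_length(y[mask]) + gaussian_code_length(y[~mask])
    return baseline - refined  # positive: the subgroup stands out

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 900), rng.normal(5, 1, 100)])
mask = np.arange(y.size) >= 900       # the planted deviating subgroup
print(compression_gain(mask, y) > 0)  # True: reporting it pays off
```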
This list is automatically generated from the titles and abstracts of the papers on this site.