Common-Sense Bias Discovery and Mitigation for Classification Tasks
- URL: http://arxiv.org/abs/2401.13213v2
- Date: Thu, 8 Feb 2024 05:38:54 GMT
- Title: Common-Sense Bias Discovery and Mitigation for Classification Tasks
- Authors: Miao Zhang, Zee Fryer, Ben Colman, Ali Shahriyari, Gaurav Bharaj
- Abstract summary: We propose a framework to extract feature clusters in a dataset based on image descriptions.
The analyzed features and correlations are human-interpretable, so we name the method Common-Sense Bias Discovery (CSBD).
Experiments show that our method discovers novel biases on multiple classification tasks for two benchmark image datasets.
- Score: 16.8259488742528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning model bias can arise from dataset composition: sensitive
features correlated to the learning target disturb the model decision rule and
lead to performance differences across those features. Existing de-biasing work
captures prominent and subtle image features that are traceable in the model
latent space, such as the colors of digits or the backgrounds of animals. However, using
the latent space is not sufficient to understand all dataset feature
correlations. In this work, we propose a framework to extract feature clusters
in a dataset based on image descriptions, allowing us to capture both subtle
and coarse features of the images. Feature co-occurrence patterns are then
formulated and their correlations measured, with a human in the loop for
examination. The analyzed features and correlations are human-interpretable, so
we name the method Common-Sense Bias Discovery (CSBD). Having exposed sensitive
correlations in a dataset, we demonstrate that downstream model bias can be
mitigated by adjusting image sampling weights, without requiring sensitive
group label supervision. Experiments show that our method discovers novel
biases on multiple classification tasks for two benchmark image datasets, and
the intervention outperforms state-of-the-art unsupervised bias mitigation
methods.
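To make the mitigation step concrete: once a sensitive feature is found to be correlated with the target, images can be resampled so that the pair co-occurs no more often than chance. The sketch below illustrates this for a single binary feature; the Pearson measure, the independence-ratio weighting, and all names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def cooccurrence_correlation(feature, target):
    """Pearson correlation between a binary per-image feature indicator
    (e.g. "water background" from the image description) and the label."""
    return float(np.corrcoef(feature, target)[0, 1])

def balancing_weights(feature, target):
    """Per-image sampling weights that push a (feature, target) pair toward
    statistical independence: each cell is weighted by p(f)p(y) / p(f, y)."""
    weights = np.ones(len(target))
    for f in (0, 1):
        for y in (0, 1):
            cell = (feature == f) & (target == y)
            p_joint = cell.mean()
            if p_joint > 0:
                weights[cell] = (feature == f).mean() * (target == y).mean() / p_joint
    return weights / weights.sum()

# Toy check: a feature that tracks the label 80% of the time.
rng = np.random.default_rng(0)
target = rng.integers(0, 2, 1000)
feature = np.where(rng.random(1000) < 0.8, target, 1 - target)
print(cooccurrence_correlation(feature, target))            # strongly positive
idx = rng.choice(1000, size=1000, p=balancing_weights(feature, target))
print(cooccurrence_correlation(feature[idx], target[idx]))  # near zero
```

Because the feature indicators come from image descriptions rather than annotated group labels, reweighting of this kind needs no sensitive-group supervision.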
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Enhancing Intrinsic Features for Debiasing via Investigating Class-Discerning Common Attributes in Bias-Contrastive Pair [36.221761997349795]
Deep neural networks rely on bias attributes that are spuriously correlated with a target class in the presence of dataset bias.
This paper proposes a method that provides the model with explicit spatial guidance that indicates the region of intrinsic features.
Experiments demonstrate that our method achieves state-of-the-art performance on synthetic and real-world datasets with various levels of bias severity.
arXiv Detail & Related papers (2024-04-30T04:13:14Z)
- Debiasing Counterfactuals In the Presence of Spurious Correlations [0.98342301244574]
We introduce the first end-to-end training framework that integrates both (i) popular debiasing classifiers and (ii) counterfactual image generation.
We demonstrate that the debiasing method (i) learns generalizable markers across the population, and (ii) successfully ignores spurious correlations and focuses on the underlying disease pathology.
arXiv Detail & Related papers (2023-08-21T19:01:45Z)
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
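The reweighting step can be pictured as a small optimization over per-example weights that drives every token-label covariance toward zero. The sketch below makes that idea concrete; the softmax parameterization, squared-covariance objective, and plain gradient descent are assumptions, not the paper's actual method.

```python
import numpy as np

def decorrelation_weights(X, y, steps=2000, lr=5.0):
    """Optimize per-example weights (softmax-parameterized, so they stay
    positive and sum to one) to shrink the weighted covariance between every
    token column of X (n examples x vocab size) and the binary label y."""
    n, _ = X.shape
    theta = np.zeros(n)
    for _ in range(steps):
        w = np.exp(theta - theta.max())
        w /= w.sum()
        mx, my = w @ X, w @ y                # weighted token / label means
        cov = (w * y) @ X - mx * my          # one covariance per token
        # d cov_j / d w_i = X_ij (y_i - my) - y_i * mx_j
        grad_w = (X * (y - my)[:, None] - np.outer(y, mx)) @ (2 * cov)
        theta -= lr * w * (grad_w - w @ grad_w)  # chain rule through softmax
    return w

# Toy check: token 0 spuriously tracks the label.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500).astype(float)
X = (rng.random((500, 20)) < 0.3).astype(float)
X[:, 0] = np.where(rng.random(500) < 0.9, y, X[:, 0])
w = decorrelation_weights(X, y)
print(np.abs((w * y) @ X - (w @ X) * (w @ y)).max())  # covariances shrink
```

The paper's surprising finding is that even when such reweighting succeeds on the data itself, models trained on the reweighted data still exhibit the corresponding bias.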
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- DASH: Visual Analytics for Debiasing Image Classification via User-Driven Synthetic Data Augmentation [27.780618650580923]
Image classification models often learn to predict a class based on irrelevant co-occurrences between input features and an output class in training data.
We call the unwanted correlations "data biases," and the visual features causing data biases "bias factors."
It is challenging to identify and mitigate biases automatically without human intervention.
arXiv Detail & Related papers (2022-09-14T00:44:41Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, analogous to gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles [66.15398165275926]
We propose a method that can automatically detect and ignore dataset-specific patterns, which we call dataset biases.
Our method trains a lower capacity model in an ensemble with a higher capacity model.
We show improvement in all settings, including a 10 point gain on the visual question answering dataset.
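A rough sketch of the ensemble idea follows; the architectures, the summed-logit (product-of-experts style) combination, and the training loop are assumptions standing in for the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedCapacityEnsemble(nn.Module):
    """Jointly train a low-capacity model (which soaks up easy,
    dataset-specific shortcuts) and a high-capacity model (which is left
    to learn the real task). Only the high-capacity model is kept at test time."""
    def __init__(self, low: nn.Module, high: nn.Module):
        super().__init__()
        self.low, self.high = low, high

    def forward(self, x):
        # Summed logits approximate a product of experts over the two models.
        return self.low(x) + self.high(x)

# Hypothetical models for flattened 28x28 inputs and 10 classes.
low = nn.Sequential(nn.Flatten(), nn.Linear(784, 8), nn.ReLU(), nn.Linear(8, 10))
high = nn.Sequential(nn.Flatten(), nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
model = MixedCapacityEnsemble(low, high)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y)  # loss on the ensemble, not on `high` alone
    loss.backward()
    opt.step()
    return loss.item()

# At evaluation, drop the biased low-capacity member:
# preds = high(x_test).argmax(dim=1)
```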
arXiv Detail & Related papers (2020-11-07T22:20:03Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
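One way to picture a decorrelation objective of this kind is as a penalty on the cross-covariance between two learned feature groups; the sketch below is a guess at that shape, not PFDL's actual loss.

```python
import torch

def cross_covariance_penalty(f_a: torch.Tensor, f_b: torch.Tensor) -> torch.Tensor:
    """Squared Frobenius norm of the cross-covariance between two feature
    groups of shape (batch, d_a) and (batch, d_b). Adding this to the task
    loss pushes the groups toward (second-order) statistical independence."""
    f_a = f_a - f_a.mean(dim=0, keepdim=True)
    f_b = f_b - f_b.mean(dim=0, keepdim=True)
    cov = f_a.T @ f_b / (f_a.shape[0] - 1)
    return (cov ** 2).sum()

# Hypothetical use in a training loop, with `stable` and `spurious` features
# produced by a feature decomposition network:
# loss = F.cross_entropy(classifier(stable), y) \
#        + 0.1 * cross_covariance_penalty(stable, spurious)
```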
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state of the art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.