ConceptDrift: Uncovering Biases through the Lens of Foundation Models
- URL: http://arxiv.org/abs/2410.18970v2
- Date: Thu, 21 Nov 2024 18:59:45 GMT
- Title: ConceptDrift: Uncovering Biases through the Lens of Foundation Models
- Authors: Cristian Daniel Păduraru, Antonio Bărbălau, Radu Filipescu, Andrei Liviu Nicolicioiu, Elena Burceanu
- Abstract summary: We show that ConceptDrift can be scaled to automatically identify biases in datasets without human prior knowledge.
We propose two bias identification evaluation protocols to fill the gap in previous work and show that our method significantly improves over SoTA methods.
Our method is not bound to a single modality, and we empirically validate it on both image (Waterbirds, CelebA, ImageNet) and text (CivilComments) datasets.
- Score: 5.025665239455297
- Abstract: An important goal of ML research is to identify and mitigate unwanted biases intrinsic to datasets and already incorporated into pre-trained models. Previous approaches have identified biases using highly curated validation subsets that require human knowledge to create in the first place. This limits the ability to automate the discovery of unknown biases in new datasets. We address this by using interpretable vision-language models combined with a filtration method based on LLMs and known concept hierarchies. More precisely, for a given dataset, we start from pre-trained CLIP models, which associate an embedding with each class, and track how each class embedding drifts through learning towards embeddings that disclose hidden biases. We call this approach ConceptDrift and show that it can be scaled to automatically identify biases in datasets like ImageNet without human prior knowledge. We propose two bias identification evaluation protocols to fill the gap in previous work and show that our method significantly improves over SoTA methods, under both our protocols and classical evaluations. Alongside validating the identified biases, we also show that they can be leveraged to improve the performance of different methods. Our method is not bound to a single modality, and we empirically validate it on both image (Waterbirds, CelebA, ImageNet) and text (CivilComments) datasets.
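The class-embedding drift idea from the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' released implementation: the open_clip backbone, prompt template, optimizer, hyperparameters, and the hand-written candidate concept pool (which the paper instead obtains and filters with LLMs and concept hierarchies) are all choices made here for brevity.

```python
import torch
import torch.nn.functional as F
import open_clip  # assumption: any CLIP implementation with text/image encoders would do

# Frozen CLIP backbone; ViT-B-32 / "openai" weights are an arbitrary choice here.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

class_names = ["landbird", "waterbird"]  # e.g. Waterbirds
# Hypothetical candidate concept pool; the paper filters such pools with LLMs / concept hierarchies.
candidate_concepts = ["forest", "bamboo", "lake", "ocean", "beach", "sky"]

@torch.no_grad()
def text_embed(words):
    tokens = tokenizer([f"a photo of a {w}" for w in words])
    return F.normalize(model.encode_text(tokens), dim=-1)

# 1) Initialise one weight vector per class with its CLIP text embedding.
init_weights = text_embed(class_names)                    # (num_classes, d)
class_weights = torch.nn.Parameter(init_weights.clone())
optimizer = torch.optim.Adam([class_weights], lr=1e-3)

def train_step(images, labels):
    """One fine-tuning step of the class embeddings on the (possibly biased) dataset."""
    with torch.no_grad():
        feats = F.normalize(model.encode_image(images), dim=-1)
    logits = 100.0 * feats @ F.normalize(class_weights, dim=-1).T
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# ... iterate train_step over the dataset's preprocessed images and labels ...

# 2) The drift of each class embedding away from its initialisation points towards
#    whatever the classifier latched onto; rank candidate concepts along that direction.
with torch.no_grad():
    drift = F.normalize(F.normalize(class_weights, dim=-1) - init_weights, dim=-1)
    scores = drift @ text_embed(candidate_concepts).T     # (num_classes, num_concepts)
    for c, name in enumerate(class_names):
        ranked = sorted(zip(candidate_concepts, scores[c].tolist()), key=lambda t: -t[1])
        print(name, "-> candidate biases:", ranked[:3])
```

In this sketch the printout is only a ranked concept list; validating the top-drifting concepts as actual biases and using them to improve downstream robustness, as the abstract describes, would require additional steps.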
Related papers
- DISCO: DISCovering Overfittings as Causal Rules for Text Classification Models [6.369258625916601]
Post-hoc interpretability methods fail to fully capture a model's decision-making process.
Our paper introduces DISCO, a novel method for discovering global, rule-based explanations.
DISCO supports interactive explanations, enabling human inspectors to distinguish spurious causes in the rule-based output.
arXiv Detail & Related papers (2024-11-07T12:12:44Z) - Spuriousness-Aware Meta-Learning for Learning Robust Classifiers [26.544938760265136]
Spurious correlations are brittle associations between certain attributes of inputs and target variables.
Deep image classifiers often leverage them for predictions, leading to poor generalization on the data where the correlations do not hold.
Mitigating the impact of spurious correlations is crucial for robust model generalization, but it often requires annotations of the spurious correlations in the data.
arXiv Detail & Related papers (2024-06-15T21:41:25Z) - Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations (a minimal reweighting sketch appears after this list).
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z) - A Closer Look at Few-shot Classification Again [68.44963578735877]
Few-shot classification consists of a training phase and an adaptation phase.
We empirically prove that the training algorithm and the adaptation algorithm can be completely disentangled.
Our meta-analysis for each phase reveals several interesting insights that may help better understand key aspects of few-shot classification.
arXiv Detail & Related papers (2023-01-28T16:42:05Z) - Influence Tuning: Demoting Spurious Correlations via Instance Attribution and Instance-Driven Updates [26.527311287924995]
We show that, in a controlled setup, influence tuning can help deconfound the model from spurious patterns in the data.
arXiv Detail & Related papers (2021-10-07T06:59:46Z) - Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles [66.15398165275926]
We propose a method that can automatically detect and ignore dataset-specific patterns, which we call dataset biases.
Our method trains a lower capacity model in an ensemble with a higher capacity model.
We show improvement in all settings, including a 10 point gain on the visual question answering dataset.
arXiv Detail & Related papers (2020-11-07T22:20:03Z) - Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve the few-shot (or low-shot) visual reasoning problem by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z) - Learning Causal Models Online [103.87959747047158]
Predictive models can rely on spurious correlations in the data for making predictions.
One solution for achieving strong generalization is to incorporate causal structures in the models.
We propose an online algorithm that continually detects and removes spurious features.
arXiv Detail & Related papers (2020-06-12T20:49:20Z) - Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels (a.k.a. counterfactual or contrasting examples), which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
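As a side illustration of the data-reweighting idea summarized for "Stubborn Lexical Bias" above, the sketch below reweights examples so that a single binary lexical feature becomes statistically independent of a binary label under the weighted distribution. The closed-form weights w(y, z) = p(y)p(z)/p(y, z), the helper name, and the toy data are assumptions made here; the paper's actual optimization over thousands of correlated features is more involved.

```python
import numpy as np

def decorrelating_weights(y, z):
    """Per-example weights w(y, z) = p(y) * p(z) / p(y, z), which make the binary
    feature z independent of the binary label y under the reweighted distribution."""
    y, z = np.asarray(y), np.asarray(z)
    w = np.zeros(len(y), dtype=float)
    for yv in (0, 1):
        for zv in (0, 1):
            mask = (y == yv) & (z == zv)
            p_joint = mask.mean()
            if p_joint > 0:
                w[mask] = (y == yv).mean() * (z == zv).mean() / p_joint
    return w

# Toy data: a "token" z that agrees with the label y 90% of the time.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)
z = np.where(rng.random(10_000) < 0.9, y, 1 - y)

w = decorrelating_weights(y, z)
corr_before = np.corrcoef(y, z)[0, 1]
# Weighted covariance between y and z should be ~0 after reweighting.
y_bar, z_bar = np.average(y, weights=w), np.average(z, weights=w)
weighted_cov = np.average((y - y_bar) * (z - z_bar), weights=w)
print(f"correlation before: {corr_before:.2f}, weighted covariance after: {weighted_cov:.4f}")
```

Note that, as the paper's summary above cautions, removing a bias from the training data in this way does not guarantee the trained model itself becomes unbiased.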