Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias
- URL: http://arxiv.org/abs/2403.07857v1
- Date: Tue, 12 Mar 2024 17:48:08 GMT
- Title: Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias
- Authors: Sierra Wyllie, Ilia Shumailov, Nicolas Papernot
- Abstract summary: Model-induced distribution shifts (MIDS) occur as previous model outputs pollute new model training sets over generations of models.
We introduce a framework that allows us to track multiple MIDS over many generations, finding that they can lead to loss in performance, fairness, and minoritized group representation.
Despite these negative consequences, we identify how models might be used for positive, intentional interventions in their data ecosystems.
- Score: 47.79659355705916
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Model-induced distribution shifts (MIDS) occur as previous model outputs
pollute new model training sets over generations of models. This is known as
model collapse in the case of generative models, and performative prediction or
unfairness feedback loops for supervised models. When a model induces a
distribution shift, it also encodes its mistakes, biases, and unfairnesses into
the ground truth of its data ecosystem. We introduce a framework that allows us
to track multiple MIDS over many generations, finding that they can lead to
loss in performance, fairness, and minoritized group representation, even in
initially unbiased datasets. Despite these negative consequences, we identify
how models might be used for positive, intentional interventions in their data
ecosystems, providing redress for historical discrimination through a framework
called algorithmic reparation (AR). We simulate AR interventions by curating
representative training batches for stochastic gradient descent to demonstrate
how AR can improve upon the unfairnesses of models and data ecosystems subject
to other MIDS. Our work takes an important step towards identifying,
mitigating, and taking accountability for the unfair feedback loops enabled by
the idea that ML systems are inherently neutral and objective.
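The AR intervention described in the abstract, curating representative training batches for stochastic gradient descent, might look roughly like the following sketch. The group labels, target proportions, and data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def curate_batch(X, groups, target_props, batch_size, rng):
    """Sample a batch whose group composition matches target proportions.

    X: (n, d) feature array; groups: (n,) integer group labels;
    target_props: dict mapping group label -> desired fraction of the batch.
    """
    idx = []
    for g, p in target_props.items():
        pool = np.flatnonzero(groups == g)
        k = max(1, round(p * batch_size))
        # Sample with replacement only if the group is too small.
        idx.extend(rng.choice(pool, size=k, replace=len(pool) < k))
    idx = np.array(idx)[:batch_size]
    return X[idx], groups[idx]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
# Imbalanced data: roughly 90% group 0, 10% group 1.
groups = (rng.random(1000) < 0.1).astype(int)
# Curate batches with equal group representation instead.
bx, bg = curate_batch(X, groups, {0: 0.5, 1: 0.5}, batch_size=64, rng=rng)
```

Feeding such curated batches to SGD, rather than batches drawn i.i.d. from the (possibly model-polluted) pool, is one way to make a target distribution, rather than the observed one, act as the effective training distribution.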
Related papers
- Model Integrity when Unlearning with T2I Diffusion Models [11.321968363411145]
We propose approximate Machine Unlearning algorithms to reduce the generation of specific types of images, characterized by samples from a "forget distribution".
We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines.
arXiv Detail & Related papers (2024-11-04T13:15:28Z)
- Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z)
- Improving Fairness and Mitigating MADness in Generative Models [21.024727486615646]
We show that training generative models with intentionally designed hypernetworks leads to models that are more fair when generating datapoints belonging to minority classes.
We introduce a regularization term that penalizes discrepancies between a generative model's estimated weights when trained on real data versus its own synthetic data.
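A regularizer of the kind this summary describes, penalizing the gap between weights learned on real data and weights learned on the model's own synthetic data, could be sketched as below; the function name, coefficient, and toy weights are assumptions for illustration.

```python
import numpy as np

def weight_discrepancy_penalty(weights, reference_weights, lam=0.1):
    """L2 penalty pulling the current weights (trained on synthetic data)
    toward a reference snapshot (trained on real data)."""
    return lam * sum(np.sum((w - r) ** 2)
                     for w, r in zip(weights, reference_weights))

# Toy example: two parameter arrays per "model".
ref = [np.ones((3, 3)), np.zeros(3)]
cur = [np.ones((3, 3)) * 1.2, np.zeros(3)]
penalty = weight_discrepancy_penalty(cur, ref, lam=0.5)
```

Adding such a term to the training loss discourages the self-consuming model from drifting away from the solution supported by real data.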
arXiv Detail & Related papers (2024-05-22T20:24:41Z)
- Root Causing Prediction Anomalies Using Explainable AI [3.970146574042422]
We present a novel application of explainable AI (XAI) for root-causing performance degradation in machine learning models.
A single feature corruption can cause cascading feature, label and concept drifts.
We have successfully applied this technique to improve the reliability of models used in personalized advertising.
arXiv Detail & Related papers (2024-03-04T19:38:50Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- A Probabilistic Fluctuation based Membership Inference Attack for Diffusion Models [32.15773300068426]
Membership Inference Attack (MIA) identifies whether a record exists in a machine learning model's training set by querying the model.
We propose a Probabilistic Fluctuation Assessing Membership Inference Attack (PFAMI).
PFAMI can improve the attack success rate (ASR) by about 27.9% when compared with the best baseline.
arXiv Detail & Related papers (2023-08-23T14:00:58Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
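The reweighing strategy is not specified in this summary; one common form (Kamiran-Calders-style reweighing, assumed here as an illustration) scales each (group, label) cell so that group membership and label appear statistically independent:

```python
import numpy as np

def reweigh(groups, labels):
    """Assign each sample the weight P(g)*P(y) / P(g, y) for its
    (group, label) cell, making groups and labels look independent."""
    n = len(groups)
    w = np.empty(n)
    for g in np.unique(groups):
        for y in np.unique(labels):
            mask = (groups == g) & (labels == y)
            if mask.any():
                expected = (groups == g).mean() * (labels == y).mean()
                observed = mask.mean()
                w[mask] = expected / observed
    return w

groups = np.array([0, 0, 0, 1, 1, 1])
labels = np.array([1, 1, 0, 1, 0, 0])
weights = reweigh(groups, labels)
```

Over-represented (group, label) cells receive weights below 1 and under-represented cells weights above 1, which counteracts the drift between the model and minority populations without altering the data or the learning algorithm itself.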
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Bias-inducing geometries: an exactly solvable data model with fairness implications [13.690313475721094]
We introduce an exactly solvable high-dimensional model of data imbalance.
We analytically unpack the typical properties of learning models trained in this synthetic framework.
We obtain exact predictions for the observables that are commonly employed for fairness assessment.
arXiv Detail & Related papers (2022-05-31T16:27:57Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions using sensitive attributes from a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
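As an illustration of the second stage only, minimizing a loss over reweighted samples, here is a weighted log-loss where the per-sample weights are assumed to be precomputed (the influence-function computation itself is not shown and the names are assumptions):

```python
import numpy as np

def reweighted_log_loss(probs, labels, sample_weights):
    """Stage-two objective: per-sample binary log loss averaged with
    precomputed fairness weights."""
    eps = 1e-12  # guard against log(0)
    ll = -(labels * np.log(probs + eps)
           + (1 - labels) * np.log(1 - probs + eps))
    return np.average(ll, weights=sample_weights)

probs = np.array([0.9, 0.2, 0.7])
labels = np.array([1, 0, 1])
# Upweight the third sample (e.g. one from an under-served group).
loss = reweighted_log_loss(probs, labels, np.array([1.0, 1.0, 2.0]))
```

Because only the sample weights change, this stage needs no modification to the model architecture or to the unweighted training pipeline.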
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.