Fairness Mediator: Neutralize Stereotype Associations to Mitigate Bias in Large Language Models
- URL: http://arxiv.org/abs/2504.07787v1
- Date: Thu, 10 Apr 2025 14:23:06 GMT
- Title: Fairness Mediator: Neutralize Stereotype Associations to Mitigate Bias in Large Language Models
- Authors: Yisong Xiao, Aishan Liu, Siyuan Liang, Xianglong Liu, Dacheng Tao
- Abstract summary: LLMs inadvertently absorb spurious correlations from training data, leading to stereotype associations between biased concepts and specific social groups. We propose Fairness Mediator (FairMed), a bias mitigation framework that neutralizes stereotype associations. Our framework comprises two main components: a stereotype association prober and an adversarial debiasing neutralizer.
- Score: 66.5536396328527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LLMs have demonstrated remarkable performance across diverse applications, yet they inadvertently absorb spurious correlations from training data, leading to stereotype associations between biased concepts and specific social groups. These associations perpetuate and even amplify harmful social biases, raising significant fairness concerns. To mitigate such biases, prior studies have attempted to project model embeddings into unbiased spaces during inference. However, these approaches have shown limited effectiveness due to their weak alignment with downstream social biases. Inspired by the observation that concept cognition in LLMs is primarily represented through a linear associative memory mechanism, where key-value mapping occurs in the MLP layers, we posited that biased concepts and social groups are similarly encoded as entity (key) and information (value) pairs, which can be manipulated to promote fairer associations. To this end, we propose Fairness Mediator (FairMed), a bias mitigation framework that neutralizes stereotype associations. Our framework comprises two main components: a stereotype association prober and an adversarial debiasing neutralizer. The prober captures stereotype associations encoded within MLP layer activations by employing prompts centered around biased concepts to detect the emission probabilities for social groups. Subsequently, the adversarial debiasing neutralizer intervenes in MLP activations during inference to equalize the association probabilities among different social groups. Extensive experiments across nine protected attributes show that FairMed significantly outperforms SOTA methods in effectiveness. Compared to the most effective baseline, FairMed presents competitive efficiency by cutting mitigation overhead by hundreds of minutes. FairMed also maintains the LLM's language understanding capabilities without compromising overall performance.
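The abstract describes the mechanism only at a high level; the sketch below illustrates how the two components could look on a small HuggingFace causal LM, assuming GPT-2's layer layout. The prompt, the social-group tokens, and the simple damping rule standing in for the adversarially learned neutralizer are illustrative assumptions, not the authors' implementation.
```python
# Rough sketch of the two components named in the abstract: a forward hook that
# records MLP activations while we read off next-token probabilities for
# social-group words (the "prober"), and a second hook that damps the MLP
# output at inference time (a crude stand-in for the neutralizer).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "The person who is good at math is"   # prompt centered on a biased concept (assumed)
groups = [" he", " she"]                       # social-group continuations (assumed)
group_ids = [tok.encode(g)[0] for g in groups]

captured = {}

def probe_hook(module, inputs, output):
    # Stereotype association prober: record the MLP activations for this prompt.
    captured["mlp"] = output.detach()
    return output

def neutralize_hook(module, inputs, output):
    # Neutralizer stand-in: uniformly damp the MLP contribution so whatever
    # association it adds between the concept and any one group is weakened.
    return 0.5 * output

def group_probs():
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        next_token_logits = model(ids).logits[0, -1]
    p = torch.softmax(next_token_logits, dim=-1)
    return {g: round(p[i].item(), 4) for g, i in zip(groups, group_ids)}

mlp = model.transformer.h[-1].mlp              # last block's MLP (GPT-2 layout)
handle = mlp.register_forward_hook(probe_hook)
print("association probabilities before:", group_probs())
handle.remove()

handle = mlp.register_forward_hook(neutralize_hook)
print("association probabilities after: ", group_probs())
handle.remove()
```
In the paper itself, the intervention is learned adversarially so that the probed association probabilities are equalized across social groups, rather than being a fixed scaling as in this toy example.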
Related papers
- A Multi-LLM Debiasing Framework [85.17156744155915]
Large Language Models (LLMs) are powerful tools with the potential to benefit society immensely, yet they have demonstrated biases that perpetuate societal inequalities.
Recent research has shown a growing interest in multi-LLM approaches, which have been demonstrated to be effective in improving the quality of reasoning.
We propose a novel multi-LLM debiasing framework aimed at reducing bias in LLMs.
arXiv Detail & Related papers (2024-09-20T20:24:50Z)
- Identifying and Mitigating Social Bias Knowledge in Language Models [52.52955281662332]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- A Group Fairness Lens for Large Language Models [34.0579082699443]
Large language models can perpetuate biases and unfairness when deployed in social media contexts.
We propose evaluating LLM biases from a group fairness lens using a novel hierarchical schema characterizing diverse social groups.
We pioneer a novel chain-of-thought method GF-Think to mitigate biases of LLMs from a group fairness perspective.
arXiv Detail & Related papers (2023-12-24T13:25:15Z)
- Confronting LLMs with Traditional ML: Rethinking the Fairness of Large Language Models in Tabular Classifications [23.963586791210414]
We show that large language models (LLMs) tend to inherit social biases from their training data which significantly impact their fairness in classification tasks.
This observation emphasizes that the social biases are inherent within the LLMs themselves and inherited from their pretraining corpus.
arXiv Detail & Related papers (2023-10-23T06:31:28Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we analyze, for the first time, the unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and effectively works in incomplete supervised settings.
arXiv Detail & Related papers (2022-03-30T11:16:18Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fairness-aware Class Imbalanced Learning [57.45784950421179]
We evaluate long-tail learning methods for tweet sentiment and occupation classification.
We extend a margin-loss based approach with methods to enforce fairness.
arXiv Detail & Related papers (2021-09-21T22:16:30Z)