Social Norm Bias: Residual Harms of Fairness-Aware Algorithms
- URL: http://arxiv.org/abs/2108.11056v1
- Date: Wed, 25 Aug 2021 05:54:56 GMT
- Title: Social Norm Bias: Residual Harms of Fairness-Aware Algorithms
- Authors: Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai
- Abstract summary: Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many modern learning algorithms mitigate bias by enforcing fairness across
coarsely-defined groups related to a sensitive attribute like gender or race.
However, the same algorithms seldom account for the within-group biases that
arise due to the heterogeneity of group members. In this work, we characterize
Social Norm Bias (SNoB), a subtle but consequential type of discrimination that
may be exhibited by automated decision-making systems, even when these systems
achieve group fairness objectives. We study this issue through the lens of
gender bias in occupation classification from biographies. We quantify SNoB by
measuring how an algorithm's predictions are associated with conformity to
gender norms, which is measured using a machine learning approach. This
framework reveals that for classification tasks related to male-dominated
occupations, fairness-aware classifiers favor biographies written in ways that
align with masculine gender norms. We compare SNoB across fairness intervention
techniques and show that post-processing interventions do not mitigate this
type of bias at all.
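The quantification described in the abstract can be sketched as a rank correlation between a classifier's occupation scores and a per-biography norm-conformity score. The function names and input format below are hypothetical, and the conformity score itself would come from a separately trained model, so this is only a minimal illustration of the association measure, not the paper's pipeline:

```python
def rank(values):
    # Simple ranks with ties broken by input order; adequate for a sketch.
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = float(r)
    return ranks

def snob_correlation(pred_scores, norm_conformity):
    """Spearman-style rank correlation between classifier scores and a
    (hypothetical) masculine-norm-conformity score per biography.
    A positive value for a male-dominated occupation suggests the
    classifier favors biographies aligned with masculine norms."""
    assert len(pred_scores) == len(norm_conformity)
    pr, nr = rank(pred_scores), rank(norm_conformity)
    n = len(pr)
    mean = (n - 1) / 2.0
    num = sum((a - mean) * (b - mean) for a, b in zip(pr, nr))
    # Without tied ranks, both series have the same centered sum of squares.
    den = sum((a - mean) ** 2 for a in pr)
    return num / den
```

A perfectly norm-aligned ranking yields 1.0 and a perfectly anti-aligned one yields -1.0, giving a single per-occupation summary that can be compared across fairness interventions.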
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z) - Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
We propose a bias mitigation strategy to improve classification accuracy and reduce bias across gender-racial groups.
We leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias.
arXiv Detail & Related papers (2022-08-17T16:23:35Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z) - Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
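The joint class/group balancing idea behind Fair Oversampling can be illustrated, under assumptions, as resampling every (class label, protected group) cell up to the size of the largest observed cell. The tuple format is hypothetical and plain duplication stands in for whatever sampling scheme the paper actually uses:

```python
import random
from collections import defaultdict

def fair_oversample(examples, seed=0):
    """Schematic oversampling that balances class and protected group
    jointly: every observed (label, group) cell is upsampled by random
    duplication to the size of the largest cell.
    `examples` is a list of (features, label, group) tuples (hypothetical format)."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for ex in examples:
        _, label, group = ex
        cells[(label, group)].append(ex)
    target = max(len(cell) for cell in cells.values())
    out = []
    for cell in cells.values():
        out.extend(cell)
        # Draw with replacement to fill the cell up to the target size.
        out.extend(rng.choices(cell, k=target - len(cell)))
    return out
```

Balancing the cells rather than the marginal class or group counts is what addresses both kinds of skew at once.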
arXiv Detail & Related papers (2022-07-13T09:48:52Z) - Fair Group-Shared Representations with Normalizing Flows
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Anatomizing Bias in Facial Analysis
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Characterizing Intersectional Group Fairness with Worst-Case Comparisons
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
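The worst-case comparison suggested above can be sketched by computing a standard group metric for every intersectional subgroup and reporting the spread between the best- and worst-off subgroup. True-positive rate is a hypothetical choice of metric here; the method applies to any group fairness metric:

```python
from collections import defaultdict

def subgroup_tpr(y_true, y_pred, subgroups):
    """True-positive rate per intersectional subgroup,
    e.g. keys like ('female', 'black') for gender x race."""
    pos = defaultdict(int)
    tp = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, subgroups):
        if t == 1:
            pos[g] += 1
            tp[g] += int(p == 1)
    return {g: tp[g] / pos[g] for g in pos}

def worst_case_gap(y_true, y_pred, subgroups):
    """Expand a group metric to intersections by reporting the gap
    between the best- and worst-off intersectional subgroup."""
    rates = subgroup_tpr(y_true, y_pred, subgroups)
    return max(rates.values()) - min(rates.values())
```

A gap near zero means even the worst-off intersection fares about as well as the best-off one, which is a strictly stronger condition than parity across each coarse group separately.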
arXiv Detail & Related papers (2021-01-05T17:44:33Z) - One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification
One-vs.-One Mitigation applies fairness-aware machine learning for binary classification to each pair of subgroups defined by the sensitive attributes.
Our method mitigates intersectional bias better than conventional methods in all the settings.
arXiv Detail & Related papers (2020-10-26T11:35:39Z) - Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first measure specifically tailored to Information Retrieval that is capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.