An Empirical Analysis of Racial Categories in the Algorithmic Fairness
Literature
- URL: http://arxiv.org/abs/2309.06607v1
- Date: Tue, 12 Sep 2023 21:23:29 GMT
- Title: An Empirical Analysis of Racial Categories in the Algorithmic Fairness
Literature
- Authors: Amina A. Abdu, Irene V. Pasquetto, Abigail Z. Jacobs
- Abstract summary: We analyze how race is conceptualized and formalized in algorithmic fairness frameworks.
We find that differing notions of race are adopted inconsistently, at times even within a single analysis.
We argue that the construction of racial categories is a value-laden process with significant social and political consequences.
- Score: 2.2713084727838115
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent work in algorithmic fairness has highlighted the challenge of defining
racial categories for the purposes of anti-discrimination. These challenges are
not new but have previously fallen to the state, which enacts race through
government statistics, policies, and evidentiary standards in
anti-discrimination law. Drawing on the history of state race-making, we
examine how longstanding questions about the nature of race and discrimination
appear within the algorithmic fairness literature. Through a content analysis
of 60 papers published at FAccT between 2018 and 2020, we analyze how race is
conceptualized and formalized in algorithmic fairness frameworks. We note that
differing notions of race are adopted inconsistently, at times even within a
single analysis. We also explore the institutional influences and values
associated with these choices. While we find that categories used in
algorithmic fairness work often echo legal frameworks, we demonstrate that
values from academic computer science play an equally important role in the
construction of racial categories. Finally, we examine the reasoning behind
different operationalizations of race, finding that few papers explicitly
describe their choices and even fewer justify them. We argue that the
construction of racial categories is a value-laden process with significant
social and political consequences for the project of algorithmic fairness. The
widespread lack of justification around the operationalization of race reflects
institutional norms that allow these political decisions to remain obscured
within the backstage of knowledge production.
Related papers
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z) - Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z) - Racial/Ethnic Categories in AI and Algorithmic Fairness: Why They Matter and What They Represent [0.0]
We show how racial categories with unclear assumptions and little justification can lead to varying datasets that poorly represent groups.
We also develop a framework, CIRCSheets, for documenting the choices and assumptions involved in selecting racial categories and in the process of racialization into these categories.
arXiv Detail & Related papers (2024-04-10T04:04:05Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and
Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - AI & Racial Equity: Understanding Sentiment Analysis Artificial
Intelligence, Data Security, and Systemic Theory in Criminal Justice Systems [0.0]
This work explores how artificial intelligence can either exacerbate or reduce systemic racial injustice.
Through an analysis of historical systemic patterns, implicit biases, existing algorithmic risks, and legal implications, it argues that AI based on natural language processing, such as risk assessment tools, produces racially disparate outcomes.
It concludes that stronger litigation and policy are needed to regulate how government institutions and corporations use algorithms, manage privacy and security risks, and meet auditing requirements, in order to break from the racially unjust outcomes and practices of the past.
arXiv Detail & Related papers (2022-01-03T19:42:08Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness is typically defined with respect to specified protected groups, we emphasize that there are no ground-truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously; a brief sketch after this list illustrates how the measured gap depends on the grouping.
arXiv Detail & Related papers (2021-06-23T06:17:17Z) - One Label, One Billion Faces: Usage and Consistency of Racial Categories
in Computer Vision [75.82110684355979]
We study the racial system encoded by computer vision datasets supplying categorical race labels for face images.
We find that each dataset encodes a substantially unique racial system, despite nominally equivalent racial categories.
We find evidence that racial categories encode stereotypes, and exclude ethnic groups from categories on the basis of nonconformity to stereotypes.
arXiv Detail & Related papers (2021-02-03T22:50:04Z) - Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be examined through the lens of intersectionality.
We suggest a simple worst-case comparison method to expand the definitions of existing group fairness metrics; a second sketch after this list illustrates this style of worst-case subgroup comparison.
We conclude with the social, legal, and political framework for handling intersectional fairness in the modern context.
arXiv Detail & Related papers (2021-01-05T17:44:33Z) - Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action."
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z)
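The claim above that demographic parity depends on how individuals are grouped (from "Fairness for Image Generation with Uncertain Sensitive Attributes") can be made concrete with a small numerical sketch. This is a minimal illustration under assumed definitions, not code from any of the papers listed: it measures demographic parity as the largest gap in positive-prediction rates across groups, and all predictions and group labels are synthetic.

```python
# Minimal sketch (assumed definitions, synthetic data): the same model
# outputs yield different demographic parity gaps under different groupings.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# The same predictions, scored under two hypothetical grouping schemes.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
coarse_groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
fine_groups = ["A1", "A1", "A2", "A2", "B1", "B1", "B2", "B2"]

print(demographic_parity_gap(preds, coarse_groups))  # 0.5 under the coarse grouping
print(demographic_parity_gap(preds, fine_groups))    # 1.0 under the finer grouping
```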
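A second sketch relates to the worst-case comparison idea in "Characterizing Intersectional Group Fairness with Worst-Case Comparisons." The code below is an assumption-laden illustration rather than that paper's method: it evaluates a single group metric (positive-prediction rate) on every race x gender intersection and reports the worst-case gap; all attribute names and data are synthetic.

```python
# Minimal sketch (not the paper's method): extend a group metric to
# intersections by computing it on every non-empty subgroup and taking
# the worst-case gap. All labels and predictions are synthetic.
from itertools import product

def positive_rate(preds):
    return sum(preds) / len(preds)

def worst_case_gap(predictions, race, gender):
    """Largest gap in positive-prediction rate over all non-empty race x gender subgroups."""
    rates = []
    for r, g in product(set(race), set(gender)):
        subgroup = [p for p, pr, pg in zip(predictions, race, gender) if pr == r and pg == g]
        if subgroup:  # skip empty intersections
            rates.append(positive_rate(subgroup))
    return max(rates) - min(rates)

preds = [1, 1, 0, 1, 1, 0, 0, 0]
race = ["A", "A", "B", "B", "A", "A", "B", "B"]
gender = ["F", "M", "F", "M", "F", "M", "F", "M"]
print(worst_case_gap(preds, race, gender))  # 1.0: gap between the best- and worst-off subgroup
```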