Bias, diversity, and challenges to fairness in classification and
automated text analysis. From libraries to AI and back
- URL: http://arxiv.org/abs/2303.07207v1
- Date: Tue, 7 Mar 2023 20:54:49 GMT
- Title: Bias, diversity, and challenges to fairness in classification and
automated text analysis. From libraries to AI and back
- Authors: Bettina Berendt, Özgür Karadeniz, Sercan Kıyak, Stefan Mertens, Leen d'Haenens
- Abstract summary: We investigate the risks surrounding bias and unfairness in AI usage in classification and automated text analysis.
We take a closer look at the notion of '(un)fairness' in relation to the notion of 'diversity'.
- Score: 3.9198548406564604
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Libraries are increasingly relying on computational methods, including
methods from Artificial Intelligence (AI). This increasing usage raises
concerns about the risks of AI that are currently broadly discussed in
scientific literature, the media and law-making. In this article we investigate
the risks surrounding bias and unfairness in AI usage in classification and
automated text analysis within the context of library applications. We describe
examples that show how the library community has been aware of such risks for a
long time, and how it has developed and deployed countermeasures. We take a
closer look at the notion of '(un)fairness' in relation to the notion of
'diversity', and we investigate a formalisation of diversity that models both
inclusion and distribution. We argue that many of the unfairness problems of
automated content analysis can also be regarded through the lens of diversity
and the countermeasures taken to enhance diversity.
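To make the inclusion/distribution distinction concrete, here is a minimal sketch (our illustration, not the paper's actual formalisation): 'inclusion' is read as coverage of the expected categories, and 'distribution' as the evenness of their proportions. The function name, category scheme, and example data below are all hypothetical.

```python
from collections import Counter
from math import log

def diversity_score(labels, categories):
    """Toy diversity measure combining inclusion and distribution.

    labels     -- category assignments observed in a collection
    categories -- the full set of categories that should be representable
    """
    counts = Counter(labels)
    # Inclusion: share of expected categories that appear at all.
    inclusion = sum(1 for c in categories if counts[c] > 0) / len(categories)
    # Distribution: normalized Shannon entropy of the observed proportions
    # (1.0 = perfectly even spread, 0.0 = everything in one category).
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    entropy = -sum(p * log(p) for p in probs if p > 0)
    max_entropy = log(len(categories)) if len(categories) > 1 else 1.0
    distribution = entropy / max_entropy
    return inclusion, distribution

# Example: subject labels that cover only 3 of 4 expected headings.
inc, dist = diversity_score(
    ["history", "history", "science", "art", "history"],
    categories=["history", "science", "art", "poetry"],
)
print(f"inclusion={inc:.2f}, distribution={dist:.2f}")
```

On this reading, a collection can score high on inclusion (every category present) while scoring low on distribution (one category dominates), which is why a formalisation modelling both is more informative than either alone.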
Related papers
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
We validate our approach using both proprietary and public datasets.
Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications.
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- A toolkit of dilemmas: Beyond debiasing and fairness formulas for responsible AI/ML [0.0]
Approaches to fair and ethical AI have recently fallen under the scrutiny of the emerging field of critical data studies.
This paper advocates for a situated reasoning and creative engagement with the dilemmas surrounding responsible algorithmic/data-driven systems.
arXiv Detail & Related papers (2023-03-03T13:58:24Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
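To illustrate the idea of tracing an observed disparity back to causal mechanisms, here is a toy structural model (our illustration, not the paper's framework; all variable names and coefficients are invented): an attribute A affects an outcome Y both directly and through a mediator M, and intervening on M separates the two paths.

```python
import random

random.seed(0)

# Toy structural causal model (all coefficients invented):
#   A -> M (a proxy/mediator), and both A and M -> Y (the decision score).
def mediator(a):
    return 0.8 * a + random.gauss(0, 0.1)

def outcome(a, m):
    return 0.3 * a + 0.5 * m + random.gauss(0, 0.1)

n = 50_000
# Observed disparity: total effect of A on Y through all paths.
total = (sum(outcome(1, mediator(1)) for _ in range(n)) -
         sum(outcome(0, mediator(0)) for _ in range(n))) / n
# Controlled direct effect: intervene to hold the mediator fixed,
# so only the direct A -> Y path remains.
direct = (sum(outcome(1, 0) for _ in range(n)) -
          sum(outcome(0, 0) for _ in range(n))) / n
print(f"total disparity ~= {total:.2f}")          # ~ 0.3 + 0.5*0.8 = 0.7
print(f"direct effect   ~= {direct:.2f}")         # ~ 0.3
print(f"mediated share  ~= {total - direct:.2f}") # ~ 0.4
```

The point of the decomposition is that the same observed gap of 0.7 can warrant different interventions depending on how much of it flows through the mediator versus the direct path.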
- AI & Racial Equity: Understanding Sentiment Analysis Artificial Intelligence, Data Security, and Systemic Theory in Criminal Justice Systems [0.0]
This work explores various ways in which artificial intelligence either exacerbates or reduces systemic racial injustice.
Through an analysis of historical systemic patterns, implicit biases, existing algorithmic risks, and legal implications, it asserts that natural language processing-based AI, such as risk assessment tools, has racially disparate outcomes.
It concludes that more litigative policies are needed to regulate how government institutions and corporations use algorithms, manage privacy and security risks, and meet auditing requirements, in order to break from the racially unjust outcomes and practices of the past.
arXiv Detail & Related papers (2022-01-03T19:42:08Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- No computation without representation: Avoiding data and algorithm biases through diversity [11.12971845021808]
We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
arXiv Detail & Related papers (2020-02-26T23:07:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.