AI & Racial Equity: Understanding Sentiment Analysis Artificial
Intelligence, Data Security, and Systemic Theory in Criminal Justice Systems
- URL: http://arxiv.org/abs/2201.00855v1
- Date: Mon, 3 Jan 2022 19:42:08 GMT
- Title: AI & Racial Equity: Understanding Sentiment Analysis Artificial
Intelligence, Data Security, and Systemic Theory in Criminal Justice Systems
- Authors: Alia Abbas
- Abstract summary: This work explores the ways in which artificial intelligence can either exacerbate or reduce systemic racial injustice.
Through an analysis of historical systemic patterns, implicit biases, existing algorithmic risks, and legal implications, it is asserted that natural language processing based AI, such as risk assessment tools, produces racially disparate outcomes.
It is concluded that stronger litigative policies are needed to regulate and restrict how internal government institutions and corporations use algorithms, manage privacy and security risks, and meet auditing requirements, in order to break from the racially unjust outcomes and practices of the past.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This applied research endeavor explores the ways in which
artificial intelligence can either exacerbate or reduce systemic racial
injustice. The thematic areas of identifying, analyzing, and debating a
systemic issue are each leveraged to investigate the merits and drawbacks of
using algorithms to automate human decision making in racially sensitive
environments. Through an analysis of historical systemic patterns, implicit
biases, existing algorithmic risks, and legal implications, it is asserted
that natural language processing based AI, such as risk assessment tools,
produces racially disparate outcomes. It is concluded that stronger litigative
policies are needed to regulate and restrict how internal government
institutions and corporations use algorithms, manage privacy and security
risks, and meet auditing requirements, in order to break from the racially
unjust outcomes and practices of the past.
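The disparate outcomes and auditing requirements the abstract refers to can be made concrete with a small sketch. The following is a minimal, hypothetical disparate-outcome audit in Python (the group labels, records, and the four-fifths threshold are illustrative assumptions, not data or methods from the paper): it compares selection rates and false positive rates of a risk assessment tool across racial groups.

```python
# A minimal sketch of the kind of disparate-outcome audit the abstract
# calls for. The records and group labels below are hypothetical; a real
# audit would compare model outcomes across racial groups on case records.

from collections import defaultdict

# Hypothetical audit records: (group, predicted_high_risk, reoffended)
records = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", True, True),  ("group_a", False, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True),  ("group_b", False, True),
]

def rates_by_group(records):
    """Compute selection rate and false positive rate per group."""
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "fp": 0, "neg": 0})
    for group, flagged, actual in records:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += flagged
        if not actual:               # actual negatives only, for FPR
            s["neg"] += 1
            s["fp"] += flagged
    return {
        g: {
            "selection_rate": s["flagged"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else float("nan"),
        }
        for g, s in stats.items()
    }

rates = rates_by_group(records)
for group, r in rates.items():
    print(group, r)

# Four-fifths rule of thumb: a selection-rate ratio below 0.8 between
# groups is a common (and contested) flag for disparate impact.
sel = [r["selection_rate"] for r in rates.values()]
print("disparate impact ratio:", min(sel) / max(sel))
```

On these toy records the selection-rate ratio falls below 0.8, the conventional (and contested) four-fifths flag for disparate impact.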
Related papers
- An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin decision-making in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case-studies to prove its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z)
- Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups [0.0]
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI)
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z)
- An Empirical Analysis of Racial Categories in the Algorithmic Fairness Literature [2.2713084727838115]
We analyze how race is conceptualized and formalized in algorithmic fairness frameworks.
We find that differing notions of race are adopted inconsistently, at times even within a single analysis.
We argue that the construction of racial categories is a value-laden process with significant social and political consequences.
arXiv Detail & Related papers (2023-09-12T21:23:29Z)
- Bias, diversity, and challenges to fairness in classification and automated text analysis. From libraries to AI and back [3.9198548406564604]
We investigate the risks surrounding bias and unfairness in AI usage in classification and automated text analysis.
We take a closer look at the notion of '(un)fairness' in relation to the notion of 'diversity'.
arXiv Detail & Related papers (2023-03-07T20:54:49Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Tackling Algorithmic Disability Discrimination in the Hiring Process: An Ethical, Legal and Technical Analysis [2.294014185517203]
We discuss concerns and opportunities raised by AI-driven hiring in relation to disability discrimination.
We establish some starting points and design a roadmap for ethicists, lawmakers, advocates, and AI practitioners alike.
arXiv Detail & Related papers (2022-06-13T13:32:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Fairness Deconstructed: A Sociotechnical View of 'Fair' Algorithms in Criminal Justice [0.0]
Machine learning researchers have developed methods for fairness, many of which rely on equalizing empirical metrics across protected attributes.
I argue that much of fair ML fails to account for fairness issues in the underlying crime data (a minimal sketch of the metric equalization being critiqued appears after this list).
Instead of building AI that reifies power imbalances, I ask whether data science can be used to understand the root causes of structural marginalization.
arXiv Detail & Related papers (2021-06-25T06:52:49Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
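The 'Fairness Deconstructed' entry above critiques fairness methods that equalize empirical metrics across protected attributes. As a minimal, hypothetical sketch of what such equalization looks like (the group names, score distributions, and the 30% target rate are all invented for illustration, not taken from any cited paper), per-group thresholds can be tuned so selection rates match:

```python
# A minimal sketch of "equalizing empirical metrics across protected
# attributes": per-group score thresholds are chosen so that selection
# rates roughly match across groups. All data here is synthetic.

import random

random.seed(0)

# Synthetic risk scores for two hypothetical groups with shifted
# score distributions, as might arise from biased underlying data.
scores = {
    "group_a": [random.gauss(0.45, 0.15) for _ in range(1000)],
    "group_b": [random.gauss(0.55, 0.15) for _ in range(1000)],
}

def threshold_for_rate(group_scores, target_rate):
    """Pick the score cutoff whose selection rate is closest to target."""
    ranked = sorted(group_scores, reverse=True)
    k = round(target_rate * len(ranked))
    return ranked[k - 1] if k > 0 else float("inf")

target = 0.30  # desired selection rate for every group
thresholds = {g: threshold_for_rate(s, target) for g, s in scores.items()}

for group, s in scores.items():
    rate = sum(x >= thresholds[group] for x in s) / len(s)
    print(f"{group}: threshold={thresholds[group]:.3f}, selection rate={rate:.2f}")
```

As that paper argues, matching such surface metrics leaves untouched any bias already baked into the data that produced the scores.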
This list is automatically generated from the titles and abstracts of the papers on this site.