Conservative AI and social inequality: Conceptualizing alternatives to
bias through social theory
- URL: http://arxiv.org/abs/2007.08666v1
- Date: Thu, 16 Jul 2020 21:52:13 GMT
- Title: Conservative AI and social inequality: Conceptualizing alternatives to
bias through social theory
- Authors: Mike Zajko
- Abstract summary: Societal issues can no longer be out of scope for AI and machine learning, given the impact of these systems on human lives.
Conservatism refers to dominant tendencies that reproduce and strengthen the status quo, while radical approaches work to disrupt systemic forms of inequality.
This requires engagement with a growing body of critical AI scholarship that goes beyond biased data to analyze structured ways of perpetuating inequality.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In response to calls for greater interdisciplinary involvement from the
social sciences and humanities in the development, governance, and study of
artificial intelligence systems, this paper presents one sociologist's view on
the problem of algorithmic bias and the reproduction of societal bias.
Discussions of bias in AI cover much of the same conceptual terrain that
sociologists studying inequality have long understood using more specific terms
and theories. Concerns over reproducing societal bias should be informed by an
understanding of the ways that inequality is continually reproduced in society
-- processes that AI systems are either complicit in, or can be designed to
disrupt and counter. The contrast presented here is between conservative and
radical approaches to AI, with conservatism referring to dominant tendencies
that reproduce and strengthen the status quo, while radical approaches work to
disrupt systemic forms of inequality. The limitations of conservative
approaches to class, gender, and racial bias are discussed as specific
examples, along with the social structures and processes that biases in these
areas are linked to. Societal issues can no longer be out of scope for AI and
machine learning, given the impact of these systems on human lives. This
requires engagement with a growing body of critical AI scholarship that goes
beyond biased data to analyze structured ways of perpetuating inequality,
opening up the possibility for radical alternatives.
Related papers
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups [0.0]
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI).
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases as part of a corporate development culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches [0.0]
This survey article assesses and compares critiques of current fairness-enhancing technical interventions into machine learning (ML).
It draws from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies.
The article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
arXiv Detail & Related papers (2022-05-06T14:27:57Z)
- Bias and Discrimination in AI: a cross-disciplinary perspective [5.190307793476366]
We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations.
We survey relevant literature about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions.
arXiv Detail & Related papers (2020-08-11T10:02:04Z)
- Data, Power and Bias in Artificial Intelligence [5.124256074746721]
Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberty.
Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes that may be learned and perpetuated in society.
This paper reviews ongoing work to ensure data justice, fairness and bias mitigation in AI systems from different domains.
arXiv Detail & Related papers (2020-07-28T16:17:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.