Fairness in AI and Its Long-Term Implications on Society
- URL: http://arxiv.org/abs/2304.09826v2
- Date: Wed, 19 Jul 2023 19:30:52 GMT
- Title: Fairness in AI and Its Long-Term Implications on Society
- Authors: Ondrej Bohdal, Timothy Hospedales, Philip H.S. Torr, Fazl Barez
- Abstract summary: We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
- Score: 68.8204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Successful deployment of artificial intelligence (AI) in various settings has
led to numerous positive outcomes for individuals and society. However, AI
systems have also been shown to harm parts of the population due to biased
predictions. AI fairness focuses on mitigating such biases to ensure AI
decision making is not discriminatory towards certain groups. We take a closer
look at AI fairness and analyze how lack of AI fairness can lead to deepening
of biases over time and act as a social stressor. More specifically, we discuss
how biased models can lead to more negative real-world outcomes for certain
groups, which may then become more prevalent by deploying new AI models trained
on increasingly biased data, resulting in a feedback loop. If the issues
persist, they could be reinforced by interactions with other risks and have
severe implications on society in the form of social unrest. We examine current
strategies for improving AI fairness, assess their limitations in terms of
real-world deployment, and explore potential paths forward to ensure we reap
AI's benefits without causing society's collapse.
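To make the feedback loop concrete, here is a minimal simulation sketch. It is not code from the paper: the lending scenario, the linear terms-to-repayment relationship, and every numeric parameter are illustrative assumptions chosen only to make the dynamic visible.
```python
"""Minimal sketch of the bias feedback loop described in the abstract.

Illustrative assumptions only: a model scores two groups from historical
repayment data, a lower score leads to harsher loan terms, and harsher
terms genuinely raise defaults -- so each round's decisions produce more
biased data for the next round.
"""
import numpy as np

rng = np.random.default_rng(0)

N = 5000  # applicants observed per group per round

def train(history):
    """'Model' = per-group repayment rate estimated from past outcomes."""
    return {g: outcomes.mean() for g, outcomes in history.items()}

def deploy(model):
    """One round of decisions feeding back into the data.

    Assumed mechanism driving the loop: repayment probability is
    1.3 * score - 0.2, i.e. worse terms for lower-scored groups
    cause more of the defaults the model then re-learns from."""
    history = {}
    for group, score in model.items():
        repay_prob = np.clip(1.3 * score - 0.2, 0.0, 1.0)
        history[group] = rng.binomial(1, repay_prob, N)
    return history

# Round 0: a modest measurement bias against group B.
history = {"A": rng.binomial(1, 0.65, N), "B": rng.binomial(1, 0.55, N)}

for step in range(8):
    model = train(history)
    print(f"round {step}: score A={model['A']:.3f} "
          f"B={model['B']:.3f} gap={model['A'] - model['B']:+.3f}")
    history = deploy(model)  # biased outputs become the next inputs
```
Under these assumptions, the score gap between the groups compounds by roughly a third each round until group B's estimate collapses toward zero: the "increasingly biased data" dynamic the abstract warns about.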
Related papers
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that traces how human cognitive biases carry over into AI biases, and we detect the relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- Societal Adaptation to Advanced AI [1.2607853680700076]
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse.
We urge a complementary approach: increasing societal adaptation to advanced AI.
We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems.
arXiv Detail & Related papers (2024-05-16T17:52:12Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithmic, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Examining the Differential Risk from High-level Artificial Intelligence and the Question of Control [0.0]
The extent and scope of future AI capabilities remain a key uncertainty.
There are concerns over the extent to which opaque AI decision processes are integrated and subject to oversight.
This study presents a hierarchical complex systems framework to model AI risk and provide a template for alternative futures analysis.
arXiv Detail & Related papers (2022-11-06T15:46:02Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Role of Social Movements, Coalitions, and Workers in Resisting Harmful Artificial Intelligence and Contributing to the Development of Responsible AI [0.0]
Coalitions in all sectors are acting worldwide to resist harmful applications of AI.
There are biased, wrongful, and disturbing assumptions embedded in AI algorithms.
Perhaps one of the greatest contributions of AI will be to make us understand how important human wisdom truly is in life on earth.
arXiv Detail & Related papers (2021-07-11T18:51:29Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making (see the illustrative calibration sketch after this list).
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
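The trust-calibration finding above presumes that the displayed confidence scores are themselves well calibrated. As an illustration only (none of this comes from the listed papers), the sketch below measures calibration with a standard expected calibration error (ECE) on a synthetic, deliberately overconfident model; the data and every number are made-up assumptions.
```python
"""Reliability / ECE sketch for checking whether a model's confidence
scores deserve the trust users place in them.

Illustrative assumption: a synthetic 'model' whose true accuracy lags
its reported confidence, so a calibration gap is visible."""
import numpy as np

rng = np.random.default_rng(1)

def expected_calibration_error(conf, correct, n_bins=10):
    """Bin predictions by confidence; ECE = bin-weighted mean of
    |observed accuracy - mean reported confidence| per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap
    return ece

# Synthetic overconfident model: actual accuracy trails reported confidence.
n = 20000
conf = rng.uniform(0.5, 1.0, n)              # confidence shown to the user
true_acc = 0.5 + 0.7 * (conf - 0.5)          # true accuracy is lower
correct = rng.binomial(1, true_acc, n).astype(float)

print(f"mean confidence: {conf.mean():.3f}")
print(f"mean accuracy:   {correct.mean():.3f}")
print(f"ECE:             {expected_calibration_error(conf, correct):.3f}")
```
A large ECE here means the displayed scores invite users to over-trust the model, which is one way the confidence displays studied above can miscalibrate rather than calibrate trust.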
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.