Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
- URL: http://arxiv.org/abs/2101.02032v3
- Date: Thu, 18 Mar 2021 20:12:58 GMT
- Title: Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
- Authors: Lu Cheng, Kush R. Varshney, Huan Liu
- Abstract summary: Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
- Score: 31.382000425295885
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the current era, people and society have grown increasingly reliant on
artificial intelligence (AI) technologies. AI has the potential to drive us
towards a future in which all of humanity flourishes. It also comes with
substantial risks for oppression and calamity. Discussions about whether we
should (re)trust AI have repeatedly emerged in recent years and in many
quarters, including industry, academia, health care, services, and so on.
Technologists and AI researchers have a responsibility to develop trustworthy
AI systems. They have responded with great effort to design more responsible AI
algorithms. However, existing technical solutions are narrow in scope and have
been primarily directed towards algorithms for scoring or classification tasks,
with an emphasis on fairness and unwanted bias. To build long-lasting trust
between AI and human beings, we argue that the key is to think beyond
algorithmic fairness and connect major aspects of AI that potentially cause
AI's indifferent behavior. In this survey, we provide a systematic framework of
Socially Responsible AI Algorithms that aims to examine the subjects of AI
indifference and the need for socially responsible AI algorithms, define the
objectives, and introduce the means by which we may achieve these objectives.
We further discuss how to leverage this framework to improve societal
well-being through protection, information, and prevention/mitigation.
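As a concrete illustration of the group-fairness notions for classification tasks that the abstract refers to, below is a minimal, hypothetical Python sketch (not taken from the paper): it computes a demographic-parity gap and a disparate-impact ratio over a binary classifier's predictions. The function names and the synthetic data are assumptions for illustration only.
```python
# Illustrative sketch only -- not the paper's method.
# Two common group-fairness checks for a binary classifier.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the smaller positive rate to the larger (1.0 = parity)."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi if hi > 0 else 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)        # hypothetical protected attribute
    y_pred = rng.binomial(1, 0.5 + 0.1 * group)  # classifier slightly favoring group 1
    print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
    print(f"disparate impact ratio: {disparate_impact_ratio(y_pred, group):.3f}")
```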
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Rolling in the deep of cognitive and AI biases [1.556153237434314]
We argue that there is urgent need to understand AI as a sociotechnical system, inseparable from the conditions in which it is designed, developed and deployed.
We address this critical issue by following a radical new methodology under which human cognitive biases become core entities in our AI fairness overview.
We introduce a new mapping that traces human cognitive biases to AI biases, and we detect relevant fairness intensities and inter-dependencies.
arXiv Detail & Related papers (2024-07-30T21:34:04Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems, and that preserving humans' long-term agency may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Positive AI: Key Challenges in Designing Artificial Intelligence for Wellbeing [0.5461938536945723]
Many people are increasingly worried about AI's impact on their lives.
To ensure AI progresses beneficially, some researchers have proposed "wellbeing" as a key objective to govern AI.
This article addresses key challenges in designing AI for wellbeing.
arXiv Detail & Related papers (2023-04-12T12:43:00Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Rebuilding Trust: Queer in AI Approach to Artificial Intelligence Risk Management [0.0]
Trustworthy AI has become an important topic because trust in AI systems and their creators has been lost, or was never present in the first place.
We argue that any AI development, deployment, and monitoring framework that aspires to trust must incorporate feminist, non-exploitative design principles.
arXiv Detail & Related papers (2021-09-21T21:22:58Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.