The Fallacy of AI Functionality
- URL: http://arxiv.org/abs/2206.09511v1
- Date: Mon, 20 Jun 2022 00:11:48 GMT
- Title: The Fallacy of AI Functionality
- Authors: Inioluwa Deborah Raji, I. Elizabeth Kumar, Aaron Horowitz, Andrew D. Selbst
- Abstract summary: We analyze a set of case studies to create a taxonomy of known AI functionality issues.
We point to policy and organizational responses that are often overlooked and become more readily available once functionality is drawn into focus.
We argue that functionality is a meaningful AI policy challenge, operating as a necessary first step towards protecting affected communities from algorithmic harm.
- Score: 3.6048794343841766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deployed AI systems often do not work. They can be constructed haphazardly,
deployed indiscriminately, and promoted deceptively. However, despite this
reality, scholars, the press, and policymakers pay too little attention to
functionality. This leads to technical and policy solutions focused on
"ethical" or value-aligned deployments, often skipping over the prior question
of whether a given system functions, or provides any benefits at all. To
describe the harms of various types of functionality failures, we analyze a set
of case studies to create a taxonomy of known AI functionality issues. We then
point to policy and organizational responses that are often overlooked and
become more readily available once functionality is drawn into focus. We argue
that functionality is a meaningful AI policy challenge, operating as a
necessary first step towards protecting affected communities from algorithmic
harm.
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- General Purpose Artificial Intelligence Systems (GPAIS): Properties, Definition, Taxonomy, Societal Implications and Responsible Governance [16.030931070783637]
The term General-Purpose Artificial Intelligence Systems (GPAIS) has been defined to refer to such AI systems.
To date, an Artificial General Intelligence powerful enough to perform any intellectual task as a human would, or even to improve upon it, has remained an aspiration and a fiction, and is considered a risk to our society.
This work discusses existing definitions for GPAIS and proposes a new definition that allows for a gradual differentiation among types of GPAIS according to their properties and limitations.
arXiv Detail & Related papers (2023-07-26T16:35:48Z)
- Residual Q-Learning: Offline and Online Policy Customization without Value [53.47311900133564]
Imitation Learning (IL) is a widely used framework for learning imitative behavior from demonstrations.
We formulate a new problem setting called policy customization.
We propose a novel framework, Residual Q-learning, which can solve the formulated MDP by leveraging the prior policy.
arXiv Detail & Related papers (2023-06-15T22:01:19Z)
- Operationalising Responsible AI Using a Pattern-Oriented Approach: A Case Study on Chatbots in Financial Services [11.33499498841489]
Responsible AI is the practice of developing and using AI systems in a way that benefits humans, society, and the environment.
Various responsible AI principles have been released recently, but those principles are abstract and difficult to put into practice.
To bridge the gap, we adopt a pattern-oriented approach and build a responsible AI pattern catalogue.
arXiv Detail & Related papers (2023-01-03T23:11:03Z)
- Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z)
- On Avoiding Power-Seeking by Artificial Intelligence [93.9264437334683]
We do not know how to align a very intelligent AI agent's behavior with human interests.
I investigate whether we can build smart AI agents which have limited impact on the world, and which do not autonomously seek power.
arXiv Detail & Related papers (2022-06-23T16:56:21Z)
- Creative Problem Solving in Artificially Intelligent Agents: A Survey and Framework [20.51422185398759]
Creative Problem Solving (CPS) is a sub-area within Artificial Intelligence (AI).
We present a definition and a framework of CPS, which we adopt to categorize existing AI methods in this field.
Our framework consists of four main components of a CPS problem, namely, problem formulation, knowledge representation, method of knowledge manipulation, and method of evaluation.
arXiv Detail & Related papers (2022-04-21T18:31:44Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Learning to Be Cautious [71.9871661858886]
A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations.
We present a sequence of tasks where cautious behavior becomes increasingly non-obvious, as well as an algorithm to demonstrate that it is possible for a system to *learn* to be cautious.
arXiv Detail & Related papers (2021-10-29T16:52:45Z)
- Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features like race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.