Harms from Increasingly Agentic Algorithmic Systems
- URL: http://arxiv.org/abs/2302.10329v2
- Date: Fri, 12 May 2023 02:49:22 GMT
- Title: Harms from Increasingly Agentic Algorithmic Systems
- Authors: Alan Chan, Rebecca Salganik, Alva Markelius, Chris Pang, Nitarshan
Rajkumar, Dmitrii Krasheninnikov, Lauro Langosco, Zhonghao He, Yawen Duan,
Micah Carroll, Michelle Lin, Alex Mayhew, Katherine Collins, Maryam
Molamohammadi, John Burden, Wanru Zhao, Shalaleh Rismani, Konstantinos
Voudouris, Umang Bhatt, Adrian Weller, David Krueger, Tegan Maharaj
- Abstract summary: Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm.
Despite ongoing harms, new systems are being developed and deployed which threaten the perpetuation of the same harms.
- Score: 21.613581713046464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research in Fairness, Accountability, Transparency, and Ethics (FATE) has
established many sources and forms of algorithmic harm, in domains as diverse
as health care, finance, policing, and recommendations. Much work remains to be
done to mitigate the serious harms of these systems, particularly those
disproportionately affecting marginalized communities. Despite these ongoing
harms, new systems are being developed and deployed which threaten the
perpetuation of the same harms and the creation of novel ones. In response, the
FATE community has emphasized the importance of anticipating harms. Our work
focuses on the anticipation of harms from increasingly agentic systems. Rather
than providing a definition of agency as a binary property, we identify 4 key
characteristics which, particularly in combination, tend to increase the agency
of a given algorithmic system: underspecification, directness of impact,
goal-directedness, and long-term planning. We also discuss important harms
which arise from increasing agency -- notably, these include systemic and/or
long-range impacts, often on marginalized stakeholders. We emphasize that
recognizing agency of algorithmic systems does not absolve or shift the human
responsibility for algorithmic harms. Rather, we use the term agency to
highlight the increasingly evident fact that ML systems are not fully under
human control. Our work explores increasingly agentic algorithmic systems in
three parts. First, we explain the notion of an increase in agency for
algorithmic systems in the context of diverse perspectives on agency across
disciplines. Second, we argue for the need to anticipate harms from
increasingly agentic systems. Third, we discuss important harms from
increasingly agentic systems and ways forward for addressing them. We conclude
by reflecting on implications of our work for anticipating algorithmic harms
from emerging systems.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose the Agent Foundation Model, a novel large action model for achieving embodied intelligent behavior.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers [3.4568218861862556]
This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems.
We have drawn on wide-ranging literature detailing risks to workers, and categorised risks as being to worker value, power, and wellbeing.
Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
arXiv Detail & Related papers (2023-12-08T17:05:40Z)
- Fair Enough? A map of the current limitations of the requirements to have fair algorithms [43.609606707879365]
We argue that there is a hiatus between what society is demanding from Automated Decision-Making systems and what this demand actually means in real-world scenarios.
We outline the key features of this hiatus and pinpoint a set of crucial open points that we as a society must address in order to give concrete meaning to the increasing demand for fairness in Automated Decision-Making systems.
arXiv Detail & Related papers (2023-11-21T08:44:38Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy [1.6114012813668934]
This work examines algorithmic bias in consumer mobile health technologies (mHealth).
We explore to what extent current mechanisms - legal, technical, and/or normative - help mitigate potential risks associated with unwanted bias.
We provide additional guidance on the roles and responsibilities that technologists and policymakers have to ensure that such systems empower patients equitably.
arXiv Detail & Related papers (2022-08-29T00:15:45Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Multiscale Governance [0.0]
Humandemics will propagate because of the pathways that connect the different systems.
The emerging fragility or robustness of the system will depend on how this complex network of systems is governed.
arXiv Detail & Related papers (2021-04-06T19:23:44Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)