Overcoming Failures of Imagination in AI Infused System Development and
Deployment
- URL: http://arxiv.org/abs/2011.13416v3
- Date: Thu, 10 Dec 2020 08:51:12 GMT
- Title: Overcoming Failures of Imagination in AI Infused System Development and
Deployment
- Authors: Margarita Boyarskaya, Alexandra Olteanu, Kate Crawford
- Abstract summary: NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure"
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
- Score: 71.9309995623067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: NeurIPS 2020 requested that research paper submissions include impact
statements on "potential nefarious uses and the consequences of failure."
However, as researchers, practitioners and system designers, a key challenge to
anticipating risks is overcoming what Clarke (1962) called 'failures of
imagination.' The growing research on bias, fairness, and transparency in
computational systems aims to illuminate and mitigate harms, and could thus
help inform reflections on possible negative impacts of particular pieces of
technical work. The prevalent notion of computational harms -- narrowly
construed as either allocational or representational harms -- does not fully
capture the open, context-dependent, and unobservable nature of harms across
the wide range of AI infused systems. The current literature focuses on a small
range of examples of harms to motivate algorithmic fixes, overlooking the wider
scope of probable harms and the way these harms might affect different
stakeholders. The system affordances may also exacerbate harms in unpredictable
ways, as they determine stakeholders' control (including that of non-users) over how
they use and interact with a system output. To effectively assist in
anticipating harmful uses, we argue that frameworks of harms must be
context-aware and consider a wider range of potential stakeholders, system
affordances, as well as viable proxies for assessing harms in the widest sense.
Related papers
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
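As an informal illustration of what such a rate could measure, the sketch below estimates an adversarial rate by sampling epsilon-bounded random perturbations of each state and counting how often the policy's chosen action flips. This is a hedged sketch, not the paper's formal-verification procedure; the policy interface, epsilon, and sampling scheme are all assumptions.

```python
# Hypothetical sketch: estimate an "adversarial rate" for a discrete-action
# policy by random sampling (the paper itself works via formal verification).
import numpy as np

def adversarial_rate(policy, states, epsilon=0.05, trials=20, seed=0):
    """Fraction of states for which some sampled perturbation within the
    epsilon ball changes the policy's action. All parameters are assumed."""
    rng = np.random.default_rng(seed)
    flipped = 0
    for s in states:
        base_action = policy(s)
        for _ in range(trials):
            perturbed = s + rng.uniform(-epsilon, epsilon, size=s.shape)
            if policy(perturbed) != base_action:
                flipped += 1
                break
    return flipped / len(states)
```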
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- The Reasoning Under Uncertainty Trap: A Structural AI Risk [0.0]
The report provides an exposition of what makes reasoning under uncertainty (RUU) so challenging for both humans and machines.
We detail how this misuse risk connects to a wider network of underlying structural risks.
arXiv Detail & Related papers (2024-01-29T17:16:57Z)
- A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers [3.4568218861862556]
This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems.
We have drawn on wide-ranging literature detailing risks to workers, and categorised risks as affecting worker value, power, and wellbeing.
Future work could apply this framework to other technological systems to promote the protection of workers and other groups.
arXiv Detail & Related papers (2023-12-08T17:05:40Z)
- Harms from Increasingly Agentic Algorithmic Systems [21.613581713046464]
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm.
Despite ongoing harms, new systems are being developed and deployed that risk perpetuating the same harms.
arXiv Detail & Related papers (2023-02-20T21:42:41Z)
- Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
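To make the notion of group-level exposure concrete, here is a minimal, hedged sketch rather than the paper's formalization: it aggregates rank-discounted exposure over producer groups in a single ranked list using a log discount. The grouping function, the discount choice, and the single-list setting are assumptions; the paper's joint consumer/producer metrics are more general.

```python
# Hypothetical sketch of producer-group exposure shares in one ranked list.
import math
from collections import defaultdict

def group_exposure_share(ranking, group_of):
    """ranking: item ids in rank order; group_of: item id -> producer group.
    Returns each group's share of total rank-discounted exposure."""
    exposure = defaultdict(float)
    for rank, item in enumerate(ranking, start=1):
        exposure[group_of[item]] += 1.0 / math.log2(rank + 1)
    total = sum(exposure.values())
    return {group: value / total for group, value in exposure.items()}

# Example: two producer groups in a five-item ranking.
print(group_exposure_share(
    ["a", "b", "c", "d", "e"],
    {"a": "G1", "b": "G1", "c": "G2", "d": "G2", "e": "G2"},
))
```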
arXiv Detail & Related papers (2022-04-29T19:13:23Z)
- Unpacking the Expressed Consequences of AI Research in Broader Impact Statements [23.3030110636071]
We present the results of a thematic analysis of a sample of statements written for the 2020 Neural Information Processing Systems conference.
The themes we identify fall into categories related to how consequences are expressed and areas of impacts expressed.
In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
arXiv Detail & Related papers (2021-05-11T02:57:39Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age, impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems [35.763408055286355]
Learning to recognize and avoid negative side effects of an agent's actions is critical to improve the safety and reliability of autonomous systems.
Mitigating negative side effects is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems.
This article provides a comprehensive overview of different forms of negative side effects and the recent research efforts to address them.
arXiv Detail & Related papers (2020-08-24T16:48:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.