DDoD: Dual Denial of Decision Attacks on Human-AI Teams
- URL: http://arxiv.org/abs/2212.03980v1
- Date: Wed, 7 Dec 2022 22:30:17 GMT
- Title: DDoD: Dual Denial of Decision Attacks on Human-AI Teams
- Authors: Benjamin Tag, Niels van Berkel, Sunny Verma, Benjamin Zi Hao Zhao,
Shlomo Berkovsky, Dali Kaafar, Vassilis Kostakos, Olga Ohrimenko
- Abstract summary: We propose Dual Denial of Decision (DDoD) attacks against collaborative Human-AI teams.
We discuss how such attacks aim to deplete both computational and human resources, and significantly impair decision-making capabilities.
- Score: 29.584936458736813
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Intelligence (AI) systems have been increasingly used to make
decision-making processes faster, more accurate, and more efficient. However,
such systems are also at constant risk of being attacked. While the majority of
attacks targeting AI-based applications aim to manipulate classifiers or
training data and alter the output of an AI model, recently proposed Sponge
Attacks against AI models aim to impede the classifier's execution by consuming
substantial resources. In this work, we propose \textit{Dual Denial of Decision
(DDoD) attacks against collaborative Human-AI teams}. We discuss how such
attacks aim to deplete \textit{both computational and human} resources, and
significantly impair decision-making capabilities. We describe DDoD on human
and computational resources and present potential risk scenarios in a series of
exemplary domains.
Related papers
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve performance of a human-AI team is often not clear without knowing what particular information and strategies each agent employs.
We propose a model based on statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks [55.15079732226397]
Embodied AI is a rapidly advancing field that bridges the gap between cyberspace and physical space.
In VEANET, embodied AI twins act as in-vehicle AI assistants to perform diverse tasks supporting autonomous driving.
arXiv Detail & Related papers (2024-10-02T02:20:42Z)
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper comprehensively covers attacks both against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems.
We argue that preservation of long-term agency of humans may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have been shown to be vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
- On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems [62.997667081978825]
We present a formal framework for assessing and analyzing two classes of malevolent action towards generic Artificial Intelligence (AI) systems.
The first class involves adversarial examples and concerns the introduction of small perturbations of the input data that cause misclassification.
The second class, introduced here for the first time and named stealth attacks, involves small perturbations to the AI system itself.
arXiv Detail & Related papers (2020-04-09T10:56:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.