Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions
- URL: http://arxiv.org/abs/2101.06704v1
- Date: Sun, 17 Jan 2021 16:23:20 GMT
- Title: Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions
- Authors: Nodens Koren, Qiuhong Ke, Yisen Wang, James Bailey, Xingjun Ma
- Abstract summary: We show that, despite their current huge success, deep learning-based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop between AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
- Score: 46.87576410532481
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life. In this paper, we show that, despite their current huge success, deep learning-based AI systems can be easily fooled by subtle adversarial noise into misinterpreting the intention of an action in interaction scenarios. Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions, and demonstrate how DNN-based interaction models can be tricked into predicting the participants' reactions in unexpected ways. From a broader perspective, the scope of our proposed attack method is not confined to problems involving skeleton data but extends to any problem involving sequential regression. Our study highlights potential risks in the interaction loop between AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
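The abstract does not spell out the attack procedure, but its core idea, perturbing an input skeleton sequence with bounded noise so that a sequence-regression model predicts an attacker-chosen reaction, can be sketched as a targeted projected-gradient attack. The following is a minimal, hypothetical PyTorch sketch: the ReactionRegressor toy model, the tensor shapes (25 joints, 60 frames), and the hyperparameters (eps, alpha, steps) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical PGD-style perturbation of a skeleton sequence, loosely
# illustrating the kind of attack the abstract describes. All names,
# shapes, and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

class ReactionRegressor(nn.Module):
    """Toy GRU that regresses the reacting participant's future joint
    positions from the acting participant's skeleton sequence."""
    def __init__(self, n_joints=25, hidden=128):
        super().__init__()
        dim = n_joints * 3  # (x, y, z) per joint
        self.gru = nn.GRU(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):            # x: (batch, time, joints*3)
        h, _ = self.gru(x)
        return self.head(h)          # predicted reaction per frame

def pgd_attack(model, x, target, eps=0.01, alpha=0.002, steps=40):
    """Find a subtle perturbation delta (||delta||_inf <= eps) that
    pushes the predicted reaction toward an attacker-chosen target."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend toward target
            delta.clamp_(-eps, eps)             # keep the noise subtle
        delta.grad.zero_()
    return (x + delta).detach()

model = ReactionRegressor()
x = torch.randn(1, 60, 75)       # 60 frames, 25 joints x 3 coords
target = torch.randn(1, 60, 75)  # attacker-chosen "reaction"
x_adv = pgd_attack(model, x, target)
```

Because the loss is a targeted MSE toward the attacker's chosen reaction, the update descends the gradient; an untargeted variant would instead ascend the loss on the true reaction. Nothing in this sketch is specific to skeleton data, which is consistent with the abstract's claim that the attack extends to sequential regression in general.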
Related papers
- Human-AI Safety: A Descendant of Generative AI and Control Systems Safety [6.100304850888953]
We argue that meaningful safety assurances for advanced AI technologies require reasoning about how the feedback loop formed by AI outputs and human behavior may drive the interaction towards different outcomes.
We propose a concrete technical roadmap towards next-generation human-centered AI safety.
arXiv Detail & Related papers (2024-05-16T03:52:00Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Agent AI: Surveying the Horizons of Multimodal Interaction [83.18367129924997]
"Agent AI" is a class of interactive systems that can perceive visual stimuli, language inputs, and other environmentally-grounded data.
We envision a future where people can easily create any virtual reality or simulated scene and interact with agents embodied within the virtual environment.
arXiv Detail & Related papers (2024-01-07T19:11:18Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Research on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- Human Uncertainty in Concept-Based AI Systems [37.82747673914624]
We study human uncertainty in the context of concept-based AI systems.
We show that training with uncertain concept labels may help mitigate weaknesses in concept-based systems.
arXiv Detail & Related papers (2023-03-22T19:17:57Z)
- A Mental-Model Centric Landscape of Human-AI Symbiosis [31.14516396625931]
We introduce a significantly generalized version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI).
We will see how this new framework allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
arXiv Detail & Related papers (2022-02-18T22:08:08Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research [8.315174426992087]
The article provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum.
It shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces.
The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system.
arXiv Detail & Related papers (2021-03-27T22:12:06Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.