A New Paradigm of Threats in Robotics Behaviors
- URL: http://arxiv.org/abs/2103.13268v1
- Date: Wed, 24 Mar 2021 15:33:49 GMT
- Title: A New Paradigm of Threats in Robotics Behaviors
- Authors: Michele Colledanchise
- Abstract summary: We identify a new paradigm of security threats in the next generation of robots.
These threats fall beyond the known hardware or network-based ones.
We provide a taxonomy of attacks that exploit these vulnerabilities with realistic examples.
- Score: 4.873362301533825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robot applications in our daily lives are increasing at an unprecedented pace. As
robots will soon operate "out in the wild", we must identify the safety and
security vulnerabilities they will face. Robotics researchers and manufacturers
focus their attention on new, cheaper, and more reliable applications. Still,
they often disregard operability in adversarial environments, where a
trusted or untrusted user can jeopardize or even alter the robot's task.
In this paper, we identify a new paradigm of security threats in the next
generation of robots. These threats fall beyond the known hardware or
network-based ones, and we must find new solutions to address them. These new
threats include malicious use of the robot's privileged access, tampering with
the robot sensors system, and tricking the robot's deliberation into harmful
behaviors. We provide a taxonomy of attacks that exploit these vulnerabilities
with realistic examples, and we outline effective countermeasures to better
prevent, detect, and mitigate them.
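As an illustration of the countermeasure class the abstract mentions for sensor tampering, the sketch below implements a minimal plausibility monitor that flags odometry readings inconsistent with the robot's physical speed limit. It is a hedged sketch: `SensorMonitor`, the thresholds, and the odometry format are assumptions for illustration, not anything from the paper.

```python
import math
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a sensor-plausibility monitor: one possible
# countermeasure against sensor tampering, not the paper's implementation.

@dataclass
class OdometryReading:
    x: float       # position estimate [m]
    y: float       # position estimate [m]
    stamp: float   # timestamp [s]

class SensorMonitor:
    """Flags odometry jumps larger than the robot's speed limit allows."""

    def __init__(self, max_speed: float, tolerance: float = 1.2):
        self.max_speed = max_speed     # physical speed limit [m/s]
        self.tolerance = tolerance     # slack factor for sensor noise
        self.last: Optional[OdometryReading] = None

    def check(self, reading: OdometryReading) -> bool:
        """Return True if the reading is plausible, False if suspicious."""
        if self.last is None:
            self.last = reading
            return True
        dt = reading.stamp - self.last.stamp
        dist = math.hypot(reading.x - self.last.x, reading.y - self.last.y)
        plausible = dt > 0 and dist <= self.max_speed * dt * self.tolerance
        self.last = reading
        return plausible

monitor = SensorMonitor(max_speed=1.0)
print(monitor.check(OdometryReading(0.0, 0.0, 0.0)))  # True (first sample)
print(monitor.check(OdometryReading(0.1, 0.0, 0.1)))  # True: 1 m/s is feasible
print(monitor.check(OdometryReading(5.0, 0.0, 0.2)))  # False: teleport-like jump
```

A monitor of this kind only detects crude spoofing; stealthier attacks that stay within physical limits would require model-based or cross-sensor consistency checks.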
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
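For readers unfamiliar with the flow-matching objective named in this entry, the following is a minimal, generic sketch of a conditional flow-matching training step (interpolate noise toward data along a straight path and regress the constant velocity). The tiny MLP and all dimensions are illustrative assumptions; this is not the $π_0$ architecture.

```python
import torch
import torch.nn as nn

# Generic conditional flow-matching training step (illustrative only).
action_dim, ctx_dim = 7, 32
model = nn.Sequential(nn.Linear(action_dim + ctx_dim + 1, 128),
                      nn.ReLU(),
                      nn.Linear(128, action_dim))

def flow_matching_loss(actions: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
    noise = torch.randn_like(actions)            # x_0 ~ N(0, I)
    t = torch.rand(actions.shape[0], 1)          # t ~ U(0, 1)
    x_t = (1 - t) * noise + t * actions          # straight-line interpolation
    target_v = actions - noise                   # velocity of that path
    pred_v = model(torch.cat([x_t, context, t], dim=-1))
    return ((pred_v - target_v) ** 2).mean()

loss = flow_matching_loss(torch.randn(16, action_dim), torch.randn(16, ctx_dim))
loss.backward()
```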
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Jailbreaking LLM-Controlled Robots [82.04590367171932]
Large language models (LLMs) have revolutionized the field of robotics by enabling contextual reasoning and intuitive human-robot interaction.
LLMs are vulnerable to jailbreaking attacks, wherein malicious prompters elicit harmful text by bypassing LLM safety guardrails.
We introduce RoboPAIR, the first algorithm designed to jailbreak LLM-controlled robots.
arXiv Detail & Related papers (2024-10-17T15:55:36Z)
- Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, finetuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z)
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
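As background for what a learned Jacobian buys at control time, here is a generic resolved-rate control loop in which a Jacobian estimate drives the end effector toward a target. `estimate_jacobian` is a hypothetical stand-in for the paper's vision-based model, not its actual interface.

```python
import numpy as np

# Generic Jacobian-based closed-loop control step (illustrative only).

def estimate_jacobian(q: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: a vision-based model would predict this
    # mapping from joint velocities to end-effector velocities.
    rng = np.random.default_rng(0)
    return rng.standard_normal((3, q.size))

def control_step(q, x_current, x_target, gain=0.5):
    J = estimate_jacobian(q)
    error = x_target - x_current
    dq = gain * np.linalg.pinv(J) @ error   # least-squares joint update
    return q + dq

q = np.zeros(4)
q = control_step(q, x_current=np.zeros(3), x_target=np.array([0.1, 0.0, 0.2]))
print(q)
```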
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- ABNet: Attention BarrierNet for Safe and Scalable Robot Learning [58.4951884593569]
Barrier-based methods are among the dominant approaches to safe robot learning.
We propose Attention BarrierNet (ABNet), which scales incrementally to build larger foundational safe models.
We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving.
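For context on what "barrier-based" means here, the sketch below shows the textbook control-barrier-function safety filter such methods build on: project a desired control onto the set that keeps a barrier function h(x) nonnegative. This is the generic one-constraint special case with a closed-form solution, not ABNet itself.

```python
import numpy as np

# Textbook CBF safety filter (single affine constraint, closed-form QP):
#   min ||u - u_des||^2  subject to  a @ u >= b,
# with a = L_g h(x) and b = -(L_f h(x) + alpha * h(x)).
# Generic background only, not the ABNet architecture.

def cbf_filter(u_des: np.ndarray, a: np.ndarray, b: float) -> np.ndarray:
    slack = a @ u_des - b
    if slack >= 0:
        return u_des                          # desired control is already safe
    return u_des + (-slack / (a @ a)) * a     # minimal correction to the boundary

# Example: keep a 1D integrator x_dot = u at x >= 0 using h(x) = x.
x, alpha = 0.2, 1.0
a = np.array([1.0])      # L_g h = 1
b = -alpha * x           # L_f h = 0
print(cbf_filter(np.array([-0.5]), a, b))  # [-0.2]: clipped toward safety
```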
arXiv Detail & Related papers (2024-06-18T19:37:44Z)
- Cybersecurity and Embodiment Integrity for Modern Robots: A Conceptual Framework [3.29295880899738]
We show how cyberattacks on different devices can have radically different consequences for the robot's ability to complete its tasks.
We also claim that modern robots should be self-aware with regard to these aspects.
We show that achieving these propositions requires that robots possess at least three properties that conceptually link devices and tasks.
arXiv Detail & Related papers (2024-01-15T15:46:38Z)
- AI Security Threats against Pervasive Robotic Systems: A Course for Next Generation Cybersecurity Workforce [0.9137554315375919]
Robotics, automation, and related Artificial Intelligence (AI) systems have become pervasive, raising concerns about security, safety, accuracy, and trust.
The security of these systems is increasingly important for preventing cyber-attacks that could lead to privacy invasion, sabotage of critical operations, and bodily harm.
This course description includes details about seven self-contained and adaptive modules on "AI security threats against pervasive robotic systems".
arXiv Detail & Related papers (2023-02-15T21:21:20Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- REvolveR: Continuous Evolutionary Models for Robot-to-robot Policy Transfer [57.045140028275036]
We consider the problem of transferring a policy across two different robots with significantly different parameters such as kinematics and morphology.
Existing approaches that train a new policy by matching the action or state transition distribution, including imitation learning methods, fail because the optimal action and/or state distributions are mismatched across robots.
We propose a novel method named $REvolveR$ that uses continuous evolutionary models for robot policy transfer, implemented in a physics simulator.
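The continuous-evolution idea can be summarized schematically: interpolate the physical parameters between the source and target robots and fine-tune the policy at each intermediate robot. The outline below is a hypothetical sketch with placeholder helpers, not the authors' implementation.

```python
import numpy as np

# Schematic robot-to-robot transfer via interpolated morphologies.
# `finetune_policy` and the parameter vectors are illustrative placeholders.

def interpolate_robot(src_params: np.ndarray, tgt_params: np.ndarray, alpha: float):
    """Blend physical parameters (link lengths, masses, ...) of two robots."""
    return (1.0 - alpha) * src_params + alpha * tgt_params

def finetune_policy(policy, robot_params, steps: int = 1000):
    # Placeholder: in practice, run RL in a physics simulator instantiated
    # with `robot_params` and return the updated policy.
    return policy

src = np.array([0.30, 0.25, 2.0])  # e.g., link lengths [m] and a mass [kg]
tgt = np.array([0.45, 0.40, 3.5])
policy = None                      # stands in for a trained source-robot policy
for alpha in np.linspace(0.0, 1.0, num=11):
    intermediate = interpolate_robot(src, tgt, alpha)
    policy = finetune_policy(policy, intermediate)
```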
arXiv Detail & Related papers (2022-02-10T18:50:25Z)
- RoboMal: Malware Detection for Robot Network Systems [4.357338639836869]
We propose the RoboMal framework for static malware detection on binary executables, catching malware before it gets a chance to execute.
The framework is compared against widely used supervised learning models: GRU, CNN, and ANN.
Notably, the LSTM-based RoboMal model outperforms the other models with an accuracy of 85% and precision of 87% in 10-fold cross-validation.
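To make the model family concrete, here is a generic LSTM classifier over the raw byte sequence of an executable. Layer sizes, sequence length, and preprocessing are assumptions for illustration, not the RoboMal configuration.

```python
import torch
import torch.nn as nn

# Generic LSTM-based malware/benign classifier over raw executable bytes.
# Hyperparameters are illustrative assumptions, not RoboMal's settings.

class ByteLSTM(nn.Module):
    def __init__(self, embed_dim: int = 16, hidden_dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)   # one token per byte value
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)        # malware-vs-benign logit

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(byte_ids)                    # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)                  # final hidden state
        return self.head(h_n[-1]).squeeze(-1)       # (batch,)

model = ByteLSTM()
fake_batch = torch.randint(0, 256, (8, 512))        # 8 binaries, 512 bytes each
logits = model(fake_batch)
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.ones(8))
loss.backward()
```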
arXiv Detail & Related papers (2022-01-20T22:11:38Z)
- Fault-Aware Robust Control via Adversarial Reinforcement Learning [35.16413579212691]
We propose an adversarial reinforcement learning framework, which significantly increases robot robustness in joint damage cases.
We validate our algorithm on a three-fingered robot hand and a quadruped robot.
Our algorithm can be trained only in simulation and directly deployed on a real robot without any fine-tuning.
arXiv Detail & Related papers (2020-11-17T16:01:06Z)
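The adversarial scheme in the entry above reduces to a two-player loop: an adversary picks the joint fault that most degrades the current policy, and the protagonist trains under that fault. The sketch below is a schematic outline with placeholder helpers, not the paper's algorithm.

```python
import random

# Schematic two-player loop for fault-aware adversarial training.
# `evaluate` and `train_policy_under_fault` are hypothetical placeholders
# for simulator rollouts and RL updates.

JOINTS = ["hip", "knee", "ankle"]

def evaluate(policy, fault: str) -> float:
    # Placeholder: return the policy's episodic return with `fault` disabled.
    return random.random()

def train_policy_under_fault(policy, fault: str):
    # Placeholder: one RL update of the protagonist under the given fault.
    return policy

policy = None  # stands in for the protagonist policy
for iteration in range(100):
    # Adversary: pick the fault that currently hurts the policy the most.
    worst_fault = min(JOINTS, key=lambda j: evaluate(policy, j))
    # Protagonist: improve under that worst-case fault.
    policy = train_policy_under_fault(policy, worst_fault)
```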
This list is automatically generated from the titles and abstracts of the papers on this site.