On the Cyber-Physical Security of Commercial Indoor Delivery Robot Systems
- URL: http://arxiv.org/abs/2412.10699v1
- Date: Sat, 14 Dec 2024 06:12:10 GMT
- Title: On the Cyber-Physical Security of Commercial Indoor Delivery Robot Systems
- Authors: Fayzah Alshammari, Yunpeng Luo, Qi Alfred Chen
- Abstract summary: Indoor Delivery Robots (IDRs) play a vital role in the upcoming fourth industrial revolution, autonomously navigating and transporting items within indoor environments. In this work, we aim to conduct the first security analysis of IDR systems, considering both the cyber- and physical-layer attack surfaces and domain-specific attack goals across security, safety, and privacy.
- Score: 20.204068275090243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Indoor Delivery Robots (IDRs) play a vital role in the upcoming fourth industrial revolution, autonomously navigating and transporting items within indoor environments. In this work, we aim to conduct the first security analysis of IDR systems, considering both the cyber- and physical-layer attack surfaces and domain-specific attack goals across security, safety, and privacy. As initial results, we formulated a general IDR system architecture from 40 commercial IDR models and then performed an initial cyber-physical attack entry point identification. We also performed an experimental analysis of the robot-side software of a real commercial IDR and identified several vulnerabilities. We then discuss future steps.
Related papers
- Procedimiento de auditoría de ciberseguridad para sistemas autónomos: metodología, amenazas y mitigaciones (Cybersecurity Audit Procedure for Autonomous Systems: Methodology, Threats, and Mitigations) [0.1759008116536278]
This article presents a specific security auditing procedure for autonomous systems. It is based on a layer-structured methodology, a threat taxonomy adapted to the robotic context, and concrete mitigation measures.
arXiv Detail & Related papers (2025-11-07T12:06:21Z)
- The Cybersecurity of a Humanoid Robot [0.5958112901546286]
This report presents a comprehensive security assessment of a production humanoid robot platform. We uncovered a complex security landscape characterized by both sophisticated defensive mechanisms and critical vulnerabilities. This work contributes empirical evidence for developing robust security standards as humanoid robots transition from research curiosities to operational systems in critical domains.
arXiv Detail & Related papers (2025-09-17T15:37:09Z)
- ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z)
- SoK: Cybersecurity Assessment of Humanoid Ecosystem [25.852577434268273]
We introduce a seven-layer security model for humanoid robots, organizing 39 known attacks and 35 defenses across the humanoid ecosystem. We demonstrate our method by evaluating three real-world robots: Pepper, G1 EDU, and Digit.
arXiv Detail & Related papers (2025-08-24T18:13:33Z)
- Offensive Robot Cybersecurity [0.0]
The thesis uncovers a profound connection between robotic architecture and cybersecurity. Approaching cybersecurity with a dual perspective of defense and attack has been pivotal. This thesis proposes a novel architecture for cybersecurity cognitive engines.
arXiv Detail & Related papers (2025-06-18T10:49:40Z)
- SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications. We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation. We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z)
- Safety and Security Risk Mitigation in Satellite Missions via Attack-Fault-Defense Trees [2.252059459291148]
This work presents a case study from Ascentio Technologies, a mission-critical system company in Argentina specializing in aerospace.
The main focus will be on the Ground Segment for the satellite project currently developed by the company.
This paper showcases the application of the Attack-Fault-Defense Tree framework, which integrates attack trees, fault trees, and defense mechanisms into a unified model.
arXiv Detail & Related papers (2025-04-01T17:24:43Z)
- Implementing a Robot Intrusion Prevention System (RIPS) for ROS 2 [0.4613900711472571]
We have designed and implemented RIPS, an intrusion prevention system tailored for robotic applications based on ROS 2.
This manuscript provides a comprehensive exposition of the issue, the security aspects of ROS 2 applications, and the key points of the threat model we created for our robotic environment.
arXiv Detail & Related papers (2024-12-26T16:25:34Z)
- VMGuard: Reputation-Based Incentive Mechanism for Poisoning Attack Detection in Vehicular Metaverse [52.57251742991769]
The vehicular Metaverse guard (VMGuard) protects vehicular Metaverse systems from data poisoning attacks.
VMGuard implements a reputation-based incentive mechanism to assess the trustworthiness of participating SIoT devices.
Our system ensures that reliable SIoT devices, previously misclassified, are not barred from participating in future rounds of the market.
arXiv Detail & Related papers (2024-12-05T17:08:20Z)
- Don't Let Your Robot be Harmful: Responsible Robotic Manipulation [57.70648477564976]
Unthinking execution of human instructions in robotic manipulation can lead to severe safety risks. We present Safety-as-policy, which includes (i) a world model to automatically generate scenarios containing safety risks and conduct virtual interactions, and (ii) a mental model to infer consequences with reflections. We show that Safety-as-policy can avoid risks and efficiently complete tasks in both synthetic-dataset and real-world experiments.
arXiv Detail & Related papers (2024-11-27T12:27:50Z)
- Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [70.93622520400385]
This paper systematically quantifies the robustness of VLA-based robotic systems.
We introduce an untargeted position-aware attack objective that leverages spatial foundations to destabilize robotic actions.
We also design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z)
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones, yet their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics [4.304994557797013]
Industry 4.0 has witnessed the rise of complex robots fueled by the integration of Artificial Intelligence/Machine Learning (AI/ML) and Digital Twin (DT) technologies.
This paper surveys privacy attacks targeting robots enabled by AI and DT models.
arXiv Detail & Related papers (2024-06-27T00:59:20Z)
- Security Considerations in AI-Robotics: A Survey of Current Methods, Challenges, and Opportunities [4.466887678364242]
Motivated by the need to address the security concerns in AI-Robotics systems, this paper presents a comprehensive survey and taxonomy across three dimensions.
We begin by surveying potential attack surfaces and provide mitigating defensive strategies.
We then delve into ethical issues, such as dependency and psychological impact, as well as the legal concerns regarding accountability for these systems.
arXiv Detail & Related papers (2023-10-12T17:54:20Z)
- AI Security Threats against Pervasive Robotic Systems: A Course for Next Generation Cybersecurity Workforce [0.9137554315375919]
Robotics, automation, and related Artificial Intelligence (AI) systems have become pervasive, bringing concerns related to security, safety, accuracy, and trust.
The security of these systems is becoming increasingly important to prevent cyber-attacks that could lead to privacy invasion, critical operations sabotage, and bodily harm.
This course description includes details about seven self-contained, adaptive modules on "AI security threats against pervasive robotic systems".
arXiv Detail & Related papers (2023-02-15T21:21:20Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information it contains and is not responsible for any consequences arising from its use.