SoK: Cybersecurity Assessment of Humanoid Ecosystem
- URL: http://arxiv.org/abs/2508.17481v2
- Date: Mon, 01 Sep 2025 16:04:07 GMT
- Title: SoK: Cybersecurity Assessment of Humanoid Ecosystem
- Authors: Priyanka Prakash Surve, Asaf Shabtai, Yuval Elovici
- Abstract summary: We introduce a seven-layer security model for humanoid robots, organizing 39 known attacks and 35 defenses across the humanoid ecosystem. We demonstrate our method by evaluating three real-world robots: Pepper, G1 EDU, and Digit.
- Score: 25.852577434268273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humanoids are progressing toward practical deployment across healthcare, industrial, defense, and service sectors. While typically considered cyber-physical systems (CPSs), their dependence on traditional networked software stacks (e.g., Linux operating systems), Robot Operating System (ROS) middleware, and over-the-air update channels creates a distinct security profile that exposes them to vulnerabilities conventional CPS models do not fully address. Prior studies have mainly examined specific threats, such as LiDAR spoofing or adversarial machine learning (AML). This narrow focus overlooks how an attack targeting one component can cascade harm throughout the robot's interconnected systems. We address this gap through a systematization of knowledge (SoK) that takes a comprehensive approach, consolidating fragmented research from the robotics, CPS, and network security domains. We introduce a seven-layer security model for humanoid robots, organizing 39 known attacks and 35 defenses across the humanoid ecosystem, from hardware to human-robot interaction. Building on this security model, we develop a quantitative 39x35 attack-defense matrix with risk-weighted scoring, validated through Monte Carlo analysis. We demonstrate our method by evaluating three real-world robots: Pepper, G1 EDU, and Digit. The scoring analysis revealed varying security maturity levels, with scores ranging from 39.9% to 79.5% across the platforms. This work introduces a structured, evidence-based assessment method that enables systematic security evaluation, supports cross-platform benchmarking, and guides prioritization of security investments in humanoid robotics.
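The abstract's risk-weighted scoring over an attack-defense matrix, with Monte Carlo analysis of the result, can be illustrated with a minimal sketch. The matrix shape, coverage values, risk weights, noise model, and function names below are all illustrative assumptions, not the paper's exact method:

```python
import random

def security_score(coverage, risk_weights):
    """Risk-weighted fraction of attack risk mitigated by deployed defenses.

    coverage[i] in [0, 1]: how well attack i is mitigated on this platform
    (e.g., derived from a row of the attack-defense matrix).
    risk_weights[i] > 0: severity/likelihood weight of attack i.
    """
    total = sum(risk_weights)
    return sum(c * w for c, w in zip(coverage, risk_weights)) / total

def monte_carlo_scores(coverage, risk_weights, n_trials=10_000, noise=0.1, seed=0):
    """Perturb coverage estimates to gauge how sensitive the score is
    to uncertainty in the per-attack mitigation judgments.

    Returns (median, 5th percentile, 95th percentile) of the sampled scores.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_trials):
        # Uniform noise on each coverage estimate, clipped to [0, 1].
        sampled = [min(1.0, max(0.0, c + rng.uniform(-noise, noise)))
                   for c in coverage]
        scores.append(security_score(sampled, risk_weights))
    scores.sort()
    return (scores[n_trials // 2],
            scores[int(0.05 * n_trials)],
            scores[int(0.95 * n_trials)])

# Toy platform: three attacks with hypothetical mitigation levels and weights.
median, lo, hi = monte_carlo_scores([0.8, 0.4, 0.6], [3.0, 2.0, 1.0])
```

A platform's headline percentage would correspond to `security_score` on its assessed coverage, while the Monte Carlo interval `[lo, hi]` indicates how robust that score is to assessment uncertainty.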
Related papers
- Toward Quantitative Modeling of Cybersecurity Risks Due to AI Misuse [50.87630846876635]
We develop nine detailed cyber risk models. Each model decomposes attacks into steps using the MITRE ATT&CK framework. Individual estimates are aggregated through Monte Carlo simulation.
arXiv Detail & Related papers (2025-12-09T17:54:17Z) - Cybersecurity AI: Humanoid Robots as Attack Vectors [0.448741371377488]
We present a systematic security assessment of the Unitree G1 humanoid. We show it operates simultaneously as a covert surveillance node and can be repurposed as an active cyber operations platform.
arXiv Detail & Related papers (2025-09-17T16:18:53Z) - The Cybersecurity of a Humanoid Robot [0.5958112901546286]
This report presents a comprehensive security assessment of a production humanoid robot platform. We uncovered a complex security landscape characterized by both sophisticated defensive mechanisms and critical vulnerabilities. This work contributes empirical evidence for developing robust security standards as humanoid robots transition from research curiosities to operational systems in critical domains.
arXiv Detail & Related papers (2025-09-17T15:37:09Z) - NeuroBreak: Unveil Internal Jailbreak Mechanisms in Large Language Models [68.09675063543402]
NeuroBreak is a top-down jailbreak analysis system designed to analyze neuron-level safety mechanisms and mitigate vulnerabilities. By incorporating layer-wise representation probing analysis, NeuroBreak offers a novel perspective on the model's decision-making process. We conduct quantitative evaluations and case studies to verify the effectiveness of our system.
arXiv Detail & Related papers (2025-09-04T08:12:06Z) - ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z) - Offensive Robot Cybersecurity [0.0]
The thesis uncovers a profound connection between robotic architecture and cybersecurity. Approaching cybersecurity with a dual perspective of defense and attack has been pivotal. This thesis proposes a novel architecture for cybersecurity cognitive engines.
arXiv Detail & Related papers (2025-06-18T10:49:40Z) - Implementing a Robot Intrusion Prevention System (RIPS) for ROS 2 [0.4613900711472571]
We have designed and implemented RIPS, an intrusion prevention system tailored for robotic applications based on ROS 2. This manuscript provides a comprehensive exposition of the issue, the security aspects of ROS 2 applications, and the key points of the threat model we created for our robotic environment.
arXiv Detail & Related papers (2024-12-26T16:25:34Z) - On the Cyber-Physical Security of Commercial Indoor Delivery Robot Systems [20.204068275090243]
Indoor Delivery Robots (IDRs) play a vital role in the upcoming fourth industrial revolution, autonomously navigating and transporting items within indoor environments. In this work, we conduct the first security analysis of IDR systems, considering both cyber- and physical-layer attack surfaces and domain-specific attack goals across security, safety, and privacy.
arXiv Detail & Related papers (2024-12-14T06:12:10Z) - Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [68.36528819227641]
This paper systematically evaluates the robustness of Vision-Language-Action (VLA) models. We introduce two untargeted attack objectives that leverage spatial foundations to destabilize robotic actions, and a targeted attack objective that manipulates the robotic trajectory. We design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z) - Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems [27.316115171846953]
Large Language Models (LLMs) have shown significant promise in real-world decision-making tasks for embodied AI. LLMs are fine-tuned to leverage their inherent common sense and reasoning abilities while being tailored to specific applications. This fine-tuning process introduces considerable safety and security vulnerabilities, especially in safety-critical cyber-physical systems.
arXiv Detail & Related papers (2024-05-27T17:59:43Z) - On the Vulnerability of LLM/VLM-Controlled Robotics [54.57914943017522]
We highlight vulnerabilities in robotic systems integrating large language models (LLMs) and vision-language models (VLMs) due to input modality sensitivities. Our results show that simple input perturbations reduce task execution success rates by 22.2% and 14.6% in two representative LLM/VLM-controlled robotic systems.
arXiv Detail & Related papers (2024-02-15T22:01:45Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.