Cybersecurity AI: Humanoid Robots as Attack Vectors
- URL: http://arxiv.org/abs/2509.14139v3
- Date: Tue, 23 Sep 2025 10:19:17 GMT
- Title: Cybersecurity AI: Humanoid Robots as Attack Vectors
- Authors: Víctor Mayoral-Vilches, Andreas Makris, Kevin Finisterre,
- Abstract summary: We present a systematic security assessment of the Unitree G1 humanoid. We show it operates simultaneously as a covert surveillance node and can be repurposed as an active cyber operations platform.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We present a systematic security assessment of the Unitree G1 humanoid, showing it operates simultaneously as a covert surveillance node and can be repurposed as an active cyber operations platform. Initial access can be achieved by exploiting the BLE provisioning protocol, which contains a critical command injection vulnerability allowing root access via malformed Wi-Fi credentials, exploitable using hardcoded AES keys shared across all units. Partial reverse engineering of Unitree's proprietary FMX encryption reveals a static Blowfish-ECB layer and a predictable LCG mask, enabling inspection of the system's otherwise sophisticated security architecture, the most mature we have observed in commercial robotics. Two empirical case studies expose the critical risk of this humanoid robot: (a) the robot functions as a trojan horse, continuously exfiltrating multi-modal sensor and service-state telemetry to 43.175.228.18:17883 and 43.175.229.18:17883 every 300 seconds without operator notice, creating violations of GDPR Articles 6 and 13; (b) a resident Cybersecurity AI (CAI) agent can pivot from reconnaissance to offensive preparation against any target, such as the manufacturer's cloud control plane, demonstrating escalation from passive monitoring to active counter-operations. These findings argue for adaptive CAI-powered defenses as humanoids move into critical infrastructure, contributing the empirical evidence needed to shape future security standards for physical-cyber convergence systems.
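The abstract's core cryptographic finding is that the FMX format's outer mask is generated by a predictable linear congruential generator, so it adds no real secrecy on top of the static Blowfish-ECB layer. The sketch below illustrates why, in general terms: anyone who recovers the LCG parameters and seed can regenerate the keystream and strip the mask. The constants here are the classic glibc-style LCG values and the seed is arbitrary; they are illustrative assumptions only, not the actual parameters used in Unitree's FMX format.

```python
# Illustrative sketch: a keystream mask derived from a predictable LCG is
# trivially removable once its parameters and seed are known. Constants are
# the well-known glibc-style LCG values, used here for illustration only.

def lcg_keystream(seed: int, n: int,
                  a: int = 1103515245, c: int = 12345, m: int = 2**31) -> bytes:
    """Generate n mask bytes from a linear congruential generator."""
    state = seed
    out = bytearray()
    for _ in range(n):
        state = (a * state + c) % m
        out.append((state >> 16) & 0xFF)  # take a middle byte of the state
    return bytes(out)

def xor_mask(data: bytes, seed: int) -> bytes:
    """Apply or remove the LCG mask (XOR is its own inverse)."""
    ks = lcg_keystream(seed, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

# A hypothetical inner blob (standing in for the Blowfish-ECB layer) survives
# a mask/unmask round trip, because the mask is fully reproducible.
inner_blob = b"inner Blowfish-ECB ciphertext"
masked = xor_mask(inner_blob, seed=0xC0FFEE)
assert xor_mask(masked, seed=0xC0FFEE) == inner_blob
```

Because the generator is deterministic, an analyst who observes or brute-forces the seed can peel this layer off every capture, leaving only the static Blowfish-ECB layer (with its fixed key) between them and the plaintext.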
Related papers
- CaMeLs Can Use Computers Too: System-level Security for Computer Use Agents [60.98294016925157]
AI agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss.
We introduce Single-Shot Planning for CUAs, where a trusted planner generates a complete execution graph with conditional branches before any observation of potentially malicious content.
Although this architectural isolation successfully prevents instruction injections, we show that additional measures are needed to prevent Branch Steering attacks.
arXiv Detail & Related papers (2026-01-14T23:06:35Z) - Multi-Agent-Driven Cognitive Secure Communications in Satellite-Terrestrial Networks [58.70163955407538]
Malicious eavesdroppers pose a serious threat to private information in satellite-terrestrial networks (STNs).
We propose a cognitive secure communication framework driven by multiple agents that coordinates spectrum scheduling and protection through real-time sensing.
We exploit generative adversarial networks to produce adversarial matrices, and employ learning-aided power control to set real and adversarial signal powers for the protection layer.
arXiv Detail & Related papers (2026-01-06T10:30:41Z) - Hiding in the AI Traffic: Abusing MCP for LLM-Powered Agentic Red Teaming [0.0]
We introduce a novel command & control (C2) architecture leveraging the Model Context Protocol (MCP) to coordinate adaptive reconnaissance agents covertly across networks.
We find that our architecture not only improves goal-directed behavior of the system as a whole, but also eliminates key host and network artifacts that can be used to detect and prevent command & control behavior altogether.
arXiv Detail & Related papers (2025-11-20T02:51:04Z) - OS-Sentinel: Towards Safety-Enhanced Mobile GUI Agents via Hybrid Validation in Realistic Workflows [77.95511352806261]
Computer-using agents powered by Vision-Language Models (VLMs) have demonstrated human-like capabilities in operating digital environments like mobile platforms.
We propose OS-Sentinel, a novel hybrid safety detection framework that combines a Formal Verifier for detecting explicit system-level violations with a Contextual Judge for assessing contextual risks and agent actions.
arXiv Detail & Related papers (2025-10-28T13:22:39Z) - The Cybersecurity of a Humanoid Robot [0.5958112901546286]
This report presents a comprehensive security assessment of a production humanoid robot platform.
We uncovered a complex security landscape characterized by both sophisticated defensive mechanisms and critical vulnerabilities.
This work contributes empirical evidence for developing robust security standards as humanoid robots transition from research curiosities to operational systems in critical domains.
arXiv Detail & Related papers (2025-09-17T15:37:09Z) - ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems.
We show attack success rates exceeding 50% across all safety categories.
Results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z) - SoK: Cybersecurity Assessment of Humanoid Ecosystem [25.852577434268273]
We introduce a seven-layer security model for humanoid robots, organizing 39 known attacks and 35 defenses across the humanoid ecosystem.
We demonstrate our method by evaluating three real-world robots: Pepper, G1 EDU, and Digit.
arXiv Detail & Related papers (2025-08-24T18:13:33Z) - CANDoSA: A Hardware Performance Counter-Based Intrusion Detection System for DoS Attacks on Automotive CAN bus [45.24207460381396]
This paper presents a novel Intrusion Detection System (IDS) designed for the Controller Area Network (CAN) environment.
A RISC-V-based CAN receiver is simulated using the gem5 simulator, processing CAN frame payloads with AES-128 encryption as FreeRTOS tasks.
Results indicate that this approach could significantly improve CAN security and address emerging challenges in automotive cybersecurity.
arXiv Detail & Related papers (2025-07-19T20:09:52Z) - Offensive Robot Cybersecurity [0.0]
The thesis uncovers a profound connection between robotic architecture and cybersecurity.
Approaching cybersecurity with a dual perspective of defense and attack has been pivotal.
This thesis proposes a novel architecture for cybersecurity cognitive engines.
arXiv Detail & Related papers (2025-06-18T10:49:40Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations.
It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature.
We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [68.36528819227641]
This paper systematically evaluates the robustness of Vision-Language-Action (VLA) models.
We introduce two untargeted attack objectives that leverage spatial foundations to destabilize robotic actions, and a targeted attack objective that manipulates the robotic trajectory.
We design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z) - Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z) - ABNet: Attention BarrierNet for Safe and Scalable Robot Learning [58.4951884593569]
Barrier-based methods are among the dominant approaches for safe robot learning.
We propose Attention BarrierNet (ABNet) that is scalable to build larger foundational safe models in an incremental manner.
We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving.
arXiv Detail & Related papers (2024-06-18T19:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown (including all information) and is not responsible for any consequences arising from its use.