Can AI Perceive Physical Danger and Intervene?
- URL: http://arxiv.org/abs/2509.21651v1
- Date: Thu, 25 Sep 2025 22:09:17 GMT
- Title: Can AI Perceive Physical Danger and Intervene?
- Authors: Abhishek Jindal, Dmitry Kalashnikov, Oscar Chang, Divya Garikapati, Anirudha Majumdar, Pierre Sermanet, Vikas Sindhwani
- Abstract summary: New safety challenges emerge when AI interacts with the physical world. How well do state-of-the-art foundation models understand common-sense facts about physical safety?
- Score: 16.825608691806988
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When AI interacts with the physical world -- as a robot or an assistive agent -- new safety challenges emerge beyond those of purely "digital AI". In such interactions, the potential for physical harm is direct and immediate. How well do state-of-the-art foundation models understand common-sense facts about physical safety, e.g., that a box may be too heavy to lift, or that a hot cup of coffee should not be handed to a child? In this paper, our contributions are three-fold: first, we develop a highly scalable approach to continuous physical safety benchmarking of Embodied AI systems, grounded in real-world injury narratives and operational safety constraints. To probe multi-modal safety understanding, we turn these narratives and constraints into photorealistic images and videos capturing transitions from safe to unsafe states, using advanced generative models. Second, we comprehensively analyze the ability of major foundation models to perceive risks, reason about safety, and trigger interventions; this yields multi-faceted insights into their deployment readiness for safety-critical agentic applications. Finally, we develop a post-training paradigm to teach models to explicitly reason about embodiment-specific safety constraints provided through system instructions. The resulting models generate thinking traces that make safety reasoning interpretable and transparent, achieving state-of-the-art performance in constraint satisfaction evaluations. The benchmark will be released at https://asimov-benchmark.github.io/v2
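To make the constraint-satisfaction evaluation concrete, the following is a minimal sketch of how a proposed action might be scored against embodiment-specific safety constraints supplied through system instructions. This is not the released benchmark code; the class names, example constraints, and limits are hypothetical.

```python
# Illustrative sketch (not the paper's released code): scoring a model's proposed
# action against embodiment-specific safety constraints supplied as system
# instructions. All names and numeric limits here are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SafetyConstraint:
    """One embodiment-specific rule, e.g. payload or hot-item handover limits."""
    name: str
    is_satisfied: Callable[[Dict], bool]  # checks a proposed action/state dict


def constraint_satisfaction_rate(action: Dict, constraints: List[SafetyConstraint]) -> float:
    """Fraction of constraints the proposed action respects (1.0 = fully safe)."""
    results = [c.is_satisfied(action) for c in constraints]
    return sum(results) / len(results) if results else 1.0


# Hypothetical constraints mirroring the abstract's examples (heavy box, hot coffee).
constraints = [
    SafetyConstraint("payload_limit_kg", lambda a: a.get("payload_kg", 0) <= 5.0),
    SafetyConstraint("no_hot_items_to_children",
                     lambda a: not (a.get("item_temp_c", 20) > 60 and a.get("recipient") == "child")),
]

# A model's proposed action, e.g. parsed from its thinking trace or tool call.
proposed = {"payload_kg": 12.0, "item_temp_c": 85, "recipient": "child"}

score = constraint_satisfaction_rate(proposed, constraints)
print(f"constraint satisfaction: {score:.2f}")  # 0.00 -> intervention required
if score < 1.0:
    print("INTERVENE: refuse or re-plan before acting")
```

In the paper's setting, the checks would be grounded in the benchmark's injury narratives and the model's own thinking traces rather than hand-written lambdas; the sketch only shows the shape of a constraint-satisfaction score.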
Related papers
- What Breaks Embodied AI Security: LLM Vulnerabilities, CPS Flaws, or Something Else? [28.12412876058788]
Embodied AI systems are rapidly transitioning from controlled environments to safety-critical real-world deployments. Unlike disembodied AI, failures in embodied intelligence lead to irreversible physical consequences. We argue that a significant class of failures arises from embodiment-induced system-level mismatches.
arXiv Detail & Related papers (2026-02-19T13:29:00Z)
- RoboSafe: Safeguarding Embodied Agents via Executable Safety Logic [56.38397499463889]
Embodied agents powered by vision-language models (VLMs) are increasingly capable of executing complex real-world tasks. However, they remain vulnerable to hazardous instructions that may trigger unsafe behaviors. We propose RoboSafe, a runtime safeguard for embodied agents through executable predicate-based safety logic (a minimal illustrative sketch follows this entry).
arXiv Detail & Related papers (2025-12-24T15:01:26Z)
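RoboSafe's actual executable safety logic is defined in the paper; the sketch below only illustrates the general idea of predicate-based runtime gating of an embodied agent's actions. The predicates, world-state fields, and limits are invented for illustration.

```python
# Rough sketch of runtime, predicate-based safety gating in the spirit of RoboSafe.
# The predicates and world-state fields below are hypothetical.
from typing import Callable, Dict, List, Tuple

Predicate = Callable[[Dict, Dict], bool]  # (world_state, proposed_action) -> safe?


def no_human_in_sweep(state: Dict, action: Dict) -> bool:
    """Block arm motions that pass within 0.5 m of a detected person."""
    return state.get("min_human_distance_m", float("inf")) > 0.5 or action.get("type") != "move_arm"


def gripper_not_over_limit(state: Dict, action: Dict) -> bool:
    """Block grasps whose estimated payload exceeds the rated limit."""
    return action.get("payload_kg", 0.0) <= state.get("payload_limit_kg", 5.0)


def runtime_guard(state: Dict, action: Dict,
                  predicates: List[Tuple[str, Predicate]]) -> Tuple[bool, List[str]]:
    """Evaluate every predicate before execution; return (allowed, violated names)."""
    violations = [name for name, p in predicates if not p(state, action)]
    return (len(violations) == 0, violations)


predicates = [("no_human_in_sweep", no_human_in_sweep),
              ("gripper_not_over_limit", gripper_not_over_limit)]

state = {"min_human_distance_m": 0.3, "payload_limit_kg": 5.0}
action = {"type": "move_arm", "payload_kg": 2.0}

allowed, violations = runtime_guard(state, action, predicates)
print(allowed, violations)  # False ['no_human_in_sweep'] -> halt and re-plan
```

A benefit of executable predicates is that a violation yields a concrete, auditable reason for blocking an action rather than an opaque refusal.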
- From Refusal to Recovery: A Control-Theoretic Approach to Generative AI Guardrails [12.84192844049763]
Most AI guardrails rely on output classification based on labeled datasets and human-specified criteria. We instead build predictive guardrails that monitor an AI system's outputs in real time and proactively correct risky outputs to safe ones (a toy sketch of this idea follows this entry). Our experiments in simulated driving and e-commerce settings demonstrate that control-theoretic guardrails can reliably steer agents clear of catastrophic outcomes.
arXiv Detail & Related papers (2025-10-15T16:30:57Z)
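The sketch below illustrates the control-theoretic flavor of such a guardrail on a toy one-dimensional driving example: instead of refusing, the filter predicts one step ahead and minimally corrects a risky acceleration command so the predicted gap to the lead vehicle stays safe. The dynamics, limits, and numbers are invented and far simpler than the paper's formulation.

```python
# Toy sketch of a predictive guardrail (safety filter): predict one step ahead and,
# if the proposed action would violate the safe set, apply the mildest correction
# that restores safety rather than a blanket refusal. All numbers are invented.

def predict_next_gap(gap_m: float, rel_speed_mps: float, accel_cmd: float, dt: float = 1.0) -> float:
    """Predicted gap to the lead vehicle after dt seconds, assuming the lead keeps a
    constant speed: gap' = gap + rel_speed*dt - 0.5*ego_accel*dt^2."""
    return gap_m + rel_speed_mps * dt - 0.5 * accel_cmd * dt * dt


def guardrail(gap_m: float, rel_speed_mps: float, accel_cmd: float,
              min_gap_m: float = 5.0, max_brake: float = -6.0) -> float:
    """Pass the proposed acceleration through if it is predicted safe; otherwise back
    off toward maximum braking until the predicted gap is safe (or max braking is hit)."""
    a = accel_cmd
    while a > max_brake and predict_next_gap(gap_m, rel_speed_mps, a) < min_gap_m:
        a -= 0.5  # proactive recovery instead of refusal
    return max(a, max_brake)


# The agent proposes to keep accelerating while closing fast on the car ahead.
print(guardrail(gap_m=8.0, rel_speed_mps=-4.0, accel_cmd=1.0))  # corrected to -2.0 (gentle braking)
```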
- ANNIE: Be Careful of Your Robots [48.89876809734855]
We present the first systematic study of adversarial safety attacks on embodied AI systems. We show attack success rates exceeding 50% across all safety categories. The results expose a previously underexplored but highly consequential attack surface in embodied AI systems.
arXiv Detail & Related papers (2025-09-03T15:00:28Z)
- Oyster-I: Beyond Refusal - Constructive Safety Alignment for Responsible Language Models [93.5740266114488]
Constructive Safety Alignment (CSA) protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. We release Oy1, code, and the benchmark to support responsible, user-centered AI.
arXiv Detail & Related papers (2025-09-02T03:04:27Z)
- SafeAgent: Safeguarding LLM Agents via an Automated Risk Simulator [77.86600052899156]
Large Language Model (LLM)-based agents are increasingly deployed in real-world applications. We propose AutoSafe, the first framework that systematically enhances agent safety through fully automated synthetic data generation. We show that AutoSafe boosts safety scores by 45% on average and achieves a 28.91% improvement on real-world tasks.
arXiv Detail & Related papers (2025-05-23T10:56:06Z)
- An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address the risk of harms consequential enough to significantly harm humanity. We focus on technical approaches to misuse and misalignment. We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI. We show how sensitivity analysis and loss-landscape analysis can be used to detect malicious prompts with jailbreak attempts (a toy sketch of this idea follows this entry). We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
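As a toy numerical illustration of the sensitivity-analysis idea, the sketch below flags inputs whose score changes sharply under small input perturbations. The linear "model", the embeddings, and the threshold are all invented; the paper develops this with real generative models and losses.

```python
# Toy sketch: inputs sitting in steep, brittle regions of the score landscape
# (here, near the decision boundary of a tiny logistic model) get flagged,
# while inputs in flat regions pass. Everything below is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8,))  # toy "model": a linear score over an 8-d embedding


def score(x: np.ndarray) -> float:
    """Toy safety score in (0, 1); stands in for a model's refusal/loss signal."""
    return float(1.0 / (1.0 + np.exp(-W @ x)))


def sensitivity(x: np.ndarray, eps: float = 1e-3) -> float:
    """Finite-difference estimate of the gradient norm of the score at x."""
    grads = []
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grads.append((score(x + e) - score(x - e)) / (2 * eps))
    return float(np.linalg.norm(grads))


benign = 2.0 * W / np.linalg.norm(W)  # confidently scored: flat, low-sensitivity region
boundary = np.zeros(8)                # on the decision boundary: steep, brittle region

for name, x in [("benign", benign), ("boundary-like", boundary)]:
    s = sensitivity(x)
    print(f"{name}: sensitivity={s:.3f}", "-> FLAG" if s > 0.3 else "-> pass")
```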
- On Safety Assessment of Artificial Intelligence [0.0]
We show that many models of artificial intelligence, in particular machine learning, are statistical models.
Part of the budget of dangerous random failures for the relevant safety integrity level needs to be allocated to the probabilistic faulty behavior of the AI system (a worked example follows this entry).
We propose a research challenge that may be decisive for the use of AI in safety-related systems.
arXiv Detail & Related papers (2020-02-29T14:05:28Z)
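A small worked example of the budget argument, assuming the IEC 61508 high-demand/continuous-mode target for SIL 3 (fewer than 1e-7 dangerous failures per hour); the failure figures for the AI component are invented for illustration.

```python
# Worked toy example: if a learned component misbehaves dangerously with some
# probability, that contribution must fit inside the same dangerous-failure budget
# as the rest of the safety function. The AI figures below are hypothetical.
SIL3_PFH_LIMIT = 1e-7          # allowed dangerous failures per hour for the whole function

ai_error_rate_per_hour = 4e-7  # hypothetical residual error rate of the AI component
dangerous_fraction = 0.1       # hypothetical share of errors that are safety-relevant

ai_dangerous_pfh = ai_error_rate_per_hour * dangerous_fraction
remaining_budget = SIL3_PFH_LIMIT - ai_dangerous_pfh

print(f"AI share of the dangerous-failure budget: {ai_dangerous_pfh:.1e} /h")
print(f"Budget left for sensors, actuators, software, ...: {remaining_budget:.1e} /h")
if remaining_budget < 0:
    print("SIL 3 target cannot be met without reducing the AI failure contribution")
```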