Contextual Safety Reasoning and Grounding for Open-World Robots
- URL: http://arxiv.org/abs/2602.19983v2
- Date: Tue, 24 Feb 2026 20:23:23 GMT
- Title: Contextual Safety Reasoning and Grounding for Open-World Robots
- Authors: Zachary Ravichandran, David Snyder, Alexander Robey, Hamed Hassani, Vijay Kumar, George J. Pappas
- Abstract summary: CORE is a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment. We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty. We demonstrate through simulation and real-world experiments that CORE enforces contextually appropriate behavior in unseen environments.
- Score: 79.98924225712668
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robots are increasingly operating in open-world environments where safe behavior depends on context: the same hallway may require different navigation strategies when crowded versus empty, or during an emergency versus normal operations. Traditional safety approaches enforce fixed constraints in user-specified contexts, limiting their ability to handle the open-ended contextual variability of real-world deployment. We address this gap via CORE, a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment (e.g., maps or safety specifications). CORE uses a vision-language model (VLM) to continuously reason about context-dependent safety rules directly from visual observations, grounds these rules in the physical environment, and enforces the resulting spatially-defined safe sets via control barrier functions. We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty, and we demonstrate through simulation and real-world experiments that CORE enforces contextually appropriate behavior in unseen environments, significantly outperforming prior semantic safety methods that lack online contextual reasoning. Ablation studies validate our theoretical guarantees and underscore the importance of both VLM-based reasoning and spatial grounding for enforcing contextual safety in novel settings. We provide additional resources at https://zacravichandran.github.io/CORE.
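The enforcement step admits a compact illustration. Below is a minimal sketch of a CBF-based safety filter for a single spatially-grounded keep-out zone, assuming single-integrator dynamics and a circular unsafe region; the VLM reasoning and grounding stages are out of scope here, and all names are illustrative assumptions rather than CORE's actual interfaces. The `margin` argument mimics inflating the unsafe set to hedge against perceptual uncertainty.

```python
import numpy as np

def cbf_safety_filter(x, u_nom, center, radius, alpha=1.0, margin=0.0):
    """Project a nominal velocity command onto the CBF constraint for a
    circular keep-out zone under single-integrator dynamics (x_dot = u).

    Safe set: h(x) = ||x - center||^2 - (radius + margin)^2 >= 0.
    CBF condition: grad_h(x) . u + alpha * h(x) >= 0.
    """
    d = x - center
    h = d @ d - (radius + margin) ** 2   # barrier value
    grad_h = 2.0 * d                     # gradient of h at x
    residual = grad_h @ u_nom + alpha * h
    if residual >= 0.0:
        return u_nom                     # nominal command already safe
    # Closed-form solution of the single-constraint CBF-QP:
    # minimum-norm correction onto the halfspace grad_h . u >= -alpha*h.
    return u_nom - (residual / (grad_h @ grad_h)) * grad_h

# Robot at (1, 0) commanded straight into a keep-out zone at the origin.
x = np.array([1.0, 0.0])
u_nom = np.array([-1.0, 0.0])
print(cbf_safety_filter(x, u_nom, center=np.zeros(2), radius=0.8))
# -> [-0.18  0.] : braked just enough to satisfy the barrier condition
```

In a filter like this, the zone's `center` and `radius` would come from the grounding stage, and the `margin` from a perception-uncertainty bound, which is roughly how a probabilistic guarantee can be folded into a deterministic constraint.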
Related papers
- Vulnerability Analysis of Safe Reinforcement Learning via Inverse Constrained Reinforcement Learning [1.4707788677208018]
We propose an adversarial attack framework to reveal vulnerabilities of Safe RL policies. Our framework learns a constraint model and a surrogate (learner) policy, enabling gradient-based attack optimization; a sketch of one such step follows below.
arXiv Detail & Related papers (2026-02-18T15:43:36Z)
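A minimal sketch of the gradient-based attack step this entry describes, assuming a differentiable surrogate (learner) policy and a learned constraint model; both are hypothetical interfaces, and the FGSM-style ascent shown is a generic choice, not necessarily the paper's optimizer:

```python
import torch

def attack_step(obs, surrogate_policy, constraint_model, eps=0.01):
    """One FGSM-style observation-space attack step: perturb the input
    so the surrogate policy's action maximizes the learned
    constraint-violation score."""
    obs = obs.detach().requires_grad_(True)
    action = surrogate_policy(obs)                 # differentiable policy
    violation = constraint_model(obs, action).sum()
    grad, = torch.autograd.grad(violation, obs)    # ascend violation score
    return (obs + eps * grad.sign()).detach()      # bounded perturbation
```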
- PoSafeNet: Safe Learning with Poset-Structured Neural Nets [49.854863600271614]
Existing approaches often enforce multiple safety constraints uniformly or via fixed priority orders, leading to infeasibility and brittle behavior. We formalize this setting as poset-structured safety, modeling safety constraints as a partially ordered set and treating safety composition as a structural property of the policy class. Building on this formulation, we propose PoSafeNet, a differentiable neural safety layer that enforces safety via sequential closed-form projection; see the sketch below.
arXiv Detail & Related papers (2026-01-29T22:03:32Z)
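The sequential closed-form projection at the heart of such a layer has a simple geometric reading: project the action onto prioritized halfspace constraints one at a time, so later (higher-priority) projections may override earlier ones. The numpy sketch below captures only that intuition; PoSafeNet itself is a learned, differentiable layer:

```python
import numpy as np

def poset_projection(u, constraints):
    """Sequentially project action u onto halfspaces a . u >= b, visited
    from lowest to highest priority: the final projection always holds,
    possibly at the expense of earlier, lower-priority constraints."""
    for a, b in constraints:
        violation = b - a @ u
        if violation > 0.0:
            u = u + (violation / (a @ a)) * a  # minimum-norm correction
    return u

# Two conflicting 1-D constraints; the later, higher-priority one wins.
cons = [(np.array([1.0]), 1.0),   # u >= 1    (low priority)
        (np.array([-1.0]), 0.5)]  # u <= -0.5 (high priority)
print(poset_projection(np.array([0.0]), cons))  # -> [-0.5]
```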
- Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation [55.02966123945644]
We propose a hierarchical control framework leveraging neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms. Our approach relies on probabilistic enumeration to identify unsafe regions of operation, which are then used to construct a safe CBF-based control layer; see the sketch below. Experiments demonstrate the ability of the proposed solution to correct unsafe actions while preserving efficient navigation behavior.
arXiv Detail & Related papers (2025-04-30T13:47:25Z)
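A Monte Carlo stand-in for the enumeration step: sample start states, roll the policy forward, and flag starts whose rollouts reach an unsafe state; the flagged region could then seed CBF construction. Every callable below is an assumed interface for illustration, not the paper's API:

```python
import numpy as np

def enumerate_unsafe_starts(policy, step, is_unsafe, sample_state,
                            n_samples=10_000, horizon=20):
    """Flag sampled start states whose closed-loop rollouts under
    `policy` and dynamics `step` hit an unsafe state within `horizon`."""
    unsafe_starts = []
    for _ in range(n_samples):
        s0 = sample_state()
        s = s0
        for _ in range(horizon):
            s = step(s, policy(s))
            if is_unsafe(s):
                unsafe_starts.append(s0)
                break
    return np.array(unsafe_starts)
```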
- Learning to explore when mistakes are not allowed [1.179778723980276]
We propose a method that enables agents to learn goal-conditioned behaviors that explore without the risk of making harmful mistakes. Exploration without risks can seem paradoxical, but environment dynamics are often uniform in space. We evaluate our method in simulated environments and demonstrate that it not only provides substantial coverage of the goal space but also reduces the occurrence of mistakes to a minimum.
arXiv Detail & Related papers (2025-02-19T15:11:51Z)
- Generalizing Safety Beyond Collision-Avoidance via Latent-Space Reachability Analysis [6.267574471145217]
Hamilton-Jacobi (HJ) reachability is a rigorous framework that enables robots to simultaneously detect unsafe states and generate safe actions. We propose Latent Safety Filters, a latent-space generalization of HJ reachability that operates directly on raw observation data; a toy sketch of the underlying recursion follows below.
arXiv Detail & Related papers (2025-02-02T22:00:20Z)
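The underlying recursion can be caricatured with a tabular, discounted safety Bellman backup over a finite set of latent states. This is the generic HJ-style recursion used in safety-critical RL, not the paper's exact training objective; `f` (latent dynamics) and `h` (latent safety margin) are assumed learned components:

```python
def safety_bellman_backup(V, f, h, states, actions, gamma=0.99):
    """One backup of the discounted safety Bellman equation:
        V(z) = (1 - gamma) * h(z) + gamma * min(h(z), max_a V(f(z, a))).
    h(z) > 0 marks safe latent states; f is assumed to map back into
    the finite, hashable state set so V stays tabular."""
    return {
        z: (1 - gamma) * h(z)
           + gamma * min(h(z), max(V[f(z, a)] for a in actions))
        for z in states
    }
```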
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Approximate Shielding of Atari Agents for Safe Exploration [83.55437924143615]
We propose a principled algorithm for safe exploration based on the concept of shielding.
We present preliminary results showing that our approximate shielding algorithm effectively reduces the rate of safety violations; the basic shielding pattern is sketched below.
arXiv Detail & Related papers (2023-04-21T16:19:54Z)
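The shielding pattern itself is easy to sketch: veto the task policy's action whenever a learned safety critic predicts too high a violation risk, and fall back to a backup policy. The critic, backup policy, and threshold are assumptions illustrating the general pattern, not the paper's exact construction:

```python
def shielded_action(state, policy, safety_critic, backup_policy,
                    threshold=0.1):
    """Return the task policy's action unless the safety critic's
    estimated violation probability exceeds the threshold, in which
    case defer to the backup policy."""
    a = policy(state)
    if safety_critic(state, a) > threshold:  # predicted violation risk
        return backup_policy(state)
    return a
```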
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy; see the sketch below.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
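A sketch of such an event trigger, assuming access to the CBF constraint residual under the nominal model and a state-dependent model-uncertainty bound (both hypothetical callables): new data is requested whenever the worst-case constraint margin shrinks below a threshold.

```python
def should_collect_data(x, u, cbf_residual, uncertainty_bound,
                        trigger_margin=0.05):
    """Trigger data collection when the worst-case CBF condition nears
    infeasibility. `cbf_residual(x, u)` would return Lf_h + Lg_h u +
    alpha * h under the nominal model; `uncertainty_bound(x)` bounds
    the model error's effect on that residual."""
    worst_case = cbf_residual(x, u) - uncertainty_bound(x)
    return worst_case < trigger_margin  # True => collect more data
```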
- Sim-to-Lab-to-Real: Safe Reinforcement Learning with Shielding and Generalization Guarantees [7.6347172725540995]
Safety is a critical component of autonomous systems and remains a challenge for deploying learning-based policies in the real world.
We propose Sim-to-Lab-to-Real to bridge the reality gap with a probabilistically guaranteed safety-aware policy distribution.
arXiv Detail & Related papers (2022-01-20T18:41:01Z)
- Context-Aware Safe Reinforcement Learning for Non-Stationary Environments [24.75527261989899]
Safety is a critical concern when deploying reinforcement learning agents for realistic tasks.
We propose the context-aware safe reinforcement learning (CASRL) method to realize safe adaptation in non-stationary environments.
Results show that the proposed algorithm significantly outperforms existing baselines in terms of safety and robustness.
arXiv Detail & Related papers (2021-01-02T23:52:22Z)