One Filter to Deploy Them All: Robust Safety for Quadrupedal Navigation in Unknown Environments
- URL: http://arxiv.org/abs/2412.09989v1
- Date: Fri, 13 Dec 2024 09:21:02 GMT
- Title: One Filter to Deploy Them All: Robust Safety for Quadrupedal Navigation in Unknown Environments
- Authors: Albert Lin, Shuang Peng, Somil Bansal
- Abstract summary: We propose an observation-conditioned reachability-based (OCR) safety-filter framework. Our key idea is to use an OCR value network (OCR-VN) that predicts the optimal control-theoretic safety value function for new failure regions. We demonstrate that the proposed framework can automatically safeguard a wide range of hierarchical quadruped controllers.
- Score: 10.103090762559708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As learning-based methods for legged robots rapidly grow in popularity, it is important that we can provide safety assurances efficiently across different controllers and environments. Existing works either rely on a priori knowledge of the environment and safety constraints to ensure system safety or provide assurances for a specific locomotion policy. To address these limitations, we propose an observation-conditioned reachability-based (OCR) safety-filter framework. Our key idea is to use an OCR value network (OCR-VN) that predicts the optimal control-theoretic safety value function for new failure regions and dynamics uncertainty during deployment time. Specifically, the OCR-VN facilitates rapid safety adaptation through two key components: a LiDAR-based input that allows the dynamic construction of safe regions in light of new obstacles, and a disturbance estimation module that accounts for dynamics uncertainty in the wild. The predicted safety value function is used to construct an adaptive safety filter that overrides the nominal quadruped controller when necessary to maintain safety. Through simulation studies and hardware experiments on a Unitree Go1 quadruped, we demonstrate that the proposed framework can automatically safeguard a wide range of hierarchical quadruped controllers, adapt to novel environments, and remain robust to unmodeled dynamics without a priori access to the controllers or environments - hence, "One Filter to Deploy Them All". The experiment videos can be found on the project website.
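The filter logic described above (predict a safety value from onboard observations, intervene only when it decays toward zero) can be summarized in a few lines. The sketch below is an illustrative reading of that logic, not the authors' code: value_fn, safe_policy, and the threshold are assumed stand-ins for the OCR-VN and the safe controller it induces.

```python
# Minimal sketch of a reachability-style safety filter in the spirit of the
# OCR framework. All names (value_fn, safe_policy, threshold) are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def safety_filter(state, lidar_scan, dist_estimate, u_nominal,
                  value_fn, safe_policy, threshold=0.0):
    """Override the nominal command when the predicted safety value
    indicates the system is about to leave the safe set."""
    # The (hypothetical) value network consumes the state, the raw LiDAR
    # observation, and an online disturbance estimate, and returns the
    # control-theoretic safety value V(x): V > 0 means "recoverably safe".
    v = value_fn(state, lidar_scan, dist_estimate)
    if v > threshold:
        return u_nominal          # nominal controller keeps full authority
    # Near the zero level set, fall back to the safety-maximizing action
    # induced by the value function (e.g., its optimal safe controller).
    return safe_policy(state, lidar_scan, dist_estimate)

# Toy usage with stand-in callables:
value_fn = lambda x, scan, d: float(np.min(scan)) - 0.5 - float(np.linalg.norm(d))
safe_policy = lambda x, scan, d: np.zeros(2)        # e.g., brake in place
u = safety_filter(np.zeros(3), np.full(360, 2.0), np.zeros(2),
                  np.array([0.5, 0.0]), value_fn, safe_policy)
```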
Related papers
- Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation [55.02966123945644]
We propose a hierarchical control framework leveraging neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms.
Our approach relies on probabilistic enumeration to identify unsafe regions of operation, which are then used to construct a safe CBF-based control layer.
These experiments demonstrate the ability of the proposed solution to correct unsafe actions while preserving efficient navigation behavior.
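For context, a CBF-based control layer of the kind this entry describes minimally perturbs the nominal input so that a barrier condition keeps holding. A hedged sketch follows: the paper obtains its CBFs by probabilistic enumeration, whereas here the barrier h and its dynamics terms are simply assumed given, and the single affine constraint is solved in closed form rather than with a QP solver.

```python
# Closed-form single-constraint CBF filter: project u_nom onto the
# half-space of inputs satisfying the barrier condition. Assumes h, Lf_h,
# Lg_h are supplied by the user; illustrative, not the paper's pipeline.
import numpy as np

def cbf_filter(u_nom, h, Lf_h, Lg_h, alpha=1.0):
    """Minimally modify u_nom so that  Lf_h + Lg_h @ u + alpha*h >= 0."""
    a = np.asarray(Lg_h, dtype=float)        # constraint row vector
    b = -(Lf_h + alpha * h)                  # constraint offset: a @ u >= b
    slack = a @ u_nom - b
    if slack >= 0.0:
        return u_nom                         # nominal input already safe
    # Closest safe input in the Euclidean norm (projection onto half-space)
    return u_nom + (-slack / (a @ a)) * a

# Example: 2D single integrator keeping distance h = ||x||^2 - r^2 from origin
x, r = np.array([1.0, 0.0]), 0.8
h, Lf_h, Lg_h = x @ x - r**2, 0.0, 2 * x     # h_dot = 2 x^T u
u_safe = cbf_filter(np.array([-1.0, 0.0]), h, Lf_h, Lg_h)
```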
arXiv Detail & Related papers (2025-04-30T13:47:25Z)
- SafeCast: Risk-Responsive Motion Forecasting for Autonomous Vehicles [12.607007386467329]
We present SafeCast, a risk-responsive motion forecasting model.
It integrates safety-aware decision-making with uncertainty-aware adaptability.
Our model achieves state-of-the-art (SOTA) accuracy while maintaining a lightweight architecture and low inference latency.
arXiv Detail & Related papers (2025-03-28T15:38:21Z)
- Soft Actor-Critic-based Control Barrier Adaptation for Robust Autonomous Navigation in Unknown Environments [4.788163807490197]
We propose a Soft Actor-Critic (SAC)-based policy for adapting Control Barrier Function (CBF) constraint parameters at runtime.
We demonstrate that our framework effectively adapts CBF constraints, enabling the robot to reach its final goal without compromising safety.
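One plausible reading of runtime CBF adaptation is a learned policy that outputs the class-K gain used by the safety filter at each step. The stub below is an assumption-laden sketch of that interface only (the paper trains the policy with SAC; here the policy is a hand-written stand-in).

```python
# Sketch of runtime CBF-parameter adaptation: a policy action in [-1, 1]
# is mapped to a valid barrier gain alpha > 0. Interface and observation
# contents are illustrative assumptions.
import numpy as np

def adapt_alpha(policy, observation, alpha_min=0.1, alpha_max=5.0):
    """Map a policy action in [-1, 1] to a CBF gain in [alpha_min, alpha_max]."""
    a = float(np.clip(policy(observation), -1.0, 1.0))
    return alpha_min + 0.5 * (a + 1.0) * (alpha_max - alpha_min)

# Conservative stand-in policy: shrink alpha (a smaller alpha allows less
# aggressive approach to the constraint boundary) when obstacles are close.
policy = lambda obs: -1.0 if obs["min_range"] < 1.0 else 0.5
alpha = adapt_alpha(policy, {"min_range": 0.7})   # -> alpha_min, cautious
```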
arXiv Detail & Related papers (2025-03-11T14:33:55Z)
- Safe Deep Policy Adaptation [7.2747306035142225]
Policy adaptation based on reinforcement learning (RL) offers versatility and generalizability but presents safety and robustness challenges.
We propose SafeDPA, a novel RL and control framework that simultaneously tackles the problems of policy adaptation and safe reinforcement learning.
We provide theoretical safety guarantees of SafeDPA and show the robustness of SafeDPA against learning errors and extra perturbations.
arXiv Detail & Related papers (2023-10-08T00:32:59Z)
- CaRT: Certified Safety and Robust Tracking in Learning-based Motion Planning for Multi-Agent Systems [7.77024796789203]
CaRT is a new hierarchical, distributed architecture to guarantee the safety and robustness of a learning-based motion planning policy.
We show that CaRT guarantees safety and exponential boundedness of the trajectory tracking error, even in the presence of deterministic, bounded disturbances.
We demonstrate the effectiveness of CaRT in several examples of nonlinear motion planning and control problems, including optimal multi-spacecraft reconfiguration.
arXiv Detail & Related papers (2023-07-13T21:51:29Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As our core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
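The safe-BO machinery referenced here typically admits a point into the safe set only when a pessimistic GP estimate clears the safety threshold. The sketch below illustrates that rule with a vanilla RBF prior standing in for the meta-learned (F-PACOH) prior; all hyperparameters are illustrative.

```python
# Safe-set rule for safe Bayesian optimization: a candidate is safe if the
# GP lower confidence bound exceeds the safety threshold. RBF prior and
# hyperparameters are stand-ins for the meta-learned prior.
import numpy as np

def gp_posterior(X, y, Xq, ls=0.5, sf=1.0, noise=1e-3):
    """Vanilla GP regression posterior mean/std with an RBF kernel."""
    k = lambda A, B: sf**2 * np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xq), k(Xq, Xq)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.clip(np.diag(Kss) - np.einsum('ij,ij->j', Ks, sol), 1e-12, None)
    return mu, np.sqrt(var)

def safe_set(X, y, Xq, threshold=0.0, beta=2.0):
    """Query points whose pessimistic value still exceeds the threshold."""
    mu, sd = gp_posterior(np.asarray(X), np.asarray(y), np.asarray(Xq))
    return Xq[mu - beta * sd >= threshold]

candidates = np.linspace(-2, 2, 41)
safe = safe_set([0.0, 0.5], [1.0, 0.8], candidates)  # expands around the data
```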
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Enforcing Hard Constraints with Soft Barriers: Safe Reinforcement Learning in Unknown Stochastic Environments [84.3830478851369]
We propose a safe reinforcement learning approach that can jointly learn the environment and optimize the control policy.
Our approach can effectively enforce hard safety constraints and significantly outperform CMDP-based baseline methods in system safety rate, as measured in simulation.
arXiv Detail & Related papers (2022-09-29T20:49:25Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
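As a rough illustration of such an event trigger, one can collect new model data whenever even the best admissible input leaves only a thin feasibility margin in the CBF condition. The margin definition, input bounds, and threshold below are assumptions, not the paper's construction.

```python
# Event-triggered data collection sketch: fire when the best achievable
# CBF margin over a box of admissible inputs drops below a threshold.
import numpy as np

def should_collect(u_max, h, Lf_h, Lg_h, alpha=1.0, margin=0.05):
    """Trigger when even the best admissible input barely satisfies the
    CBF condition  Lf_h + Lg_h @ u + alpha*h >= 0  with |u_i| <= u_max."""
    a = np.asarray(Lg_h, dtype=float)
    best = Lf_h + alpha * h + u_max * np.abs(a).sum()  # max over the input box
    return best < margin

# With a weak input bound and a shrinking barrier, collection fires:
trigger = should_collect(u_max=0.2, h=0.01, Lf_h=-0.1, Lg_h=[0.3, 0.1])
```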
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to adapt to changing environments.
We evaluate it on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
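BarrierNet's key ingredient is a safety layer that stays differentiable so the controller can be trained end to end. The stand-in below captures that property for a single affine CBF constraint, where the projection has a closed form and no differentiable QP solver is needed; interfaces are illustrative assumptions.

```python
# BarrierNet-style differentiable safety layer, reduced to a single affine
# constraint so the QP collapses to a closed-form, batched projection.
import torch

class SafetyLayer(torch.nn.Module):
    def forward(self, u_nom, a, b):
        """Project u_nom onto {u : <a, u> >= b}, batched and differentiable."""
        slack = (a * u_nom).sum(-1, keepdim=True) - b   # constraint residual
        correction = torch.relu(-slack) / (a * a).sum(-1, keepdim=True)
        return u_nom + correction * a                   # zero when already safe

layer = SafetyLayer()
u = layer(torch.tensor([[-1.0, 0.0]], requires_grad=True),
          torch.tensor([[2.0, 0.0]]), torch.tensor([[-0.36]]))
u.sum().backward()   # gradients flow through the safety correction
```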
arXiv Detail & Related papers (2021-11-22T15:38:11Z)
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
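The robustification idea this entry points at is to tighten the CBF inequality by a confidence-scaled GP standard deviation so it holds for all plausible models. A minimal sketch, assuming the GP posterior over the drift term is already available:

```python
# Robust CBF check under GP model uncertainty: replace the unknown drift
# term with a pessimistic (mean minus beta*std) estimate. The GP interface
# and beta are illustrative assumptions.
import numpy as np

def robust_cbf_condition(u, h, mu_drift, sigma_drift, Lg_h, alpha=1.0, beta=2.0):
    """Check  (mu - beta*sigma) + Lg_h @ u + alpha*h >= 0, i.e. the CBF
    inequality under a pessimistic GP estimate of the drift term Lf_h."""
    return (mu_drift - beta * sigma_drift) + np.dot(Lg_h, u) + alpha * h >= 0.0

# High GP uncertainty makes an otherwise-safe input fail the robust check:
ok = robust_cbf_condition(np.array([0.1, 0.0]), h=0.2,
                          mu_drift=0.0, sigma_drift=0.3, Lg_h=np.array([1.0, 0.0]))
```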
arXiv Detail & Related papers (2021-06-13T23:08:49Z)