Neural Configuration-Space Barriers for Manipulation Planning and Control
- URL: http://arxiv.org/abs/2503.04929v1
- Date: Thu, 06 Mar 2025 20:00:56 GMT
- Title: Neural Configuration-Space Barriers for Manipulation Planning and Control
- Authors: Kehan Long, Ki Myung Brian Lee, Nikola Raicevic, Niyas Attasseri, Melvin Leok, Nikolay Atanasov
- Abstract summary: Planning and control for high-dimensional robot manipulators in cluttered, dynamic environments require both computational efficiency and robust safety guarantees. We propose a unified framework for motion planning and control that formulates safety constraints as CDF barriers. A CDF barrier approximates the local free configuration space, substantially reducing the number of collision-checking operations during motion planning.
- Score: 14.758869638207251
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Planning and control for high-dimensional robot manipulators in cluttered, dynamic environments require both computational efficiency and robust safety guarantees. Inspired by recent advances in learning configuration-space distance functions (CDFs) as robot body representations, we propose a unified framework for motion planning and control that formulates safety constraints as CDF barriers. A CDF barrier approximates the local free configuration space, substantially reducing the number of collision-checking operations during motion planning. However, learning a CDF barrier with a neural network and relying on online sensor observations introduce uncertainties that must be considered during control synthesis. To address this, we develop a distributionally robust CDF barrier formulation for control that explicitly accounts for modeling errors and sensor noise without assuming a known underlying distribution. Simulations and hardware experiments on a 6-DoF xArm manipulator show that our neural CDF barrier formulation enables efficient planning and robust real-time safe control in cluttered and dynamic environments, relying only on onboard point-cloud observations.
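To make the role of the CDF barrier concrete, the sketch below shows how a learned configuration-space distance function could act as a control barrier function in a QP-style safety filter for joint-space velocity control. It is a minimal illustration, not the authors' implementation: the toy `cdf` stand-in, the single-integrator dynamics, the gain `alpha`, and the fixed margin `eps` (a crude proxy for the paper's distributionally robust constraint tightening) are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's implementation): a learned
# configuration-space distance function (CDF) used as a control barrier in a
# QP-style safety filter for single-integrator joint-space dynamics q_dot = u.
# The neural CDF is replaced by a toy analytic stand-in; `eps` is a fixed
# margin loosely mimicking robustness to model error and sensor noise.
import numpy as np

def cdf(q: np.ndarray) -> float:
    """Toy stand-in for a neural CDF: configuration-space distance from q
    to an 'unsafe' ball centered at q_obs with radius r."""
    q_obs, r = np.array([0.5, -0.3, 0.2, 0.0, 0.1, 0.0]), 0.4
    return float(np.linalg.norm(q - q_obs) - r)

def cdf_grad(q: np.ndarray, step: float = 1e-4) -> np.ndarray:
    """Central-difference gradient; a trained network would provide this via autodiff."""
    g = np.zeros_like(q)
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = step
        g[i] = (cdf(q + dq) - cdf(q - dq)) / (2.0 * step)
    return g

def safe_control(q: np.ndarray, u_nom: np.ndarray,
                 alpha: float = 2.0, eps: float = 0.05) -> np.ndarray:
    """CBF-QP with a single affine constraint, solved in closed form:
        min ||u - u_nom||^2   s.t.   grad_h(q) @ u >= -alpha * (h(q) - eps)."""
    h = cdf(q) - eps                 # tightened barrier value
    g = cdf_grad(q)
    slack = g @ u_nom + alpha * h
    if slack >= 0.0:                 # nominal control already satisfies the constraint
        return u_nom
    return u_nom - slack * g / max(g @ g, 1e-9)   # project onto the constraint boundary

if __name__ == "__main__":
    q = np.zeros(6)                                     # 6-DoF configuration
    u_nom = np.array([0.3, -0.2, 0.1, 0.0, 0.0, 0.0])   # nominal joint velocity command
    print("filtered joint velocity:", safe_control(q, u_nom))
```

With a single affine constraint the QP admits the closed-form projection used above; the paper's controller additionally handles manipulator dynamics and onboard point-cloud uncertainty, which this sketch does not attempt to model.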
Related papers
- Designing Control Barrier Function via Probabilistic Enumeration for Safe Reinforcement Learning Navigation [55.02966123945644]
We propose a hierarchical control framework leveraging neural network verification techniques to design control barrier functions (CBFs) and policy correction mechanisms.
Our approach relies on probabilistic enumeration to identify unsafe regions of operation, which are then used to construct a safe CBF-based control layer.
Experiments demonstrate the ability of the proposed solution to correct unsafe actions while preserving efficient navigation behavior.
arXiv Detail & Related papers (2025-04-30T13:47:25Z) - Soft Actor-Critic-based Control Barrier Adaptation for Robust Autonomous Navigation in Unknown Environments [4.788163807490197]
We propose a Soft Actor-Critic (SAC)-based policy for adapting Control Barrier Function (CBF) constraint parameters at runtime.
We demonstrate that our framework effectively adapts CBF constraints, enabling the robot to reach its final goal without compromising safety.
arXiv Detail & Related papers (2025-03-11T14:33:55Z) - Diffusion Predictive Control with Constraints [51.91057765703533]
Diffusion predictive control with constraints (DPCC) is an algorithm for diffusion-based control with explicit state and action constraints that can deviate from those in the training data. We show through simulations of a robot manipulator that DPCC outperforms existing methods in satisfying novel test-time constraints while maintaining performance on the learned control task.
arXiv Detail & Related papers (2024-12-12T15:10:22Z) - Learning for Layered Safety-Critical Control with Predictive Control Barrier Functions [26.692688699287213]
Safety filters leveraging control barrier functions (CBFs) are highly effective for enforcing safe behavior on complex systems.
However, gaps between the reduced-order model (RoM) and the full-order model (FoM) can result in safety violations.
This paper introduces predictive CBFs to address this gap by leveraging rollouts of the FoM to define a predictive robustness term.
arXiv Detail & Related papers (2024-12-05T23:05:25Z) - Custom Non-Linear Model Predictive Control for Obstacle Avoidance in Indoor and Outdoor Environments [0.0]
This paper introduces a Non-linear Model Predictive Control (NMPC) framework for the DJI Matrice 100.
The framework supports various trajectory types and employs a penalty-based cost function for control accuracy in tight maneuvers.
arXiv Detail & Related papers (2024-10-03T17:50:19Z) - Fault Tolerant Neural Control Barrier Functions for Robotic Systems under Sensor Faults and Attacks [6.314000948709254]
We study safety-critical control synthesis for robotic systems under sensor faults and attacks.
Our main contribution is the development and synthesis of a new class of CBFs that we term fault tolerant neural control barrier functions (FT-NCBFs).
arXiv Detail & Related papers (2024-02-28T19:44:19Z) - Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, many contemporary control strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to adapt to changing environments.
We evaluate it on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
arXiv Detail & Related papers (2021-11-22T15:38:11Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
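As a rough illustration of the last entry above (learning control barrier functions from expert demonstrations), the sketch below fits a linear-in-features barrier candidate by penalizing three hinge losses: positivity on demonstrated safe states, negativity on sampled unsafe states, and a discrete-time decrease condition along demonstrated transitions. The random-Fourier-feature model, the toy data, and all hyperparameters are assumptions for illustration; the cited paper's exact formulation and its Lipschitz-based guarantees are not reproduced here.

```python
# Minimal sketch (assumptions, not the cited paper's algorithm): fit a barrier
# candidate h_theta(x) = phi(x) @ theta so that, approximately,
#   h_theta >= margin on safe demo states,
#   h_theta <= -margin on sampled unsafe states,
#   h_theta(x') >= (1 - alpha) * h_theta(x) along demo transitions (x, x').
import numpy as np

rng = np.random.default_rng(0)
dim, n_feat, alpha, margin = 2, 200, 0.5, 0.1

W = rng.normal(scale=2.0, size=(n_feat, dim))       # random Fourier frequencies
b = rng.uniform(0.0, 2.0 * np.pi, n_feat)

def features(x: np.ndarray) -> np.ndarray:
    """Random Fourier features approximating an RBF kernel."""
    return np.sqrt(2.0 / n_feat) * np.cos(x @ W.T + b)

def h(theta: np.ndarray, x: np.ndarray) -> np.ndarray:
    return features(x) @ theta

# Toy data: safe demo states cluster near the origin, unsafe samples lie on an
# outer ring, and demo transitions contract toward the origin.
safe = rng.normal(scale=0.4, size=(300, dim))
next_safe = 0.95 * safe
unsafe = rng.normal(size=(200, dim))
unsafe = unsafe / np.linalg.norm(unsafe, axis=1, keepdims=True) * rng.uniform(1.5, 2.0, size=(200, 1))

def loss_grad(theta: np.ndarray) -> np.ndarray:
    """Gradient of the sum of squared hinge penalties plus a small ridge term."""
    F_s, F_u, F_n = features(safe), features(unsafe), features(next_safe)
    h_s, h_u, h_n = F_s @ theta, F_u @ theta, F_n @ theta
    viol_s = np.maximum(0.0, margin - h_s)                # safe states not positive enough
    viol_u = np.maximum(0.0, h_u + margin)                # unsafe states not negative enough
    viol_d = np.maximum(0.0, (1.0 - alpha) * h_s - h_n)   # decrease condition violated
    grad = (-F_s.T @ viol_s / len(safe)
            + F_u.T @ viol_u / len(unsafe)
            + ((1.0 - alpha) * F_s - F_n).T @ viol_d / len(safe))
    return grad + 1e-3 * theta

theta = np.zeros(n_feat)
for _ in range(2000):                                     # plain gradient descent
    theta -= 0.3 * loss_grad(theta)

print("min h over safe demos :", float(h(theta, safe).min()))
print("max h over unsafe set :", float(h(theta, unsafe).max()))
```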