Safe Multi-Agent Interaction through Robust Control Barrier Functions
with Learned Uncertainties
- URL: http://arxiv.org/abs/2004.05273v2
- Date: Tue, 22 Sep 2020 18:37:44 GMT
- Title: Safe Multi-Agent Interaction through Robust Control Barrier Functions
with Learned Uncertainties
- Authors: Richard Cheng, Mohammad Javad Khojasteh, Aaron D. Ames, and Joel W.
Burdick
- Abstract summary: Multi-Agent Control Barrier Functions (CBF) have emerged as a computationally efficient tool to guarantee safety in multi-agent environments.
This work aims to learn high-confidence bounds for these dynamic uncertainties using Matrix-Variate Gaussian Process models.
We transform the resulting min-max robust CBF into a quadratic program, which can be efficiently solved in real time.
- Score: 36.587645093055926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots operating in real world settings must navigate and maintain safety
while interacting with many heterogeneous agents and obstacles. Multi-Agent
Control Barrier Functions (CBF) have emerged as a computationally efficient
tool to guarantee safety in multi-agent environments, but they assume perfect
knowledge of both the robot dynamics and other agents' dynamics. While
knowledge of the robot's dynamics might be reasonably well known, the
heterogeneity of agents in real-world environments means there will always be
considerable uncertainty in our prediction of other agents' dynamics. This work
aims to learn high-confidence bounds for these dynamic uncertainties using
Matrix-Variate Gaussian Process models, and incorporates them into a robust
multi-agent CBF framework. We transform the resulting min-max robust CBF into a
quadratic program, which can be efficiently solved in real time. We verify via
simulation results that the nominal multi-agent CBF is often violated during
agent interactions, whereas our robust formulation maintains safety with a much
higher probability and adapts to learned uncertainties.
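The abstract's core idea, replacing the min-max robust CBF condition with a single worst-case linear constraint and solving the resulting QP, can be illustrated with a minimal sketch. This is not the paper's implementation: the paper learns uncertainty bounds with Matrix-Variate Gaussian Processes, while here a fixed disturbance radius `sigma` stands in for that learned bound, and single-integrator dynamics with one obstacle reduce the QP to a closed-form projection.

```python
import numpy as np

def robust_cbf_filter(u_nom, x, x_obs, d_safe, alpha, sigma):
    """Project a nominal control onto a robustified CBF constraint.

    Illustrative sketch: single-integrator dynamics x_dot = u + d with an
    unknown disturbance ||d|| <= sigma (a stand-in for the learned
    high-confidence uncertainty bound). With barrier
        h(x) = ||x - x_obs||^2 - d_safe^2,
    the robust CBF condition
        grad_h . u - ||grad_h|| * sigma >= -alpha * h(x)
    hedges against the worst-case disturbance, collapsing the min-max
    problem into one linear constraint on u. The QP
        min ||u - u_nom||^2  s.t.  a . u >= b
    then has a closed-form solution: project u_nom onto the halfspace.
    """
    diff = x - x_obs
    h = diff @ diff - d_safe**2              # barrier value (>0 when safe)
    a = 2.0 * diff                           # gradient of the barrier
    b = -alpha * h + np.linalg.norm(a) * sigma  # robustified lower bound

    if a @ u_nom >= b:
        return u_nom                         # nominal control already safe
    # Minimum-norm correction onto the constraint boundary.
    return u_nom + (b - a @ u_nom) / (a @ a) * a

# Example: the nominal control drives the robot straight at the obstacle,
# so the filter slows it down just enough to satisfy the robust constraint.
x = np.array([0.0, 0.0])
x_obs = np.array([2.0, 0.0])
u_safe = robust_cbf_filter(np.array([1.0, 0.0]), x, x_obs,
                           d_safe=1.0, alpha=1.0, sigma=0.2)
```

In the multi-agent setting of the paper, one such constraint appears per neighboring agent, and the problem is solved as a genuine QP rather than a single projection; the structure of each constraint is the same.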
Related papers
- Learning responsibility allocations for multi-agent interactions: A differentiable optimization approach with control barrier functions [12.074590482085831]
We seek to codify factors governing safe multi-agent interactions via the lens of responsibility.
We propose a data-driven modeling approach based on control barrier functions and differentiable optimization.
arXiv Detail & Related papers (2024-10-09T20:20:41Z)
- HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions [76.42274173122328]
We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions.
We run 1840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education)
Our experiments show that state-of-the-art LLMs, both proprietary and open-sourced, exhibit safety risks in over 50% of cases.
arXiv Detail & Related papers (2024-09-24T19:47:21Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- DCIR: Dynamic Consistency Intrinsic Reward for Multi-Agent Reinforcement Learning [84.22561239481901]
We propose a new approach that enables agents to learn whether their behaviors should be consistent with that of other agents.
We evaluate DCIR in multiple environments including Multi-agent Particle, Google Research Football and StarCraft II Micromanagement.
arXiv Detail & Related papers (2023-12-10T06:03:57Z)
- Learning Adaptive Safety for Multi-Agent Systems [14.076785738848924]
We show how emergent behavior can be profoundly influenced by the CBF configuration.
We present ASRL, a novel adaptive safe RL framework, to enhance safety and long-term performance.
We evaluate ASRL in a multi-robot system and a competitive multi-agent racing scenario.
arXiv Detail & Related papers (2023-09-19T14:39:39Z)
- Risk-aware Safe Control for Decentralized Multi-agent Systems via Dynamic Responsibility Allocation [36.52509571098292]
We present a risk-aware decentralized control framework that provides guidance on what share of responsibility an individual agent should take to avoid collisions with others.
We propose a novel Control Barrier Function (CBF)-inspired risk measurement to characterize the aggregate risk agents face from potential collisions under motion uncertainty.
We are able to leverage the flexibility of robots with lower risk to improve the motion flexibility for those with higher risk, thus achieving improved collective safety.
arXiv Detail & Related papers (2023-05-22T20:21:49Z)
- Scalable Task-Driven Robotic Swarm Control via Collision Avoidance and Learning Mean-Field Control [23.494528616672024]
We use state-of-the-art mean-field control techniques to convert many-agent swarm control into classical single-agent control of distributions.
Here, we combine collision avoidance and learning of mean-field control into a unified framework for tractably designing intelligent robotic swarm behavior.
arXiv Detail & Related papers (2022-09-15T16:15:04Z)
- ROMAX: Certifiably Robust Deep Multiagent Reinforcement Learning via Convex Relaxation [32.091346776897744]
Cyber-physical attacks can challenge the robustness of multiagent reinforcement learning.
We propose a minimax MARL approach to infer the worst-case policy update of other agents.
arXiv Detail & Related papers (2021-09-14T16:18:35Z)
- ERMAS: Becoming Robust to Reward Function Sim-to-Real Gaps in Multi-Agent Simulations [110.72725220033983]
Epsilon-Robust Multi-Agent Simulation (ERMAS) is a framework for learning AI policies that are robust to such multiagent sim-to-real gaps.
ERMAS learns tax policies that are robust to changes in agent risk aversion, improving social welfare by up to 15% in complex spatiotemporal simulations.
arXiv Detail & Related papers (2021-06-10T04:32:20Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.