Risk-aware Safe Control for Decentralized Multi-agent Systems via
Dynamic Responsibility Allocation
- URL: http://arxiv.org/abs/2305.13467v1
- Date: Mon, 22 May 2023 20:21:49 GMT
- Title: Risk-aware Safe Control for Decentralized Multi-agent Systems via
Dynamic Responsibility Allocation
- Authors: Yiwei Lyu, Wenhao Luo and John M. Dolan
- Abstract summary: We present a risk-aware decentralized control framework that provides guidance on how large a responsibility share an individual agent should take to avoid collisions with others.
We propose a novel Control Barrier Function (CBF)-inspired risk measurement to characterize the aggregate risk agents face from potential collisions under motion uncertainty.
We are able to leverage the flexibility of robots with lower risk to improve the motion flexibility for those with higher risk, thus achieving improved collective safety.
- Score: 36.52509571098292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized control schemes are increasingly favored in various domains
that involve multi-agent systems due to the need for computational efficiency
as well as general applicability to large-scale systems. However, in the
absence of an explicit global coordinator, it is hard for distributed agents to
determine how to efficiently interact with others. In this paper, we present a
risk-aware decentralized control framework that provides guidance on how much
relative responsibility share (a percentage) an individual agent should take to
avoid collisions with others while moving efficiently without direct
communications. We propose a novel Control Barrier Function (CBF)-inspired risk
measurement to characterize the aggregate risk agents face from potential
collisions under motion uncertainty. We use this measurement to allocate
responsibility shares among agents dynamically and develop risk-aware
decentralized safe controllers. In this way, we are able to leverage the
flexibility of robots with lower risk to improve the motion flexibility for
those with higher risk, thus achieving improved collective safety. We
demonstrate the validity and efficiency of our proposed approach through two
examples: ramp merging in autonomous driving and a multi-agent
position-swapping game.
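The abstract describes a three-step pipeline: compute a CBF-inspired risk from potential pairwise collisions, convert that risk into a responsibility share for each agent pair, and solve each agent's local safety-constrained control problem. The sketch below illustrates the general shape of such a decentralized CBF-QP with per-pair responsibility shares for single-integrator agents; the risk values, allocation rule, dynamics, and parameter names are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (not the authors' implementation) of a decentralized CBF-QP
# with responsibility shares, assuming single-integrator agents x_dot = u.
import numpy as np
import cvxpy as cp

def pairwise_cbf(x_i, x_j, d_safe):
    """Barrier h_ij(x) = ||x_i - x_j||^2 - d_safe^2 (h >= 0 means safe)."""
    diff = x_i - x_j
    return float(diff @ diff) - d_safe ** 2, diff

def responsibility_share(risk_i, risk_j, eps=1e-9):
    """Illustrative allocation rule (a stand-in for the paper's): the
    lower-risk agent of a pair absorbs the larger share of the pairwise
    safety constraint, with alpha_i + alpha_j = 1 across the two agents."""
    return risk_j / (risk_i + risk_j + eps)

def safe_control(x_i, neighbors, shares, u_nominal, d_safe=0.5, gamma=1.0):
    """Agent i's local CBF-QP: track u_nominal while honoring its allocated
    share alpha_ij of each pairwise constraint
        2 (x_i - x_j)^T u_i >= -alpha_ij * gamma * h_ij."""
    u = cp.Variable(2)
    constraints = []
    for x_j, alpha_ij in zip(neighbors, shares):
        h_ij, diff = pairwise_cbf(x_i, x_j, d_safe)
        constraints.append(2.0 * diff @ u >= -alpha_ij * gamma * h_ij)
    cp.Problem(cp.Minimize(cp.sum_squares(u - u_nominal)), constraints).solve()
    return u.value

# Example: one agent with two neighbors. Each neighbor would solve the mirror
# QP with the complementary share 1 - alpha_ij; the risk values here are
# hypothetical, whereas the paper recomputes shares online from its
# CBF-inspired aggregate risk measure.
x_i = np.array([0.0, 0.0])
neighbors = [np.array([1.0, 0.2]), np.array([-0.8, -0.9])]
shares = [responsibility_share(risk_i=2.0, risk_j=1.0),
          responsibility_share(risk_i=2.0, risk_j=3.0)]
print(safe_control(x_i, neighbors, shares, u_nominal=np.array([1.0, 0.0])))
```

The pairwise constraint split (each agent enforcing its fraction of the combined CBF condition) follows the common decentralized safety-barrier construction; the paper's contribution is making that fraction dynamic and risk-dependent rather than a fixed 50/50 split.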
Related papers
- RiskQ: Risk-sensitive Multi-Agent Reinforcement Learning Value Factorization [49.26510528455664]
We introduce the Risk-sensitive Individual-Global-Max (RIGM) principle as a generalization of the Individual-Global-Max (IGM) and Distributional IGM (DIGM) principles.
Extensive experiments show that RiskQ achieves promising performance.
arXiv Detail & Related papers (2023-11-03T07:18:36Z)
- DePAint: A Decentralized Safe Multi-Agent Reinforcement Learning Algorithm considering Peak and Average Constraints [1.1549572298362787]
We propose DePAint, a momentum-based decentralized policy gradient method, to solve the problem.
This is the first privacy-preserving fully decentralized multi-agent reinforcement learning algorithm that considers both peak and average constraints.
arXiv Detail & Related papers (2023-10-22T16:36:03Z)
- Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning [12.156082576280955]
Deployment in hazardous environments requires robots to understand the risks associated with their actions and movements to prevent accidents.
We propose a risk-sensitive locomotion training method that employs distributional reinforcement learning to consider safety explicitly.
We show emergent risk-sensitive locomotion behavior in simulation and on the quadrupedal robot ANYmal.
arXiv Detail & Related papers (2023-09-25T16:05:32Z)
- Learning Adaptive Safety for Multi-Agent Systems [14.076785738848924]
We show how emergent behavior can be profoundly influenced by the CBF configuration.
We present ASRL, a novel adaptive safe RL framework, to enhance safety and long-term performance.
We evaluate ASRL in a multi-robot system and a competitive multi-agent racing scenario.
arXiv Detail & Related papers (2023-09-19T14:39:39Z)
- Learned Risk Metric Maps for Kinodynamic Systems [54.49871675894546]
We present Learned Risk Metric Maps (LRMM) for real-time estimation of coherent risk metrics of high-dimensional dynamical systems.
LRMM models are simple to design and train, requiring only procedural generation of obstacle sets, state and control sampling, and supervised training of a function approximator.
arXiv Detail & Related papers (2023-02-28T17:51:43Z)
- Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning [64.11013095004786]
We propose a dual-agent safe reinforcement learning strategy consisting of a baseline and a safe agent.
Such a decoupled framework enables high flexibility, data efficiency and risk-awareness for RL-based control.
The proposed method outperforms the state-of-the-art safe RL algorithms on difficult robot locomotion and manipulation tasks.
arXiv Detail & Related papers (2022-12-14T03:11:25Z)
- Learning Safe Multi-Agent Control with Decentralized Neural Barrier Certificates [19.261536710315028]
We study the multi-agent safe control problem, where agents must avoid collisions with static obstacles and with each other while reaching their goals.
Our core idea is to learn the multi-agent control policy jointly with learning the control barrier functions as safety certificates.
We propose a novel joint-learning framework that can be implemented in a decentralized fashion, with generalization guarantees for certain function classes.
arXiv Detail & Related papers (2021-01-14T03:17:17Z)
- Lyapunov-Based Reinforcement Learning for Decentralized Multi-Agent Control [3.3788926259119645]
In decentralized multi-agent control, systems are complex with unknown or highly uncertain dynamics.
Deep reinforcement learning (DRL) is promising for learning the controller/policy from data without knowing the system dynamics.
Existing multi-agent reinforcement learning (MARL) algorithms cannot ensure the closed-loop stability of a multi-agent system.
We propose a new MARL algorithm for decentralized multi-agent control with a stability guarantee.
arXiv Detail & Related papers (2020-09-20T06:11:42Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
- F2A2: Flexible Fully-decentralized Approximate Actor-critic for Cooperative Multi-agent Reinforcement Learning [110.35516334788687]
Decentralized multi-agent reinforcement learning algorithms are sometimes impractical in complicated applications.
We propose a flexible, fully decentralized actor-critic MARL framework that can handle large-scale general cooperative multi-agent settings.
Our framework achieves scalability and stability in large-scale environments and reduces information transmission.
arXiv Detail & Related papers (2020-04-17T14:56:29Z)