Decentralized Event-Triggered Online Learning for Safe Consensus of
Multi-Agent Systems with Gaussian Process Regression
- URL: http://arxiv.org/abs/2402.03174v1
- Date: Mon, 5 Feb 2024 16:41:17 GMT
- Title: Decentralized Event-Triggered Online Learning for Safe Consensus of
Multi-Agent Systems with Gaussian Process Regression
- Authors: Xiaobing Dai, Zewen Yang, Mengtian Xu, Fangzhou Liu, Georges Hattab
and Sandra Hirche
- Abstract summary: This paper presents a novel learning-based distributed control law, augmented by auxiliary dynamics.
To continuously enhance predictive performance, a data-efficient online learning strategy with a decentralized event-triggered mechanism is proposed.
To demonstrate the efficacy of the proposed learning-based controller, a comparative analysis is conducted, contrasting it with both conventional distributed control laws and offline learning methodologies.
- Score: 3.405252606286664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consensus control in multi-agent systems has received significant attention
and practical implementation across various domains. However, managing
consensus control under unknown dynamics remains a significant challenge for
control design due to system uncertainties and environmental disturbances. This
paper presents a novel learning-based distributed control law, augmented by
auxiliary dynamics. Gaussian processes are harnessed to compensate for the
unknown components of the multi-agent system. To continuously enhance the
predictive performance of the Gaussian process model, a data-efficient online
learning strategy with a decentralized event-triggered mechanism is proposed.
Furthermore, the control performance of the proposed approach is ensured via
the Lyapunov theory, based on a probabilistic guarantee for prediction error
bounds. To demonstrate the efficacy of the proposed learning-based controller,
a comparative analysis is conducted, contrasting it with both conventional
distributed control laws and offline learning methodologies.
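The pairing of GP-based compensation with a decentralized, data-efficient trigger described in the abstract can be illustrated with a minimal sketch: each agent keeps a local GP and only adds a new data point when the model's predictive variance at the current state exceeds a threshold. The class names, kernel choice, and threshold below are illustrative assumptions, not details from the paper:

```python
# Minimal sketch of event-triggered GP updates (illustrative, not the
# paper's algorithm): data is added only when predictive uncertainty is high.
import numpy as np

class LocalGP:
    """Naive 1-D GP regressor with an RBF kernel (O(n^3) per prediction)."""
    def __init__(self, lengthscale=1.0, signal_var=1.0, noise_var=1e-2):
        self.ls, self.sv, self.nv = lengthscale, signal_var, noise_var
        self.X = np.empty((0,))
        self.y = np.empty((0,))

    def _kern(self, A, B):
        return self.sv * np.exp(-0.5 * (A[:, None] - B[None, :])**2 / self.ls**2)

    def predict(self, x):
        """Posterior mean and variance at a scalar input x."""
        if self.X.size == 0:
            return 0.0, self.sv          # prior mean and variance
        K = self._kern(self.X, self.X) + self.nv * np.eye(self.X.size)
        ks = self._kern(self.X, np.array([x]))[:, 0]
        mean = float(ks @ np.linalg.solve(K, self.y))
        var = float(self.sv - ks @ np.linalg.solve(K, ks))
        return mean, max(var, 0.0)

    def add(self, x, y):
        self.X = np.append(self.X, x)
        self.y = np.append(self.y, y)

def event_triggered_update(gp, x, y_observed, var_threshold=0.5):
    """Add (x, y) to the local dataset only when the prediction is uncertain."""
    _, var = gp.predict(x)
    if var > var_threshold:
        gp.add(x, y_observed)
        return True    # event fired: model updated
    return False       # prediction deemed accurate enough; skip the update

# Unknown scalar dynamics component to be learned (illustrative stand-in).
f = np.sin

gp = LocalGP()
updates = 0
for x in np.linspace(0.0, 3.0, 30):
    if event_triggered_update(gp, x, f(x)):
        updates += 1
# Far fewer than 30 updates fire, reflecting the data-efficiency argument.
```

The trigger here is a simple variance test at the local agent, which is what makes the scheme decentralized in this sketch: no communication is needed to decide whether to store a sample.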
Related papers
- Cooperative Learning with Gaussian Processes for Euler-Lagrange Systems
Tracking Control under Switching Topologies [9.838373797093245]
This work presents an innovative learning-based approach to tackle the tracking control problem of Euler-Lagrange multi-agent systems.
A standout feature is its exceptional efficiency in deriving the aggregation weights.
Simulation experiments validate the protocol's efficacy in effectively managing complex scenarios.
arXiv Detail & Related papers (2024-02-05T14:33:52Z) - Distributionally Robust Model-based Reinforcement Learning with Large
State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
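The maximum-variance-reduction idea mentioned in this blurb can be sketched in a few lines: greedily query the input where the GP posterior variance is largest, so samples spread toward the least-explored regions. The kernel, hyperparameters, and function names below are illustrative assumptions, not the cited paper's implementation:

```python
# Greedy maximum-variance sampling with a GP (illustrative sketch).
import numpy as np

def rbf(A, B, ls=1.0, sv=1.0):
    """RBF kernel matrix between 1-D input arrays A and B."""
    return sv * np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ls**2)

def posterior_variance(X_train, X_cand, noise=1e-2, ls=1.0, sv=1.0):
    """GP predictive variance at each candidate; depends on inputs only."""
    if X_train.size == 0:
        return np.full(X_cand.size, sv)      # prior variance everywhere
    K = rbf(X_train, X_train, ls, sv) + noise * np.eye(X_train.size)
    Ks = rbf(X_train, X_cand, ls, sv)
    return sv - np.einsum('ij,ij->j', Ks, np.linalg.solve(K, Ks))

def max_variance_selection(candidates, n_samples, **kw):
    """Greedily pick the inputs where the model is most uncertain."""
    chosen = np.empty((0,))
    for _ in range(n_samples):
        var = posterior_variance(chosen, candidates, **kw)
        chosen = np.append(chosen, candidates[np.argmax(var)])
    return chosen

grid = np.linspace(0.0, 10.0, 101)
picked = max_variance_selection(grid, 5)
# The greedy picks spread out across the input space rather than clustering.
```

A useful property of this criterion is that the variance depends only on the input locations, not the observed outputs, so the query points can be planned before any measurements are taken.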
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - Distributionally Robust Statistical Verification with Imprecise Neural
Networks [4.094049541486327]
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems.
This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification.
arXiv Detail & Related papers (2023-08-28T18:06:24Z) - Learning-Based Optimal Control with Performance Guarantees for Unknown Systems with Latent States [4.4820711784498]
This paper proposes a novel method for the computation of an optimal input trajectory for unknown nonlinear systems with latent states.
The effectiveness of the proposed method is demonstrated in a numerical simulation.
arXiv Detail & Related papers (2023-03-31T11:06:09Z) - Model Predictive Control with Gaussian-Process-Supported Dynamical
Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z) - Safe Multi-agent Learning via Trapping Regions [89.24858306636816]
We apply the concept of trapping regions, known from the qualitative theory of dynamical systems, to create safety sets in the joint strategy space for decentralized learning.
We propose a binary partitioning algorithm for verification that candidate sets form trapping regions in systems with known learning dynamics, and a sampling algorithm for scenarios where learning dynamics are not known.
arXiv Detail & Related papers (2023-02-27T14:47:52Z) - Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z) - Deep Learning Explicit Differentiable Predictive Control Laws for
Buildings [1.4121977037543585]
We present a differentiable predictive control (DPC) methodology for learning constrained control laws for unknown nonlinear systems.
DPC poses an approximate solution to multiparametric programming problems emerging from explicit nonlinear model predictive control (MPC).
arXiv Detail & Related papers (2021-07-25T16:47:57Z) - Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z) - Lyapunov-Based Reinforcement Learning for Decentralized Multi-Agent
Control [3.3788926259119645]
In decentralized multi-agent control, systems are complex with unknown or highly uncertain dynamics.
Deep reinforcement learning (DRL) is promising for learning the controller/policy from data without knowing the system dynamics.
Existing multi-agent reinforcement learning (MARL) algorithms cannot ensure the closed-loop stability of a multi-agent system.
We propose a new MARL algorithm for decentralized multi-agent control with a stability guarantee.
arXiv Detail & Related papers (2020-09-20T06:11:42Z) - Anticipating the Long-Term Effect of Online Learning in Control [75.6527644813815]
AntLer is a design algorithm for learning-based control laws that anticipates learning.
We show that AntLer approximates an optimal solution arbitrarily accurately with probability one.
arXiv Detail & Related papers (2020-07-24T07:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.