A Policy Iteration Approach for Flock Motion Control
- URL: http://arxiv.org/abs/2303.10035v1
- Date: Fri, 17 Mar 2023 15:04:57 GMT
- Title: A Policy Iteration Approach for Flock Motion Control
- Authors: Shuzheng Qu, Mohammed Abouheaf, Wail Gueaieb and Davide Spinello
- Abstract summary: The overall control process guides the agents while monitoring flock cohesiveness and localization.
An online model-free policy iteration mechanism is developed here to guide a flock of agents to follow an independent command generator.
The simulation results of the policy iteration mechanism revealed fast learning and convergence with lower computational effort.
- Score: 5.419608513284392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Flocking motion control is concerned with managing the possible conflicts between the local and team objectives of multi-agent systems. The overall control process guides the agents while monitoring flock cohesiveness and localization. The underlying mechanisms may degrade when the unmodeled uncertainties associated with the flock dynamics and formation are overlooked. On the other hand, the efficiency of a control design depends on how quickly it can adapt to different dynamic situations in real time. An online model-free policy iteration mechanism is developed here to guide a flock of agents to follow an independent command generator over a time-varying graph topology. The strength of connectivity between any two agents, i.e., the graph edge weight, is decided using a position-dependent adjacency function. An online recursive least squares approach is adopted to tune the guidance strategies without knowledge of the dynamics of the agents or those of the command generator. The approach is compared with a reinforcement learning approach from the literature that is based on a value iteration technique. The simulation results of the policy iteration mechanism revealed fast learning and convergence with lower computational effort.
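The abstract states only that the graph edge weight between two agents is computed from a position-dependent adjacency function. A minimal sketch of one such choice is given below; the Gaussian kernel, communication radius, and all constants are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def edge_weight(p_i, p_j, comm_radius=10.0, sigma=3.0):
    # Hypothetical position-dependent adjacency weight: a Gaussian kernel of
    # inter-agent distance with a hard communication cutoff. The paper does
    # not specify the functional form; this is an assumed example.
    dist = np.linalg.norm(np.asarray(p_i) - np.asarray(p_j))
    if dist > comm_radius:                        # out of range: no edge
        return 0.0
    return np.exp(-dist**2 / (2.0 * sigma**2))    # closer agents couple more strongly

def adjacency_matrix(positions):
    # Rebuilding this matrix from the current agent positions at every step
    # is what makes the graph topology time-varying.
    n = len(positions)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = edge_weight(positions[i], positions[j])
    return A
```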
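The online model-free policy iteration loop itself could look roughly like the sketch below, assuming a quadratic Q-function over joint state-action features: policy evaluation solves the Bellman equation by recursive least squares over temporal-difference regressors, and policy improvement is greedy with respect to the identified kernel. The class, feature basis, forgetting factor, and regularization constant are all assumptions for illustration; this is not the authors' published code.

```python
import numpy as np

def quad_features(x, u):
    # Quadratic basis for Q(x, u) = z' H z with z = [x; u] (upper-triangular terms).
    z = np.concatenate([x, u])
    n = z.size
    return np.array([z[i] * z[j] for i in range(n) for j in range(i, n)])

class RLSPolicyIteration:
    # Sketch of online model-free policy iteration: the Q-function Bellman
    # equation Q(x_k, u_k) = U(x_k, u_k) + gamma * Q(x_{k+1}, mu(x_{k+1}))
    # is solved as a recursive-least-squares regression, so no model of the
    # agents or the command generator is needed. All constants are illustrative.

    def __init__(self, n_x, n_u, gamma=0.9, lam=0.99):
        self.n_x, self.n_u, self.gamma, self.lam = n_x, n_u, gamma, lam
        n_feat = (n_x + n_u) * (n_x + n_u + 1) // 2
        self.w = np.zeros(n_feat)        # Q-function weight estimate
        self.P = np.eye(n_feat) * 1e3    # RLS covariance
        self.K = np.zeros((n_u, n_x))    # feedback gain, u = -K x

    def policy(self, x, explore=0.1):
        # Current policy plus exploration noise for persistent excitation.
        return -self.K @ x + explore * np.random.randn(self.n_u)

    def rls_update(self, x, u, cost, x_next):
        # One policy-evaluation step from a measured transition.
        u_next = -self.K @ x_next        # action under the current policy
        phi = quad_features(x, u) - self.gamma * quad_features(x_next, u_next)
        g = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.w += g * (cost - phi @ self.w)
        self.P = (self.P - np.outer(g, phi @ self.P)) / self.lam

    def improve_policy(self):
        # Greedy improvement: u = -inv(H_uu) H_ux x from the identified kernel.
        n = self.n_x + self.n_u
        H = np.zeros((n, n))
        idx = 0
        for i in range(n):
            for j in range(i, n):
                H[i, j] = H[j, i] = self.w[idx] / (1.0 if i == j else 2.0)
                idx += 1
        H_uu = H[self.n_x:, self.n_x:]
        H_ux = H[self.n_x:, :self.n_x]
        self.K = np.linalg.solve(H_uu + 1e-6 * np.eye(self.n_u), H_ux)
```

In use, each control step would call `policy`, apply the action, observe the stage cost and next state, and call `rls_update`; `improve_policy` is invoked once the weight estimate settles, mirroring the evaluate-then-improve structure of policy iteration.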
Related papers
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- SAFE-SIM: Safety-Critical Closed-Loop Traffic Simulation with Diffusion-Controllable Adversaries [94.84458417662407]
We introduce SAFE-SIM, a controllable closed-loop safety-critical simulation framework.
Our approach yields two distinct advantages: 1) generating realistic long-tail safety-critical scenarios that closely reflect real-world conditions, and 2) providing controllable adversarial behavior for more comprehensive and interactive evaluations.
We validate our framework empirically using the nuScenes and nuPlan datasets across multiple planners, demonstrating improvements in both realism and controllability.
arXiv Detail & Related papers (2023-12-31T04:14:43Z)
- Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z)
- Decentralized Adversarial Training over Graphs [55.28669771020857]
The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years.
This work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength.
arXiv Detail & Related papers (2023-03-23T15:05:16Z)
- An Adaptive Fuzzy Reinforcement Learning Cooperative Approach for the Autonomous Control of Flock Systems [4.961066282705832]
This work introduces an adaptive distributed robustness technique for the autonomous control of flock systems.
Its relatively flexible structure is based on online fuzzy reinforcement learning schemes which simultaneously target a number of objectives.
In addition to its resilience in the face of dynamic disturbances, the algorithm does not require more than the agent position as a feedback signal.
arXiv Detail & Related papers (2023-03-17T13:07:35Z)
- Isolating and Leveraging Controllable and Noncontrollable Visual Dynamics in World Models [65.97707691164558]
We present Iso-Dream, which improves the Dream-to-Control framework in two aspects.
First, by optimizing inverse dynamics, we encourage the world model to learn controllable and noncontrollable sources.
Second, we optimize the behavior of the agent on the decoupled latent imaginations of the world model.
arXiv Detail & Related papers (2022-05-27T08:07:39Z)
- TASAC: a twin-actor reinforcement learning framework with stochastic policy for batch process control [1.101002667958165]
Reinforcement Learning (RL), wherein an agent learns the policy by directly interacting with the environment, offers a potential alternative in this context.
RL frameworks with actor-critic architecture have recently become popular for controlling systems where state and action spaces are continuous.
It has been shown that an ensemble of actor and critic networks further helps the agent learn better policies, owing to the enhanced exploration afforded by simultaneous policy learning.
arXiv Detail & Related papers (2022-04-22T13:00:51Z)
- Relative Distributed Formation and Obstacle Avoidance with Multi-agent Reinforcement Learning [20.401609420707867]
We propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL).
Our method achieves better performance regarding formation error, formation convergence rate and on-par success rate of obstacle avoidance compared with baselines.
arXiv Detail & Related papers (2021-11-14T13:02:45Z)
- Trajectory Tracking of Underactuated Sea Vessels With Uncertain Dynamics: An Integral Reinforcement Learning Approach [2.064612766965483]
An online machine learning mechanism based on integral reinforcement learning is proposed to find a solution for a class of nonlinear tracking problems.
The solution is implemented using an online value iteration process which is realized by employing means of the adaptive critics and gradient descent approaches.
arXiv Detail & Related papers (2021-04-01T01:41:49Z)
- CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion [0.0]
We present CARL, a quadruped agent that can be controlled with high-level directives and react naturally to dynamic environments.
We use Generative Adversarial Networks to adapt high-level controls, such as speed and heading, to action distributions that correspond to the original animations.
Further fine-tuning through deep reinforcement learning enables the agent to recover from unseen external perturbations while producing smooth transitions.
arXiv Detail & Related papers (2020-05-07T07:18:57Z)
- Multi-Agent Interactions Modeling with Correlated Policies [53.38338964628494]
In this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework.
We develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL).
Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators.
arXiv Detail & Related papers (2020-01-04T17:31:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.