Multi-Agent Broad Reinforcement Learning for Intelligent Traffic Light
Control
- URL: http://arxiv.org/abs/2203.04310v1
- Date: Tue, 8 Mar 2022 14:04:09 GMT
- Title: Multi-Agent Broad Reinforcement Learning for Intelligent Traffic Light
Control
- Authors: Ruijie Zhu, Lulu Li, Shuning Wu, Pei Lv, Yafei Li, Mingliang Xu
- Abstract summary: Existing approaches of Multi-Agent System (MAS) are largely based on Multi-Agent Deep Reinforcement Learning (MADRL).
We propose a Multi-Agent Broad Reinforcement Learning (MABRL) framework to explore the function of BLS in MAS.
- Score: 21.87935026688773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent Traffic Light Control System (ITLCS) is a typical Multi-Agent
System (MAS), which comprises multiple roads and traffic lights. Constructing a
model of MAS for ITLCS is the basis for alleviating traffic congestion. Existing
approaches of MAS are largely based on Multi-Agent Deep Reinforcement Learning
(MADRL). Although the Deep Neural Network (DNN) structure of MADRL is effective, the
training time is long, and the parameters are difficult to trace. Recently,
Broad Learning Systems (BLS) have provided an alternative way of learning with a flat
network instead of deep neural networks. Moreover, Broad Reinforcement Learning (BRL)
extends BLS to the Single-Agent Deep Reinforcement Learning (SADRL) problem with
promising results. However, BRL does not focus on the intricate structures and
interactions of agents. Motivated by the features of MADRL and the issues of BRL,
we propose a Multi-Agent Broad Reinforcement Learning (MABRL) framework to
explore the function of BLS in MAS. Firstly, unlike most existing MADRL
approaches, which use a series of deep neural networks structures, we model
each agent with broad networks. Then, we introduce a dynamic self-cycling
interaction mechanism to confirm the "3W" information: When to interact, Which
agents need to be considered, and What information to transmit. Finally, we conduct
experiments based on the intelligent traffic light control scenario. We compare
the MABRL approach with six different approaches, and experimental results on
three datasets verify the effectiveness of MABRL.
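The core idea behind BLS, which MABRL applies to each agent, is a flat network whose output weights are solved in closed form rather than trained by backpropagation. The sketch below illustrates this general BLS structure: inputs are mapped to random feature nodes, expanded into enhancement nodes, and a single output layer is fit by ridge regression. The layer sizes, the tanh nonlinearity, and the function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_fit(X, Y, n_feature=20, n_enhance=40, reg=1e-2):
    """Fit a flat Broad Learning System: random maps + ridge output."""
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = np.tanh(X @ Wf)                      # feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                      # enhancement nodes
    A = np.hstack([Z, H])                    # flat concatenation
    # Output weights by closed-form ridge regression: no backpropagation,
    # which is why BLS training is fast and its parameters are traceable.
    Wout = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, Wout

def bls_predict(params, X):
    Wf, We, Wout = params
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ Wout

# Toy usage: regress a simple function of random state vectors.
X = rng.standard_normal((200, 8))
Y = X[:, :1] ** 2
params = bls_fit(X, Y)
pred = bls_predict(params, X)
print(pred.shape)
```

Because the only learned parameters are the output weights, retraining after adding nodes or data reduces to another linear solve, which is the efficiency argument for using broad networks per agent.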
Related papers
- Expert-Free Online Transfer Learning in Multi-Agent Reinforcement Learning [0.0]
Transfer Learning (TL) aims to reduce the learning complexity for an agent dealing with an unfamiliar task.
It enables the use of external knowledge from other tasks or agents to enhance a learning process.
This is achieved by lowering the amount of new information required by its learning model, resulting in a reduced overall convergence time.
arXiv Detail & Related papers (2025-01-26T11:53:18Z) - MALT: Improving Reasoning with Multi-Agent LLM Training [64.13803241218886]
We present a first step toward "Multi-agent LLM training" (MALT) on reasoning problems.
Our approach employs a sequential multi-agent setup with heterogeneous LLMs assigned specialized roles.
We evaluate our approach across MATH, GSM8k, and CQA, where MALT on Llama 3.1 8B models achieves relative improvements of 14.14%, 7.12%, and 9.40% respectively.
arXiv Detail & Related papers (2024-12-02T19:30:36Z) - Online Multi-modal Root Cause Analysis [61.94987309148539]
Root Cause Analysis (RCA) is essential for pinpointing the root causes of failures in microservice systems.
Existing online RCA methods handle only single-modal data, overlooking complex interactions in multi-modal systems.
We introduce OCEAN, a novel online multi-modal causal structure learning method for root cause localization.
arXiv Detail & Related papers (2024-10-13T21:47:36Z) - An Examination of Offline-Trained Encoders in Vision-Based Deep Reinforcement Learning for Autonomous Driving [0.0]
Research investigates the challenges Deep Reinforcement Learning (DRL) faces in Partially Observable Markov Decision Processes (POMDP)
Our research adopts an offline-trained encoder to leverage large video datasets through self-supervised learning to learn generalizable representations.
We show that the features learned by watching BDD100K driving videos can be directly transferred to achieve lane following and collision avoidance in CARLA simulator.
arXiv Detail & Related papers (2024-09-02T14:16:23Z) - MAIDCRL: Semi-centralized Multi-Agent Influence Dense-CNN Reinforcement
Learning [0.7366405857677227]
We present a semi-centralized Dense Reinforcement Learning algorithm enhanced by agent influence maps (AIMs) for learning effective multi-agent control on StarCraft Multi-Agent Challenge (SMAC) scenarios.
The results show that the CNN-enabled MAIDCRL significantly improved the learning performance and achieved a faster learning rate compared to the existing MAIDRL.
arXiv Detail & Related papers (2024-02-12T18:53:20Z) - M2CURL: Sample-Efficient Multimodal Reinforcement Learning via Self-Supervised Representation Learning for Robotic Manipulation [0.7564784873669823]
We propose Multimodal Contrastive Unsupervised Reinforcement Learning (M2CURL)
Our approach employs a novel multimodal self-supervised learning technique that learns efficient representations and contributes to faster convergence of RL algorithms.
We evaluate M2CURL on the Tactile Gym 2 simulator and we show that it significantly enhances the learning efficiency in different manipulation tasks.
arXiv Detail & Related papers (2024-01-30T14:09:35Z) - MADiff: Offline Multi-agent Learning with Diffusion Models [79.18130544233794]
MADiff is a diffusion-based multi-agent learning framework.
It works as both a decentralized policy and a centralized controller.
Our experiments demonstrate that MADiff outperforms baseline algorithms across various multi-agent learning tasks.
arXiv Detail & Related papers (2023-05-27T02:14:09Z) - Multitask Adaptation by Retrospective Exploration with Learned World
Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the expected agent's performance by selecting promising trajectories solving prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Single and Multi-Agent Deep Reinforcement Learning for AI-Enabled
Wireless Networks: A Tutorial [29.76086936463468]
This tutorial focuses on the role of Deep Reinforcement Learning (DRL) with an emphasis on deep Multi-Agent Reinforcement Learning (MARL) for AI-enabled 6G networks.
The first part of this paper will present a clear overview of the mathematical frameworks for single-agent RL and MARL.
We provide a selective description of RL algorithms such as Model-Based RL (MBRL) and cooperative MARL.
arXiv Detail & Related papers (2020-11-06T22:12:40Z) - Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model the data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.