Temporal Transfer Learning for Traffic Optimization with Coarse-grained
Advisory Autonomy
- URL: http://arxiv.org/abs/2312.09436v1
- Date: Mon, 27 Nov 2023 21:18:06 GMT
- Authors: Jung-Hoon Cho, Sirui Li, Jeongyun Kim, Cathy Wu
- Abstract summary: This paper considers advisory autonomy, in which real-time driving advisories are issued to drivers.
We introduce Temporal Transfer Learning (TTL) algorithms to select source tasks, systematically leveraging the temporal structure to solve the full range of tasks.
- Score: 5.254384872541785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent development of connected and automated vehicle (CAV) technologies
has spurred investigations to optimize dense urban traffic. This paper
considers advisory autonomy, in which real-time driving advisories are issued
to drivers, thus blending the CAV and the human driver. Due to the complexity
of traffic systems, recent studies of coordinating CAVs have resorted to
leveraging deep reinforcement learning (RL). Advisory autonomy is formalized as
zero-order holds, and we consider hold durations ranging from 0.1 to 40
seconds. However, despite the similarity of the higher-frequency tasks to
direct CAV control, a direct application of deep RL fails to generalize to advisory autonomy
tasks. We introduce Temporal Transfer Learning (TTL) algorithms to select
source tasks, systematically leveraging the temporal structure to solve the
full range of tasks. TTL selects the most suitable source tasks to maximize the
performance of the range of tasks. We validate our algorithms on diverse
mixed-traffic scenarios, demonstrating that TTL more reliably solves the tasks
than baselines. This paper underscores the potential of coarse-grained advisory
autonomy with TTL in traffic flow optimization.
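The zero-order-hold formalization above can be illustrated with a minimal environment wrapper: the advisory action is held fixed for the whole hold duration while the underlying traffic simulation steps at a finer timestep. This is a hypothetical sketch, not the paper's implementation; the class and parameter names (`ZeroOrderHoldWrapper`, `base_env`, `dt`) are illustrative assumptions.

```python
class ZeroOrderHoldWrapper:
    """Sketch of a zero-order hold over an RL environment's actions.

    A hold_duration of 0.1 s with dt=0.1 recovers per-step control;
    a hold_duration of 40 s repeats one advisory for 400 sim steps,
    matching the coarse end of the paper's 0.1-40 s range.
    """

    def __init__(self, base_env, hold_duration, dt=0.1):
        self.base_env = base_env
        # Number of fine-grained simulation steps per held advisory.
        self.steps_per_hold = max(1, round(hold_duration / dt))

    def step(self, action):
        total_reward = 0.0
        obs, done, info = None, False, {}
        # Repeat the same advisory action for the entire hold interval.
        for _ in range(self.steps_per_hold):
            obs, reward, done, info = self.base_env.step(action)
            total_reward += reward
            if done:
                break
        return obs, total_reward, done, info
```

Under this wrapper, each hold duration defines a distinct task; TTL's role is to pick which hold durations to train on so that policies transfer across the rest of the range.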
Related papers
- DNN Partitioning, Task Offloading, and Resource Allocation in Dynamic Vehicular Networks: A Lyapunov-Guided Diffusion-Based Reinforcement Learning Approach [49.56404236394601]
We formulate the problem of joint DNN partitioning, task offloading, and resource allocation in Vehicular Edge Computing.
Our objective is to minimize the DNN-based task completion time while guaranteeing the system stability over time.
We propose a Multi-Agent Diffusion-based Deep Reinforcement Learning (MAD2RL) algorithm, incorporating the innovative use of diffusion models.
arXiv Detail & Related papers (2024-06-11T06:31:03Z)
- Generalizing Cooperative Eco-driving via Multi-residual Task Learning [6.864745785996583]
Multi-residual Task Learning (MRTL) is a generic learning framework based on multi-task learning.
MRTL decomposes control into nominal components that are effectively solved by conventional control methods and residual terms.
We employ MRTL for fleet-level emission reduction in mixed traffic using autonomous vehicles as a means of system control.
arXiv Detail & Related papers (2024-03-07T05:25:34Z)
- Multi-agent Path Finding for Cooperative Autonomous Driving [8.8305853192334]
We devise an optimal and complete algorithm, Order-based Search with Kinematics Arrival Time Scheduling (OBS-KATS), which significantly outperforms existing algorithms.
Our work is directly applicable to many similarly scaled traffic and multi-robot scenarios with directed lanes.
arXiv Detail & Related papers (2024-02-01T04:39:15Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study which analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- HARL: Hierarchical Adaptive Reinforcement Learning Based Auto Scheduler for Neural Networks [51.71682428015139]
We propose HARL, a reinforcement learning-based auto-scheduler for efficient tensor program exploration.
HARL improves tensor operator performance by 22% and search speed by 4.3x compared to the state-of-the-art auto-scheduler.
Inference performance and search speed are also significantly improved on end-to-end neural networks.
arXiv Detail & Related papers (2022-11-21T04:15:27Z)
- Teal: Learning-Accelerated Optimization of WAN Traffic Engineering [68.7863363109948]
We present Teal, a learning-based TE algorithm that leverages the parallel processing power of GPUs to accelerate TE control.
To reduce the problem scale and make learning tractable, Teal employs a multi-agent reinforcement learning (RL) algorithm to independently allocate each traffic demand.
Compared with other TE acceleration schemes, Teal satisfies 6--32% more traffic demand and yields 197--625x speedups.
arXiv Detail & Related papers (2022-10-25T04:46:30Z)
- Semantic-Aware Collaborative Deep Reinforcement Learning Over Wireless Cellular Networks [82.02891936174221]
Collaborative deep reinforcement learning (CDRL), in which multiple agents coordinate over a wireless network, is a promising approach.
In this paper, a novel semantic-aware CDRL method is proposed to enable a group of untrained agents with semantically-linked DRL tasks to collaborate efficiently across a resource-constrained wireless cellular network.
arXiv Detail & Related papers (2021-11-23T18:24:47Z)
- Integrated Decision and Control: Towards Interpretable and Efficient Driving Intelligence [13.589285628074542]
We present an interpretable and efficient decision and control framework for automated vehicles.
It decomposes the driving task into multi-path planning and optimal tracking that are structured hierarchically.
Results show that our method achieves better online computational efficiency and driving performance, including traffic efficiency and safety.
arXiv Detail & Related papers (2021-03-18T14:43:31Z)
- Leveraging the Capabilities of Connected and Autonomous Vehicles and Multi-Agent Reinforcement Learning to Mitigate Highway Bottleneck Congestion [2.0010674945048468]
We present an RL-based multi-agent CAV control model to operate in mixed traffic.
The results suggest that even when CAVs make up as little as 10% of corridor traffic, they can significantly mitigate highway bottlenecks.
arXiv Detail & Related papers (2020-10-12T03:52:10Z)
- Multi-Vehicle Routing Problems with Soft Time Windows: A Multi-Agent Reinforcement Learning Approach [9.717648122961483]
The multi-vehicle routing problem with soft time windows (MVRPSTW) is an indispensable component of urban logistics systems.
Traditional methods face a trade-off between computational efficiency and solution quality.
We propose a novel reinforcement learning algorithm, the Multi-Agent Attention Model, which solves routing problems instantly while benefiting from lengthy offline training.
arXiv Detail & Related papers (2020-02-13T14:26:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.