PeRP: Personalized Residual Policies For Congestion Mitigation Through
Co-operative Advisory Systems
- URL: http://arxiv.org/abs/2308.00864v2
- Date: Tue, 15 Aug 2023 22:31:40 GMT
- Title: PeRP: Personalized Residual Policies For Congestion Mitigation Through
Co-operative Advisory Systems
- Authors: Aamir Hasan, Neeloy Chakraborty, Haonan Chen, Jung-Hoon Cho, Cathy Wu,
Katherine Driggs-Campbell
- Abstract summary: Piecewise Constant (PC) Policies address the limitations of precise fleet control by structurally modeling the likeness of human driving to reduce traffic congestion.
We develop a co-operative advisory system based on PC policies with a novel driver trait conditioned Personalized Residual Policy, PeRP.
We show that our approach successfully mitigates congestion while adapting to different driver behaviors, with 4 to 22% improvement in average speed over baselines.
- Score: 12.010221998198423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent driving systems can be used to mitigate congestion through simple
actions, thus improving many socioeconomic factors such as commute time and gas
costs. However, these systems assume precise control over autonomous vehicle
fleets, and are hence limited in practice as they fail to account for
uncertainty in human behavior. Piecewise Constant (PC) Policies address these
issues by structurally modeling the likeness of human driving, providing action
advice that human drivers can follow to reduce traffic congestion in dense
scenarios. However, PC policies assume that all drivers behave similarly. To
address this, we develop a co-operative advisory system based on PC policies with a
novel driver trait conditioned Personalized Residual Policy, PeRP. PeRP advises
drivers to behave in ways that mitigate traffic congestion. We first infer the
driver's intrinsic traits on how they follow instructions in an unsupervised
manner with a variational autoencoder. Then, a policy conditioned on the
inferred trait adapts the action of the PC policy to provide the driver with a
personalized recommendation. Our system is trained in simulation with novel
driver modeling of instruction adherence. We show that our approach
successfully mitigates congestion while adapting to different driver behaviors,
with 4 to 22% improvement in average speed over baselines.
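To make the described pipeline concrete, below is a minimal, hypothetical sketch of how the two stages might fit together in code: a variational-autoencoder-style encoder infers a latent driver trait from recent instruction-following behavior, and a trait-conditioned residual policy adds a bounded correction to the action advised by the Piecewise Constant (PC) policy. All class names, dimensions, and interfaces are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the PeRP inference-time pipeline (illustrative only).
import torch
import torch.nn as nn


class TraitEncoder(nn.Module):
    """VAE-style encoder: maps a window of observed driver behavior to a latent trait."""

    def __init__(self, obs_dim: int, trait_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, trait_dim)
        self.log_var = nn.Linear(64, trait_dim)

    def forward(self, behavior_window: torch.Tensor) -> torch.Tensor:
        h = self.backbone(behavior_window)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick; at inference time the mean alone could be used.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)


class ResidualPolicy(nn.Module):
    """Outputs a bounded correction to the action advised by the PC policy."""

    def __init__(self, obs_dim: int, trait_dim: int, max_residual: float = 2.0):
        super().__init__()
        self.max_residual = max_residual
        self.net = nn.Sequential(
            nn.Linear(obs_dim + trait_dim + 1, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Tanh(),
        )

    def forward(self, obs, trait, pc_action):
        x = torch.cat([obs, trait, pc_action], dim=-1)
        return self.max_residual * self.net(x)


def personalized_advice(obs, behavior_window, pc_action, encoder, residual):
    """Final advisory = PC policy action + trait-conditioned residual correction."""
    trait = encoder(behavior_window)
    return pc_action + residual(obs, trait, pc_action)


# Usage with dummy tensors (batch of 1); dimensions are arbitrary placeholders.
encoder = TraitEncoder(obs_dim=8, trait_dim=4)
residual = ResidualPolicy(obs_dim=8, trait_dim=4)
obs = torch.zeros(1, 8)               # current traffic observation
behavior_window = torch.zeros(1, 8)   # recent instruction-following behavior
pc_action = torch.zeros(1, 1)         # action advised by the PC policy
advice = personalized_advice(obs, behavior_window, pc_action, encoder, residual)
```

The residual formulation keeps the PC policy's advice as a baseline, so the learned network only has to model per-driver deviations from that advice.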
Related papers
- GPT-Augmented Reinforcement Learning with Intelligent Control for Vehicle Dispatching [82.19172267487998]
This paper introduces GARLIC: a framework of GPT-Augmented Reinforcement Learning with Intelligent Control for vehicle dispatching.
arXiv Detail & Related papers (2024-08-19T08:23:38Z) - Cooperative Advisory Residual Policies for Congestion Mitigation [11.33450610735004]
We develop a class of learned residual policies that can be used in cooperative advisory systems.
Our policies advise drivers to behave in ways that mitigate traffic congestion while accounting for diverse driver behaviors.
Our approaches successfully mitigate congestion while adapting to different driver behaviors.
arXiv Detail & Related papers (2024-06-30T01:10:13Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving where agents deploy a game-theoretic version of iterative best response.
arXiv Detail & Related papers (2023-04-23T16:01:36Z) - Towards Co-operative Congestion Mitigation [5.358968214341347]
Piecewise constant driving policies have shown promise in helping mitigate traffic congestion in simulation environments.
We propose to evaluate these policies through the use of a shared control framework in a collaborative experiment with the human driver.
arXiv Detail & Related papers (2023-02-17T21:08:55Z) - Residual Policy Learning for Powertrain Control [2.064612766965483]
This paper outlines an active driver assistance approach that uses a residual policy learning (RPL) agent to provide residual actions to default powertrain controllers.
By implementing the approach on a simulated commercial vehicle in various car-following scenarios, we find that the RPL agent quickly learns significantly improved policies compared to a baseline source policy.
arXiv Detail & Related papers (2022-12-15T04:22:21Z) - Learning Latent Traits for Simulated Cooperative Driving Tasks [10.009803620912777]
We build a framework capable of capturing a compact latent representation of the human in terms of their behavior and preferences.
We then build a lightweight simulation environment, HMIway-env, for modelling one form of distracted driving behavior.
We finally use this environment to quantify both the ability to discriminate drivers and the effectiveness of intervention policies.
arXiv Detail & Related papers (2022-07-20T02:27:18Z) - Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning planner that trains a neural network to predict acceleration and steering angle.
In order to deploy the system on board a real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z) - Equilibrium Inverse Reinforcement Learning for Ride-hailing Vehicle
Network [1.599072005190786]
We formulate the problem of passenger-vehicle matching in a sparsely connected graph.
We propose an algorithm to derive an equilibrium policy in a multi-agent environment.
arXiv Detail & Related papers (2021-02-13T03:18:44Z) - Learning from Simulation, Racing in Reality [126.56346065780895]
We present a reinforcement learning-based solution to autonomously race on a miniature race car platform.
We show that a policy that is trained purely in simulation can be successfully transferred to the real robotic setup.
arXiv Detail & Related papers (2020-11-26T14:58:49Z) - Emergent Road Rules In Multi-Agent Driving Environments [84.82583370858391]
We analyze what ingredients in driving environments cause the emergence of road rules.
We find that two crucial factors are noisy perception and agents' spatial density.
Our results add empirical support for the social road rules that countries worldwide have agreed on for safe, efficient driving.
arXiv Detail & Related papers (2020-11-21T09:43:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.