Towards Co-operative Congestion Mitigation
- URL: http://arxiv.org/abs/2302.09140v1
- Date: Fri, 17 Feb 2023 21:08:55 GMT
- Title: Towards Co-operative Congestion Mitigation
- Authors: Aamir Hasan, Neeloy Chakraborty, Cathy Wu, and Katherine
Driggs-Campbell
- Abstract summary: Piecewise constant driving policies have shown promise in helping mitigate traffic congestion in simulation environments.
We propose to evaluate these policies through the use of a shared control framework in a collaborative experiment with the human driver.
- Score: 5.358968214341347
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The effects of traffic congestion are widespread and impede everyday
life. Piecewise constant driving policies have shown promise in helping mitigate
traffic congestion in simulation environments. However, no existing work tests
these policies in situations involving real human users. Thus, we propose to
evaluate these policies through a shared control framework in a collaborative
experiment, with the human driver and the driving policy aiming to co-operatively
mitigate congestion. We intend to use the CARLA simulator alongside the Flow
framework to conduct user studies that evaluate the effect of piecewise constant
driving policies. As such, we present our in-progress work in building our
framework and discuss our proposed plan for evaluating it through a
human-in-the-loop simulation user study.
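For intuition, a minimal sketch of the piecewise constant idea follows: an advised speed is held fixed for a set number of simulation steps before it may change, keeping the guidance coarse enough for a human driver to follow. The class name, candidate speeds, hold length, and selection rule below are illustrative assumptions; this is not the authors' implementation nor the Flow or CARLA API.

```python
import numpy as np

class PiecewiseConstantPolicy:
    """Illustrative piecewise constant driving policy (a sketch, not the
    authors' code or the Flow/CARLA API). The advised speed is held fixed
    for `hold_steps` simulation steps before it is allowed to change."""

    def __init__(self, hold_steps=50, speed_candidates=(4.0, 6.0, 8.0)):
        self.hold_steps = hold_steps
        self.speed_candidates = np.asarray(speed_candidates)
        self._steps_since_update = hold_steps  # force an update on the first call
        self._advice = float(speed_candidates[0])

    def act(self, observation):
        """Return an advised speed (m/s); it only changes every `hold_steps` calls."""
        if self._steps_since_update >= self.hold_steps:
            # Placeholder selection rule: snap the mean observed speed of
            # nearby vehicles to the nearest candidate advised speed.
            target = float(np.mean(observation))
            nearest = int(np.argmin(np.abs(self.speed_candidates - target)))
            self._advice = float(self.speed_candidates[nearest])
            self._steps_since_update = 0
        self._steps_since_update += 1
        return self._advice

if __name__ == "__main__":
    policy = PiecewiseConstantPolicy(hold_steps=50)
    rng = np.random.default_rng(0)
    for step in range(200):
        speeds = rng.uniform(3.0, 9.0, size=4)  # mock speeds of nearby vehicles
        advice = policy.act(speeds)
        if step % 50 == 0:
            print(f"step {step:3d}: advised speed = {advice:.1f} m/s")
```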
Related papers
- Towards Interactive and Learnable Cooperative Driving Automation: a Large Language Model-Driven Decision-Making Framework [79.088116316919]
Connected Autonomous Vehicles (CAVs) have begun open-road testing around the world, but their safety and efficiency performance in complex scenarios is still not satisfactory.
This paper proposes CoDrivingLLM, an interactive and learnable LLM-driven cooperative driving framework.
arXiv Detail & Related papers (2024-09-19T14:36:00Z) - Cooperative Advisory Residual Policies for Congestion Mitigation [11.33450610735004]
We develop a class of learned residual policies that can be used in cooperative advisory systems.
Our policies advise drivers to behave in ways that mitigate traffic congestion while accounting for diverse driver behaviors.
Our approaches successfully mitigate congestion while adapting to different driver behaviors.
arXiv Detail & Related papers (2024-06-30T01:10:13Z) - Evaluating Real-World Robot Manipulation Policies in Simulation [91.55267186958892]
Control and visual disparities between real and simulated environments are key challenges for reliable simulated evaluation.
We propose approaches for mitigating these gaps without needing to craft full-fidelity digital twins of real-world environments.
We create SIMPLER, a collection of simulated environments for manipulation policy evaluation on common real robot setups.
arXiv Detail & Related papers (2024-05-09T17:30:16Z) - Interaction-Aware Decision-Making for Autonomous Vehicles in Forced
Merging Scenario Leveraging Social Psychology Factors [7.812717451846781]
We consider a behavioral model that incorporates both social behaviors and personal objectives of the interacting drivers.
We develop a receding-horizon control-based decision-making strategy that estimates online the other drivers' intentions.
arXiv Detail & Related papers (2023-09-25T19:49:14Z) - PeRP: Personalized Residual Policies For Congestion Mitigation Through
Co-operative Advisory Systems [12.010221998198423]
Piecewise Constant (PC) policies structurally model the likeness of human driving to reduce traffic congestion.
We develop a co-operative advisory system based on PC policies with a novel driver-trait-conditioned Personalized Residual Policy, PeRP (a residual-policy sketch follows this list).
We show that our approach successfully mitigates congestion while adapting to different driver behaviors, with a 4 to 22% improvement in average speed over baselines.
arXiv Detail & Related papers (2023-08-01T22:25:40Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - Studying the Impact of Semi-Cooperative Drivers on Overall Highway Flow [76.38515853201116]
Semi-cooperative behaviors are intrinsic properties of human drivers and should be considered for autonomous driving.
New autonomous planners can consider the social value orientation (SVO) of human drivers to generate socially-compliant trajectories.
We present a study of implicit semi-cooperative driving where agents deploy a game-theoretic version of iterative best response.
arXiv Detail & Related papers (2023-04-23T16:01:36Z) - Learning Interactive Driving Policies via Data-driven Simulation [125.97811179463542]
Data-driven simulators promise high data-efficiency for driving policy learning.
Small underlying datasets often lack interesting and challenging edge cases for learning interactive driving.
We propose a simulation method that uses in-painted ado vehicles for learning robust driving policies.
arXiv Detail & Related papers (2021-11-23T20:14:02Z) - Learning Interaction-aware Guidance Policies for Motion Planning in
Dense Traffic Scenarios [8.484564880157148]
This paper presents a novel framework for interaction-aware motion planning in dense traffic scenarios.
We propose to learn, via deep Reinforcement Learning (RL), an interaction-aware policy providing global guidance about the cooperativeness of other vehicles.
The learned policy can reason and guide the local optimization-based planner with interactive behavior to pro-actively merge in dense traffic while remaining safe in case the other vehicles do not yield.
arXiv Detail & Related papers (2021-07-09T16:43:12Z) - Real-world Ride-hailing Vehicle Repositioning using Deep Reinforcement
Learning [52.2663102239029]
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing platforms.
Our approach learns a ride-based state-value function using a batch training algorithm with deep value networks.
We benchmark our algorithm with baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency.
arXiv Detail & Related papers (2021-03-08T05:34:05Z) - CARLA Real Traffic Scenarios -- novel training ground and benchmark for
autonomous driving [8.287331387095545]
This work introduces interactive traffic scenarios in the CARLA simulator, which are based on real-world traffic.
We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods.
The CARLA Real Traffic Scenarios (CRTS) is intended to be a training and testing ground for autonomous driving systems.
arXiv Detail & Related papers (2020-12-16T13:20:39Z)
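The two advisory entries above (Cooperative Advisory Residual Policies and PeRP) both describe the advised action as a base piecewise constant action plus a learned, driver-conditioned correction. The sketch below illustrates only that composition; the linear "residual network", trait encoding, and residual bound are hypothetical stand-ins, not the papers' released code.

```python
import numpy as np

def pc_base_action(observation):
    """Stand-in piecewise constant base action: a fixed advised speed (m/s)."""
    return 6.0

class ResidualAdvisoryPolicy:
    """Illustrative residual advisory policy: advised speed = base PC action
    plus a small correction conditioned on the observation and an inferred
    driver trait. Hypothetical sketch, not the PeRP implementation."""

    def __init__(self, base_policy, n_traits=3, max_residual=1.0, seed=0):
        self.base_policy = base_policy
        self.n_traits = n_traits
        self.max_residual = max_residual
        rng = np.random.default_rng(seed)
        # Stand-in for a trained residual network: a random linear map from
        # [observation, one-hot trait] features to a scalar correction.
        self._weights = rng.normal(scale=0.1, size=4 + n_traits)

    def act(self, observation, driver_trait):
        base = self.base_policy(observation)
        trait = np.zeros(self.n_traits)
        trait[driver_trait] = 1.0
        features = np.concatenate([observation, trait])
        residual = self.max_residual * float(np.tanh(features @ self._weights))
        return base + residual  # bounded correction around the base advice

if __name__ == "__main__":
    policy = ResidualAdvisoryPolicy(pc_base_action)
    obs = np.array([5.0, 6.0, 5.5, 4.8])  # mock speeds of nearby vehicles
    for trait in range(3):
        print(f"driver trait {trait}: advised speed = {policy.act(obs, trait):.2f} m/s")
```

Conditioning the residual on a driver trait is what lets the same base advice adapt to different driving styles, which is the adaptation both entries report.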
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.