Mixed Traffic Control and Coordination from Pixels
- URL: http://arxiv.org/abs/2302.09167v4
- Date: Mon, 5 Feb 2024 18:35:36 GMT
- Title: Mixed Traffic Control and Coordination from Pixels
- Authors: Michael Villarreal, Bibek Poudel, Jia Pan, Weizi Li
- Abstract summary: Previous methods for traffic control have proven futile in alleviating current congestion levels.
This gives rise to mixed traffic control, where robot vehicles regulate human-driven vehicles through reinforcement learning (RL).
In this work, we show robot vehicles using image observations can achieve performance competitive with using precise information on environments.
- Score: 18.37701232116777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traffic congestion is a persistent problem in our society. Previous methods
for traffic control have proven futile in alleviating current congestion levels,
leading researchers to explore ideas with robot vehicles given the increased
emergence of vehicles with different levels of autonomy on our roads. This
gives rise to mixed traffic control, where robot vehicles regulate human-driven
vehicles through reinforcement learning (RL). However, most existing studies
use precise observations that require domain expertise and hand engineering for
each road network's observation space. Additionally, precise observations use
global information, such as environment outflow, and local information, i.e.,
vehicle positions and velocities. Obtaining this information requires updating
existing road infrastructure with vast sensor environments and communication to
potentially unwilling human drivers. We consider image observations, a modality
that has not been extensively explored for mixed traffic control via RL, as the
alternative: 1) images do not require a complete re-imagination of the
observation space from environment to environment; 2) images are ubiquitous
through satellite imagery, in-car camera systems, and traffic monitoring
systems; and 3) images only require communication with equipment. In this work,
we show robot vehicles using image observations can achieve performance
competitive with using precise information on environments, including ring,
figure eight, intersection, merge, and bottleneck. In certain scenarios, our
approach even outperforms using precise observations, e.g., up to an 8% increase
in average vehicle velocity in the merge environment, despite only using local
traffic information as opposed to global traffic information.
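The abstract's core idea is feeding image observations, rather than hand-engineered state vectors, to an RL-controlled robot vehicle. A minimal sketch of one plausible preprocessing step is shown below; the function name, downsampling factor, and block-averaging scheme are illustrative assumptions, not details from the paper:

```python
def preprocess_frame(frame, factor=2):
    """Downsample a grayscale frame by block-averaging and normalize
    intensities to [0, 1], yielding a flat observation vector suitable
    as RL policy input.

    frame: 2D list of ints in [0, 255]; dimensions divisible by factor.
    """
    h, w = len(frame), len(frame[0])
    obs = []
    for i in range(0, h, factor):
        for j in range(0, w, factor):
            # Average each factor x factor block of pixels.
            block = [frame[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            obs.append(sum(block) / (factor * factor) / 255.0)
    return obs

# Example: a 4x4 frame downsampled 2x gives a 4-dimensional observation.
frame = [[0, 0, 255, 255],
         [0, 0, 255, 255],
         [255, 255, 0, 0],
         [255, 255, 0, 0]]
obs = preprocess_frame(frame)  # [0.0, 1.0, 1.0, 0.0]
```

Because such a transform depends only on the camera view, the same observation pipeline carries across ring, figure eight, intersection, merge, and bottleneck environments, which is exactly the portability advantage the abstract claims over per-network precise observation spaces.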
Related papers
- Traffic control using intelligent timing of traffic lights with reinforcement learning technique and real-time processing of surveillance camera images [0.0]
The optimal timing of traffic lights is determined and applied according to several parameters.
Vehicle detection is performed with deep learning using the YOLOv9-C model.
The use of transfer learning along with retraining the model on images of Iranian cars has increased the accuracy of the model.
arXiv Detail & Related papers (2024-05-22T00:04:32Z)
- Real-Time Detection and Analysis of Vehicles and Pedestrians using Deep Learning [0.0]
Current traffic monitoring systems face major difficulties in recognizing small objects and pedestrians effectively in real time.
Our project focuses on the creation and validation of an advanced deep-learning framework capable of processing complex visual input for precise, real-time recognition of cars and people.
The YOLOv8 Large version proved to be the most effective, especially in pedestrian recognition, with great precision and robustness.
arXiv Detail & Related papers (2024-04-11T18:42:14Z)
- A Holistic Framework Towards Vision-based Traffic Signal Control with Microscopic Simulation [53.39174966020085]
Traffic signal control (TSC) is crucial for reducing traffic congestion, leading to smoother traffic flow, reduced idling time, and mitigated CO2 emissions.
In this study, we explore the computer vision approach for TSC that modulates on-road traffic flows through visual observation.
We introduce TrafficDojo, a holistic traffic simulation framework for vision-based TSC and its benchmarking.
arXiv Detail & Related papers (2024-03-11T16:42:29Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Efficient Federated Learning with Spike Neural Networks for Traffic Sign Recognition [70.306089187104]
We introduce powerful Spike Neural Networks (SNNs) into traffic sign recognition for energy-efficient and fast model training.
Numerical results indicate that the proposed federated SNN outperforms traditional federated convolutional neural networks in terms of accuracy, noise immunity, and energy efficiency.
arXiv Detail & Related papers (2022-05-28T03:11:48Z)
- Turning Traffic Monitoring Cameras into Intelligent Sensors for Traffic Density Estimation [9.096163152559054]
This paper proposes a framework for estimating traffic density using uncalibrated traffic monitoring cameras with 4L characteristics.
The proposed framework consists of two major components: camera calibration and vehicle detection.
The results show that the Mean Absolute Error (MAE) in camera calibration is less than 0.2 meters out of 6 meters, and the accuracy of vehicle detection under various conditions is approximately 90%.
arXiv Detail & Related papers (2021-10-29T15:39:06Z)
- Traffic-Net: 3D Traffic Monitoring Using a Single Camera [1.1602089225841632]
We provide a practical platform for real-time traffic monitoring using a single CCTV traffic camera.
We adapt a custom YOLOv5 deep neural network model for vehicle/pedestrian detection and an enhanced SORT tracking algorithm.
We also develop a hierarchical traffic modelling solution based on short- and long-term temporal video data streams.
arXiv Detail & Related papers (2021-09-19T16:59:01Z)
- Driving-Signal Aware Full-Body Avatars [49.89791440532946]
We present a learning-based method for building driving-signal aware full-body avatars.
Our model is a conditional variational autoencoder that can be animated with incomplete driving signals.
We demonstrate the efficacy of our approach on the challenging problem of full-body animation for virtual telepresence.
arXiv Detail & Related papers (2021-05-21T16:22:38Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the main challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm used to train a neural network for predicting both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Unsupervised Vehicle Counting via Multiple Camera Domain Adaptation [9.730985797769764]
Monitoring vehicle flows in cities is crucial to improve the urban environment and quality of life of citizens.
Current technologies for vehicle counting in images hinge on large quantities of annotated data, preventing their scalability to city-scale as new cameras are added to the system.
We propose and discuss a new methodology to design image-based vehicle density estimators with few labeled data via multiple camera domain adaptations.
arXiv Detail & Related papers (2020-04-20T13:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.