V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and
Prediction
- URL: http://arxiv.org/abs/2008.07519v1
- Date: Mon, 17 Aug 2020 17:58:26 GMT
- Title: V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and
Prediction
- Authors: Tsun-Hsuan Wang, Sivabalan Manivasagam, Ming Liang, Bin Yang, Wenyuan
Zeng, James Tu, Raquel Urtasun
- Abstract summary: We use vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles.
By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints.
- Score: 74.42961817119283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the use of vehicle-to-vehicle (V2V) communication
to improve the perception and motion forecasting performance of self-driving
vehicles. By intelligently aggregating the information received from multiple
nearby vehicles, we can observe the same scene from different viewpoints. This
allows us to see through occlusions and detect actors at long range, where the
observations are very sparse or non-existent. We also show that our approach of
sending compressed deep feature map activations achieves high accuracy while
satisfying communication bandwidth requirements.
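To make the described data flow concrete, below is a minimal sketch of the general pattern: each vehicle encodes its sensor sweep into an intermediate feature map, compresses it before transmission, and the receiver fuses the incoming maps before running its perception and prediction heads. This is a hypothetical PyTorch illustration; the tiny encoder, the half-precision cast standing in for the paper's learned compression, and the mean fusion are simplifying assumptions, not the authors' architecture.

```python
# Hypothetical sketch of "send compressed intermediate features, then fuse".
# Shapes, modules, and the compression scheme are illustrative assumptions.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Stand-in for a bird's-eye-view LiDAR feature extractor."""

    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, kernel_size=3, padding=1),
        )

    def forward(self, bev):
        return self.net(bev)


def compress(feat):
    # Crude proxy for a learned compressor: halve the payload by casting
    # to float16 before broadcasting the feature map to nearby vehicles.
    return feat.half()


def aggregate(received):
    # Permutation-invariant fusion of feature maps from nearby vehicles,
    # assuming they were already warped into the ego vehicle's frame.
    stacked = torch.stack([f.float() for f in received], dim=0)
    return stacked.mean(dim=0)


if __name__ == "__main__":
    encoder = TinyEncoder()
    # Fake BEV rasters: the ego vehicle plus two nearby senders.
    sweeps = [torch.randn(1, 3, 64, 64) for _ in range(3)]
    messages = [compress(encoder(s)) for s in sweeps]
    fused = aggregate(messages)  # input to detection / forecasting heads
    print(fused.shape)           # torch.Size([1, 32, 64, 64])
```

Because the fusion is an average over however many maps arrive, the sketch handles a varying number of nearby vehicles; V2VNet additionally compensates for relative pose and time delay before aggregating, which is omitted here.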
Related papers
- Tapping in a Remote Vehicle's onboard LLM to Complement the Ego Vehicle's Field-of-View [1.701722696403793]
We propose a concept to complement the ego vehicle's field-of-view (FOV) with another vehicle's FOV by tapping into its onboard large language models (LLMs).
Our results show that very recent versions of LLMs, such as GPT-4V and GPT-4o, understand a traffic situation to an impressive level of detail and hence can even be used to spot traffic participants.
arXiv Detail & Related papers (2024-08-20T12:38:34Z) - Enhanced Cooperative Perception for Autonomous Vehicles Using Imperfect Communication [0.24466725954625887]
We propose a novel approach to realize an optimized Cooperative Perception (CP) under constrained communications.
At the core of our approach is recruiting the best helper from the available list of front vehicles to augment the visual range.
Our results demonstrate the efficacy of our two-step optimization process in improving the overall performance of cooperative perception.
arXiv Detail & Related papers (2024-04-10T15:37:15Z) - MSight: An Edge-Cloud Infrastructure-based Perception System for
Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z) - Learning for Vehicle-to-Vehicle Cooperative Perception under Lossy
Communication [30.100647849646467]
We study the side effects (e.g., drop in detection performance) caused by lossy communication in V2V cooperative perception.
We propose a novel intermediate LC-aware feature fusion method to relieve the side effects of lossy communication.
The proposed method is highly effective for cooperative point-cloud-based 3D object detection under lossy V2V communication.
arXiv Detail & Related papers (2022-12-16T04:18:47Z) - COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked
Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z) - Safety-Oriented Pedestrian Motion and Scene Occupancy Forecasting [91.69900691029908]
We advocate predicting both the individual motions and the scene occupancy map.
We propose a Scene-Actor Graph Neural Network (SA-GNN) which preserves the relative spatial information of pedestrians.
On two large-scale real-world datasets, we showcase that our scene-occupancy predictions are more accurate and better calibrated than those from state-of-the-art motion forecasting methods.
arXiv Detail & Related papers (2021-01-07T06:08:21Z) - Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data
Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z) - Two-Stream Networks for Lane-Change Prediction of Surrounding Vehicles [8.828423067460644]
In highway scenarios, an alert human driver will typically anticipate early cut-in and cut-out maneuvers of surrounding vehicles using only visual cues.
To deal with lane-change recognition and prediction of surrounding vehicles, we pose the problem as an action recognition/prediction problem by stacking visual cues from video cameras.
Two video action recognition approaches are analyzed: two-stream convolutional networks and multiplier networks.
arXiv Detail & Related papers (2020-08-25T07:59:15Z) - Cooperative Perception with Deep Reinforcement Learning for Connected
Vehicles [7.7003495898919265]
We present a cooperative perception scheme with deep reinforcement learning to enhance the detection accuracy of surrounding objects.
Our scheme mitigates the network load in vehicular communication networks and enhances the communication reliability.
arXiv Detail & Related papers (2020-04-23T01:44:12Z) - Parsing-based View-aware Embedding Network for Vehicle Re-Identification [138.11983486734576]
We propose a parsing-based view-aware embedding network (PVEN) to achieve the view-aware feature alignment and enhancement for vehicle ReID.
The experiments conducted on three datasets show that our model outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-10T13:06:09Z)