On the Role of Sensor Fusion for Object Detection in Future Vehicular Networks
- URL: http://arxiv.org/abs/2104.11785v1
- Date: Fri, 23 Apr 2021 18:58:37 GMT
- Title: On the Role of Sensor Fusion for Object Detection in Future Vehicular Networks
- Authors: Valentina Rossi, Paolo Testolina, Marco Giordani, Michele Zorzi
- Abstract summary: We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully autonomous driving systems require fast detection and recognition of
sensitive objects in the environment. In this context, intelligent vehicles
should share their sensor data with computing platforms and/or other vehicles,
to detect objects beyond their own sensors' fields of view. However, the
resulting huge volumes of data to be exchanged can be challenging to handle for
standard communication technologies. In this paper, we evaluate how using a
combination of different sensors affects the detection of the environment in
which the vehicles move and operate. The final objective is to identify the
optimal setup that would minimize the amount of data to be distributed over the
channel, with negligible degradation in terms of object detection accuracy. To
this aim, we extend an already available object detection algorithm so that it
can consider, as an input, camera images, LiDAR point clouds, or a combination
of the two, and compare the accuracy performance of the different approaches
using two realistic datasets. Our results show that, although sensor fusion
always achieves more accurate detections, LiDAR-only inputs can obtain similar
results for large objects while mitigating the burden on the channel.
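The three input configurations compared in the abstract (camera only, LiDAR only, or both) can be sketched as a detector wrapper that fuses features from whichever modalities are supplied. This is a minimal illustration, not the paper's actual model: the feature extractors below are stand-in stubs, and all names are hypothetical.

```python
# Hedged sketch of the three input setups compared in the paper
# (camera only, LiDAR only, fused). Extractors are stand-in stubs,
# not the paper's actual networks.
from typing import List, Optional

def extract_camera_features(image: List[float]) -> List[float]:
    # Stand-in for a CNN backbone over a camera image.
    return [sum(image) / len(image)]

def extract_lidar_features(points: List[float]) -> List[float]:
    # Stand-in for a point-cloud encoder over a LiDAR scan.
    return [max(points), min(points)]

def detect(image: Optional[List[float]] = None,
           points: Optional[List[float]] = None) -> List[float]:
    """Concatenate features from whichever modalities are available."""
    if image is None and points is None:
        raise ValueError("at least one modality is required")
    features: List[float] = []
    if image is not None:
        features += extract_camera_features(image)
    if points is not None:
        features += extract_lidar_features(points)
    return features  # a detection head would consume this vector

# The three setups evaluated in the paper:
camera_only = detect(image=[0.2, 0.4, 0.6])
lidar_only = detect(points=[1.0, 3.0, 2.0])
fused = detect(image=[0.2, 0.4, 0.6], points=[1.0, 3.0, 2.0])
```

The fused configuration simply carries more features to the detection head, which is why it costs more channel bandwidth when the raw inputs must be shared.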
Related papers
- Learning 3D Perception from Others' Predictions [64.09115694891679]
We investigate a new scenario to construct 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector.
For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area.
arXiv Detail & Related papers (2024-10-03T16:31:28Z)
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- Edge-Aided Sensor Data Sharing in Vehicular Communication Networks [8.67588704947974]
We consider sensor data sharing and fusion in a vehicular network with both vehicle-to-infrastructure and vehicle-to-vehicle communication.
We propose a method, named Bidirectional Feedback Noise Estimation (BiFNoE), in which an edge server collects and caches sensor measurement data from vehicles.
We show that the perception accuracy is on average improved by around 80% with only 12 kbps uplink and 28 kbps downlink bandwidth.
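To put the quoted 12 kbps uplink budget in perspective, a rough back-of-the-envelope comparison against a raw LiDAR stream is instructive. The sensor figures below are assumptions for illustration, not numbers from the paper:

```python
# Back-of-the-envelope check (assumed sensor figures, not from the paper):
# how a raw LiDAR stream compares with the reported 12 kbps uplink budget.
RAW_LIDAR_POINTS_PER_SCAN = 100_000  # assumed typical 64-beam scan
BYTES_PER_POINT = 16                 # assumed x, y, z, intensity as float32
SCANS_PER_SECOND = 10                # assumed 10 Hz sensor

raw_kbps = RAW_LIDAR_POINTS_PER_SCAN * BYTES_PER_POINT * 8 * SCANS_PER_SECOND / 1000
uplink_kbps = 12  # figure reported in the abstract summary

reduction = raw_kbps / uplink_kbps  # orders of magnitude under these assumptions
```

Under these assumptions the raw stream is on the order of 128 Mbps, so an edge-aided scheme operating at tens of kbps implies heavy local processing before transmission.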
arXiv Detail & Related papers (2022-06-17T16:30:56Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Comparative study of 3D object detection frameworks based on LiDAR data and sensor fusion techniques [0.0]
The perception system plays a significant role in providing an accurate interpretation of a vehicle's environment in real-time.
Deep learning techniques transform the huge amount of data from the sensors into semantic information.
3D object detection methods, by utilizing additional pose data from sensors such as LiDARs and stereo cameras, provide information on the size and location of the object.
arXiv Detail & Related papers (2022-02-05T09:34:58Z)
- Detecting and Identifying Optical Signal Attacks on Autonomous Driving Systems [25.32946739108013]
We propose a framework to detect and identify sensors that are under attack.
Specifically, we first develop a new technique to detect attacks on a system that consists of three sensors.
In our study, we use real data sets and the state-of-the-art machine learning model to evaluate our attack detection scheme.
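The idea of identifying which of three sensors is under attack can be illustrated with a generic cross-consistency check. This is an illustration of the redundancy principle, not the paper's detection technique, and the tolerance value is an assumption:

```python
# Generic cross-consistency check over three redundant readings
# (an illustration, not the paper's method): the sensor that disagrees
# with the other two is flagged as suspect.
def flag_outlier(readings, tol=0.5):
    """Return the index of the one inconsistent reading, or None."""
    for i in range(3):
        others = [readings[j] for j in range(3) if j != i]
        # the other two agree with each other, but not with sensor i
        if abs(others[0] - others[1]) <= tol and \
           all(abs(readings[i] - o) > tol for o in others):
            return i
    return None
```

For example, `flag_outlier([10.0, 10.2, 14.8])` flags the third sensor, while three consistent readings yield `None`.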
arXiv Detail & Related papers (2021-10-20T12:21:04Z)
- Radar Voxel Fusion for 3D Object Detection [0.0]
This paper develops a low-level sensor fusion network for 3D object detection.
The radar sensor fusion proves especially beneficial in inclement conditions such as rain and night scenes.
arXiv Detail & Related papers (2021-06-26T20:34:12Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
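What an extrinsic calibration ultimately provides is a rigid transform (rotation R, translation t) mapping one sensor's frame into the other's. The snippet below shows the generic math, not the paper's calibration method; the offset values are assumed for illustration:

```python
# Applying an extrinsic calibration (generic rigid-body math, not the
# paper's estimation method): p_cam = R @ p_lidar + t, written without numpy.
def apply_extrinsics(point, R, t):
    return [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]

# Identity rotation and an assumed 0.1 m lateral offset between sensors.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.1, 0.0, 0.0]
p_cam = apply_extrinsics([2.0, 0.5, 1.0], R, t)
```

Estimating R and t accurately for sensors with very different resolutions and poses is exactly what the cited method addresses.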
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- High-Precision Digital Traffic Recording with Multi-LiDAR Infrastructure Sensor Setups [0.0]
We investigate the impact of fused LiDAR point clouds compared to single LiDAR point clouds.
The evaluation of the extracted trajectories shows that a fused infrastructure approach significantly improves the tracking results, reaching accuracies within a few centimeters.
arXiv Detail & Related papers (2020-06-22T10:57:52Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.