Synthesizing and Identifying Noise Levels in Autonomous Vehicle Camera Radar Datasets
- URL: http://arxiv.org/abs/2505.00584v1
- Date: Thu, 01 May 2025 15:15:50 GMT
- Title: Synthesizing and Identifying Noise Levels in Autonomous Vehicle Camera Radar Datasets
- Authors: Mathis Morales, Golnaz Habibi
- Abstract summary: We create a realistic synthetic data augmentation pipeline for camera-radar Autonomous Vehicle datasets. Our goal is to accurately simulate sensor failures and data deterioration due to real-world interferences. We also present results from a baseline lightweight Noise Recognition neural network trained and tested on our augmented dataset.
- Score: 2.435006380732195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting and tracking objects is a crucial component of any autonomous navigation method. Over the past decades, object detection with neural networks has yielded promising results on various datasets. While many methods focus on performance metrics, few projects focus on improving the robustness of these detection and tracking pipelines, notably to sensor failures. In this paper we address this issue by creating a realistic synthetic data augmentation pipeline for camera-radar Autonomous Vehicle (AV) datasets. Our goal is to accurately simulate sensor failures and data deterioration due to real-world interferences. We also present results for a baseline lightweight Noise Recognition neural network trained and tested on our augmented dataset, reaching an overall recognition accuracy of 54.4% on 11 categories across 10,086 images and 2,145 radar point clouds.
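The paper's pipeline itself is not reproduced here; the following is a minimal sketch of the kind of degradations the abstract describes, simulating camera noise and failure plus radar point-cloud dropout with NumPy. The function names, noise categories, and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def augment_camera(image: np.ndarray, noise: str,
                   rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Apply a simulated degradation to an HxWx3 uint8 camera image."""
    img = image.astype(np.float32)
    if noise == "gaussian":        # electronic sensor noise
        img += rng.normal(0.0, 15.0, img.shape)
    elif noise == "blackout":      # total sensor failure
        img[:] = 0.0
    elif noise == "occlusion":     # dirt or a water droplet on the lens
        h, w = img.shape[:2]
        y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
        img[y:y + h // 4, x:x + w // 4] = 0.0
    return np.clip(img, 0, 255).astype(np.uint8)

def augment_radar(points: np.ndarray, drop_prob: float = 0.3,
                  pos_sigma: float = 0.2,
                  rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Randomly drop returns and jitter positions of an Nx3 radar point cloud."""
    keep = rng.random(len(points)) > drop_prob          # simulated missed detections
    return points[keep] + rng.normal(0.0, pos_sigma, (int(keep.sum()), 3))
```

A noise-recognition network in the spirit of the paper would then be trained to predict which of these categories, if any, was applied to each sample.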
Related papers
- Multi-Object Tracking based on Imaging Radar 3D Object Detection [0.13499500088995461]
This paper presents an approach for tracking surrounding traffic participants with a classical tracking algorithm.
Learning-based object detectors have been shown to work adequately on lidar and camera data, while detectors using standard radar input have proven inferior.
Imaging radars have greatly improved radar-based object detection, but performance remains limited compared to lidar sensors because of the sparsity of the radar point cloud.
The tracking algorithm must overcome the limited detection quality while generating consistent tracks; a minimal tracking sketch follows this entry.
arXiv Detail & Related papers (2024-06-03T05:46:23Z)
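The entry above calls its tracker classical without naming it; the textbook choice for tracking sparse detections is a constant-velocity Kalman filter. The sketch below is a generic 2-D filter with illustrative noise parameters, not the paper's implementation.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter with state [x, y, vx, vy]."""

    def __init__(self, x0, dt: float = 0.1, q: float = 1.0, r: float = 0.5):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0])
        self.P = np.eye(4) * 10.0            # initial state uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt     # constant-velocity motion model
        self.H = np.eye(2, 4)                # we observe position only
        self.Q = np.eye(4) * q               # process noise
        self.R = np.eye(2) * r               # measurement noise

    def predict(self) -> np.ndarray:
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z: np.ndarray) -> None:
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

A full tracker would wrap many such filters in track management (association, birth, and deletion), which is where the entry's requirement of consistent tracks is actually met.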
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Super-Resolution Radar Imaging with Sparse Arrays Using a Deep Neural Network Trained with Enhanced Virtual Data [0.4640835690336652]
This paper introduces a method based on a deep neural network (DNN) that is capable of processing radar data from extremely thinned radar apertures.
The proposed DNN processing can provide both aliasing-free radar imaging and super-resolution.
It simultaneously delivers nearly the same resolution and image quality as would be achieved with a fully occupied array.
arXiv Detail & Related papers (2023-06-16T13:37:47Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions directly to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of the input point clouds; a clustering sketch follows this entry.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
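The entry mentions distance-dependent clustering as radar-specific pre-processing. Since radar returns thin out with range, one plausible reading is to let the clustering radius grow with distance; the sketch below does this with scikit-learn's DBSCAN over range bins. The bin width, eps schedule, and min_samples values are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_by_range(points: np.ndarray, base_eps: float = 0.5,
                     eps_per_meter: float = 0.02) -> np.ndarray:
    """Cluster an Nx2 radar point cloud with a radius that grows with range.

    A fixed DBSCAN eps either over-merges near points or fragments far ones,
    because point density falls with distance; scaling eps with range is one
    simple remedy. Unclustered points keep the label -1.
    """
    ranges = np.linalg.norm(points, axis=1)
    labels = np.full(len(points), -1)
    next_label = 0
    for lo in range(0, int(ranges.max()) + 1, 20):       # 20 m range bins
        mask = (ranges >= lo) & (ranges < lo + 20)
        if mask.sum() < 2:
            continue
        eps = base_eps + eps_per_meter * (lo + 10)       # radius grows with range
        bin_labels = DBSCAN(eps=eps, min_samples=2).fit_predict(points[mask])
        bin_labels[bin_labels >= 0] += next_label        # keep labels globally unique
        labels[mask] = bin_labels
        next_label = max(next_label, labels.max() + 1)
    return labels
```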
- Collaborative Learning with a Drone Orchestrator [79.75113006257872]
A swarm of intelligent wireless devices trains a shared neural network model with the help of a drone.
The proposed framework achieves a significant training speedup, saving an average of 24% and 87% of the drone hovering time; a federated-averaging sketch follows this entry.
arXiv Detail & Related papers (2023-03-03T23:46:25Z)
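The entry describes devices training a shared model through a drone orchestrator, for which the standard pattern is federated averaging: broadcast the global weights, train locally on each device, and average the results. The sketch below shows that loop for a toy linear model; it is a generic FedAvg illustration, not the paper's protocol.

```python
import numpy as np

def local_step(w: np.ndarray, X: np.ndarray, y: np.ndarray,
               lr: float = 0.01) -> np.ndarray:
    """One gradient step of least-squares regression on a device's local data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(global_w: np.ndarray, device_data) -> np.ndarray:
    """Drone-as-orchestrator round: broadcast, train locally, average back."""
    local_ws = [local_step(global_w.copy(), X, y) for X, y in device_data]
    return np.mean(local_ws, axis=0)          # FedAvg aggregation

rng = np.random.default_rng(0)
w = np.zeros(3)
devices = [(rng.normal(size=(32, 3)), rng.normal(size=32)) for _ in range(5)]
for _ in range(100):
    w = federated_round(w, devices)           # drone hovers only during rounds
```

The hovering-time savings reported in the entry would come from optimizing scheduling and communication around this basic loop.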
- The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization [41.58739817444644]
The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, and the channel response between a 5G massive multiple-input multiple-output (MIMO) testbed and user equipment.
We synchronize these sensors to ensure that all data is recorded simultaneously; a timestamp-alignment sketch follows this entry.
The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks.
arXiv Detail & Related papers (2023-02-10T15:12:40Z)
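Synchronizing heterogeneous sensor streams typically means aligning each sample of one stream to the nearest timestamp of a reference stream and rejecting matches with excessive skew. The sketch below is a generic nearest-timestamp matcher, assuming sorted timestamp arrays in seconds; the max_skew threshold is illustrative, not a LuViRA specification.

```python
import numpy as np

def align_to_reference(ref_ts: np.ndarray, other_ts: np.ndarray,
                       max_skew: float = 0.02) -> np.ndarray:
    """For each reference timestamp, return the index of the nearest sample
    in the other (sorted) stream, or -1 if the gap exceeds max_skew seconds."""
    idx = np.searchsorted(other_ts, ref_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left, right = other_ts[idx - 1], other_ts[idx]
    nearest = np.where(ref_ts - left < right - ref_ts, idx - 1, idx)
    ok = np.abs(other_ts[nearest] - ref_ts) <= max_skew
    return np.where(ok, nearest, -1)
```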
- SimuShips -- A High Resolution Simulation Dataset for Ship Detection with Precise Annotations [0.0]
State-of-the-art obstacle detection algorithms are based on convolutional neural networks (CNNs).
SimuShips is a publicly available simulation-based dataset for maritime environments.
arXiv Detail & Related papers (2022-09-22T07:33:31Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal; a complex-convolution sketch follows this entry.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
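A complex-valued convolution can be assembled from four real convolutions, since (a + ib) * (u + iv) = (au - bv) + i(av + bu); this is how complex layers are commonly realized on top of real-valued primitives, and it is what lets phase information survive filtering. The sketch below is a minimal NumPy illustration, not the paper's CVCNN architecture.

```python
import numpy as np

def complex_conv1d(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Complex 1-D convolution built from four real convolutions.

    For x = a + ib and w = u + iv:
        x * w = (a*u - b*v) + i(a*v + b*u)
    """
    a, b = x.real, x.imag
    u, v = w.real, w.imag
    real = np.convolve(a, u, mode="same") - np.convolve(b, v, mode="same")
    imag = np.convolve(a, v, mode="same") + np.convolve(b, u, mode="same")
    return real + 1j * imag

# A complex baseband tone keeps its phase structure after complex filtering.
t = np.arange(256)
signal = np.exp(1j * 0.2 * np.pi * t)        # radar-like complex baseband signal
kernel = np.ones(5, dtype=complex) / 5       # simple complex moving average
filtered = complex_conv1d(signal, kernel)
```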
- On the Role of Sensor Fusion for Object Detection in Future Vehicular Networks [25.838878314196375]
We evaluate how using a combination of different sensors affects the detection of the environment in which the vehicles move and operate.
The final objective is to identify the optimal setup that would minimize the amount of data to be distributed over the channel.
arXiv Detail & Related papers (2021-04-23T18:58:37Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion; an early-fusion sketch follows this entry.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
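Voxel-based early fusion typically means rasterizing each sensor's point cloud into a shared bird's-eye-view grid and stacking the grids as input channels, so a detector sees both modalities from its first layer. The sketch below is a minimal occupancy-grid version; the grid extent and resolution are illustrative, and RadarNet's actual voxel features are richer than plain occupancy.

```python
import numpy as np

def bev_occupancy(points: np.ndarray, extent: float = 50.0,
                  res: float = 0.5) -> np.ndarray:
    """Rasterize an Nx2+ point cloud into a square bird's-eye-view grid."""
    size = int(2 * extent / res)
    grid = np.zeros((size, size), dtype=np.float32)
    ij = ((points[:, :2] + extent) / res).astype(int)   # meters -> cell indices
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 0], ij[ok, 1]] = 1.0
    return grid

def early_fusion_bev(lidar_pts: np.ndarray, radar_pts: np.ndarray) -> np.ndarray:
    """Stack per-sensor BEV grids as channels for a downstream detector."""
    return np.stack([bev_occupancy(lidar_pts), bev_occupancy(radar_pts)])
```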
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper presents a data-driven approach for detecting miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required; a reprojection-shift sketch follows this entry.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
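One natural miscalibration metric is the mean pixel displacement that a perturbed intrinsic matrix induces when re-projecting rays across the image. The sketch below computes such a metric for a synthetic 2% focal-length error; it is an assumed formulation in the spirit of the entry, not necessarily the authors' exact metric.

```python
import numpy as np

def mean_pixel_shift(K_true: np.ndarray, K_bad: np.ndarray,
                     width: int = 1280, height: int = 720,
                     step: int = 40) -> float:
    """Average displacement (pixels) between points projected with the true
    and a perturbed intrinsic matrix, over a grid of unit-depth rays."""
    us, vs = np.meshgrid(np.arange(0, width, step), np.arange(0, height, step))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(us.size)])
    rays = np.linalg.inv(K_true) @ pix        # back-project with true intrinsics
    reproj = K_bad @ rays                     # re-project with the bad ones
    reproj = reproj[:2] / reproj[2]
    return float(np.mean(np.linalg.norm(reproj - pix[:2], axis=0)))

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
K_off = K.copy()
K_off[0, 0] *= 1.02                           # 2% focal-length error
print(mean_pixel_shift(K, K_off))             # threshold this to flag recalibration
```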
- Extraction and Assessment of Naturalistic Human Driving Trajectories from Infrastructure Camera and Radar Sensors [0.0]
We present a novel methodology to extract trajectories of traffic objects using infrastructure sensors.
Our vision pipeline accurately detects objects, fuses camera and radar detections and tracks them over time.
We show that our sensor fusion approach successfully combines the advantages of camera and radar detections and outperforms either sensor alone; a simple fusion sketch follows this entry.
arXiv Detail & Related papers (2020-04-02T22:28:29Z)
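The simplest way to combine camera and radar position estimates of the same object is inverse-variance weighting, which plays radar's precise ranging against the camera's finer lateral localization. The sketch below is a generic scalar-variance version with illustrative variances, an illustration rather than the paper's fusion method.

```python
import numpy as np

def fuse_camera_radar(cam_xy, radar_xy,
                      cam_var: float = 2.0, radar_var: float = 0.25) -> np.ndarray:
    """Inverse-variance fusion of two 2-D position estimates of one object.

    Each sensor's estimate is weighted by 1/variance, so the more certain
    measurement dominates; per-axis variances would refine this further.
    """
    w_cam, w_rad = 1.0 / cam_var, 1.0 / radar_var
    return (w_cam * np.asarray(cam_xy, float) +
            w_rad * np.asarray(radar_xy, float)) / (w_cam + w_rad)

print(fuse_camera_radar([10.3, 4.9], [10.0, 5.2]))
```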