Pointillism: Accurate 3D bounding box estimation with multi-radars
- URL: http://arxiv.org/abs/2203.04440v1
- Date: Tue, 8 Mar 2022 23:09:58 GMT
- Title: Pointillism: Accurate 3D bounding box estimation with multi-radars
- Authors: Kshitiz Bansal, Keshav Rungta, Siyuan Zhu, Dinesh Bharadia
- Abstract summary: We introduce Pointillism, a system that combines data from multiple spatially separated radars, placed at an optimal separation, to mitigate the noise and sparsity of single-radar point clouds.
We present RP-net, a novel deep learning architecture designed explicitly for radar's sparse data distribution, to enable accurate 3D bounding box estimation.
- Score: 6.59119432945925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous perception requires high-quality environment sensing in the form
of 3D bounding boxes of dynamic objects. The primary sensors used in automotive
systems are light-based cameras and LiDARs. However, they are known to fail in
adverse weather conditions. Radars can potentially solve this problem as they
are barely affected by adverse weather conditions. However, specular
reflections of wireless signals degrade the quality of radar point clouds.
We introduce Pointillism, a system that combines data from multiple spatially
separated radars with an optimal separation to mitigate these problems. We
introduce a novel concept of Cross Potential Point Clouds, which uses the
spatial diversity induced by multiple radars and solves the problem of noise
and sparsity in radar point clouds. Furthermore, we present RP-net, a novel
deep learning architecture designed explicitly for radar's sparse data
distribution, to enable accurate 3D bounding box estimation. The
spatial techniques designed and proposed in this paper are fundamental to
radar point cloud distributions and would benefit other radar sensing
applications.
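The Cross Potential Point Cloud construction itself is defined in the paper; purely to illustrate the spatial-diversity intuition described above, here is a minimal numpy sketch that fuses two radar point clouds in a common frame and drops points supported by only one vantage point. All names and the `radius` threshold are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def to_common_frame(points, rotation, translation):
    """Map an (N, 3) radar point cloud into a shared world frame."""
    return points @ rotation.T + translation

def cross_validated_cloud(cloud_a, cloud_b, radius=0.5):
    """Fuse two radars' point clouds, keeping mutually corroborated points.

    Specular ghosts typically appear from only one vantage point, so a
    point with no neighbor from the other radar within `radius` meters
    is treated as noise and dropped.
    """
    dist_ab, _ = cKDTree(cloud_b).query(cloud_a, k=1)  # a's nearest in b
    dist_ba, _ = cKDTree(cloud_a).query(cloud_b, k=1)  # b's nearest in a
    return np.vstack([cloud_a[dist_ab < radius], cloud_b[dist_ba < radius]])
```

The wider the baseline between the radars, the less likely a specular artifact survives this mutual check, which is one way to read the abstract's emphasis on optimal radar separation.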
Related papers
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data [8.552647576661174]
Millimeter-wave radar sensors maintain stable performance under adverse environmental conditions.
However, radar point clouds are relatively sparse and contain numerous ghost points.
We propose a novel point cloud super-resolution approach for 3D mmWave radar data, named Radar-diffusion.
arXiv Detail & Related papers (2024-04-09T04:41:05Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors (a minimal sketch of this sampling step follows this entry).
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
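EchoFusion's exact query mechanism is not given in the summary above; as a minimal sketch, assume each BEV query simply pulls an interpolated feature from a radar range-azimuth spectrum. All names and bin conventions here are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_spectrum_at_bev(spectrum, xy, r_max):
    """Sample range-azimuth spectrum features at BEV query positions.

    spectrum: (num_range_bins, num_az_bins) radar feature map.
    xy:       (N, 2) query positions in meters, radar at the origin.
    Returns one bilinearly interpolated feature per query.
    """
    num_range_bins, num_az_bins = spectrum.shape
    r = np.hypot(xy[:, 0], xy[:, 1])          # range of each query point
    az = np.arctan2(xy[:, 1], xy[:, 0])       # azimuth in [-pi, pi)
    r_idx = r / r_max * (num_range_bins - 1)  # meters -> fractional bin index
    az_idx = (az + np.pi) / (2 * np.pi) * (num_az_bins - 1)  # assumes bins cover the full circle
    return map_coordinates(spectrum, [r_idx, az_idx], order=1, mode='nearest')
```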
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies point cloud convolutions to radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds (a generic sketch of the clustering idea follows this entry).
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
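The distance-dependent clustering is only named in the summary above; a generic sketch of the idea, assuming DBSCAN run per range band with a neighborhood radius that grows with range (all parameter values are hypothetical):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_by_range(points, base_eps=0.5, eps_per_meter=0.02, band_width=10.0):
    """Cluster radar detections with a range-dependent neighborhood size.

    Radar point density drops with distance, so a fixed DBSCAN `eps`
    over-segments far objects; here each range band gets a larger `eps`.
    Returns one label per point, with -1 marking noise.
    """
    ranges = np.linalg.norm(points[:, :2], axis=1)
    labels = np.full(len(points), -1)
    next_label = 0
    for lo in np.arange(0.0, ranges.max() + band_width, band_width):
        mask = (ranges >= lo) & (ranges < lo + band_width)
        if not mask.any():
            continue
        eps = base_eps + eps_per_meter * (lo + band_width / 2)
        band_labels = DBSCAN(eps=eps, min_samples=2).fit_predict(points[mask])
        band_labels[band_labels >= 0] += next_label  # keep labels globally unique
        labels[mask] = band_labels
        next_label = labels.max() + 1
    return labels
```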
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets receive little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy (a hypothetical sketch follows this entry).
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
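The layout of RadarFormer's merging module is not given in the summary above; a hypothetical PyTorch sketch, assuming the raw radar tensor carries separate channel, chirp, and time axes that can be folded into one axis and compressed with a 1x1 convolution:

```python
import torch
import torch.nn as nn

class ChannelChirpTimeMerge(nn.Module):
    """Fold channel, chirp, and time axes into one axis, then compress it.

    Input:  (batch, channels, chirps, time, range, azimuth) radar tensor.
    Output: (batch, out_channels, range, azimuth); the merged axis is
    reduced by a single 1x1 convolution rather than processed separately.
    """
    def __init__(self, channels, chirps, time, out_channels):
        super().__init__()
        self.reduce = nn.Conv2d(channels * chirps * time, out_channels, kernel_size=1)

    def forward(self, x):
        b, c, k, t, r, a = x.shape
        x = x.reshape(b, c * k * t, r, a)  # merge channel/chirp/time into one axis
        return self.reduce(x)
```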
- Radar Occupancy Prediction with Lidar Supervision while Preserving Long-Range Sensing and Penetrating Capabilities [10.133923613929575]
Recent works have made enormous progress in classifying free and occupied spaces in radar images by leveraging lidar label supervision.
However, the sensing range of the predictions is limited by the sensing range of the lidar.
Some objects visible to lidar are invisible to radar, and some objects occluded in lidar scans are visible in radar images because of the radar's penetrating capability.
arXiv Detail & Related papers (2021-12-08T13:38:58Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal (a minimal sketch of a complex-valued convolution follows this entry).
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
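A complex-valued convolution is conventionally built from two real-valued convolutions via the complex product; a minimal PyTorch sketch of that standard construction (layer names are illustrative; bias is omitted so the complex arithmetic stays exact):

```python
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Complex convolution from two real convolutions.

    For x = x_re + i*x_im and weights w = w_re + i*w_im,
    x * w = (x_re*w_re - x_im*w_im) + i*(x_re*w_im + x_im*w_re),
    which preserves the phase structure of the radar signal.
    """
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_re = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.conv_im = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding, bias=False)

    def forward(self, x_re, x_im):
        out_re = self.conv_re(x_re) - self.conv_im(x_im)
        out_im = self.conv_im(x_re) + self.conv_re(x_im)
        return out_re, out_im
```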
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features voxel-based early fusion and attention-based late fusion (a hypothetical sketch of the latter follows this entry).
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
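RadarNet's late fusion operates on object detections; the following is only a simplified, hypothetical feature-map version, sketching how a learned softmax could weight LiDAR against radar per BEV cell:

```python
import torch
import torch.nn as nn

class AttentionLateFusion(nn.Module):
    """Fuse LiDAR and radar BEV feature maps with learned per-cell attention.

    A 1x1 convolution scores each sensor at every BEV cell; a softmax over
    the two scores lets the fused map lean on radar where LiDAR is degraded
    (e.g. in bad weather) and vice versa.
    """
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(2 * channels, 2, kernel_size=1)  # one logit per sensor

    def forward(self, lidar_feat, radar_feat):
        # lidar_feat, radar_feat: (batch, channels, H, W) BEV feature maps
        logits = self.score(torch.cat([lidar_feat, radar_feat], dim=1))
        w = torch.softmax(logits, dim=1)
        return w[:, :1] * lidar_feat + w[:, 1:] * radar_feat
```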
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.