R2P: A Deep Learning Model from mmWave Radar to Point Cloud
- URL: http://arxiv.org/abs/2207.10690v1
- Date: Thu, 21 Jul 2022 18:01:05 GMT
- Title: R2P: A Deep Learning Model from mmWave Radar to Point Cloud
- Authors: Yue Sun, Honggang Zhang, Zhuoming Huang, and Benyuan Liu
- Abstract summary: Radar to Point Cloud (R2P) is a deep learning model that generates smooth, dense, and highly accurate point cloud representation of a 3D object.
R2P replaces Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system.
- Score: 14.803119281557995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has shown the effectiveness of mmWave radar sensing for
object detection in low visibility environments, which makes it an ideal
technique in autonomous navigation systems. In this paper, we introduce Radar
to Point Cloud (R2P), a deep learning model that generates smooth, dense, and
highly accurate point cloud representation of a 3D object with fine geometry
details, based on the rough and sparse point clouds, containing spurious
points, obtained from mmWave radar. These input point clouds are converted
from 2D depth images generated from raw mmWave radar sensor data, and are
characterized by inconsistency as well as orientation and shape errors. R2P utilizes an architecture
of two sequential deep learning encoder-decoder blocks to extract the essential
features of those radar-based input point clouds of an object when observed
from multiple viewpoints, and to ensure the internal consistency of a generated
output point cloud and its accurate and detailed shape reconstruction of the
original object. We implement R2P to replace Stage 2 of our recently proposed
3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system. Our experiments
demonstrate the significant performance improvement of R2P over the popular
existing methods such as PointNet, PCN, and the original 3DRIMR design.
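The abstract describes R2P as two sequential encoder-decoder blocks that fuse multi-view radar point clouds into one dense output cloud. A minimal NumPy sketch of that pipeline shape is given below; it is not the authors' code, and all layer sizes, the max-pooling fusion, and the random weights are illustrative assumptions (the encoder here is PointNet-style: a shared per-point MLP followed by max pooling).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(points: np.ndarray, w: np.ndarray) -> np.ndarray:
    """PointNet-style encoder: shared per-point MLP + max pooling."""
    feats = np.maximum(points @ w, 0.0)   # (N, d) per-point features
    return feats.max(axis=0)              # (d,) global feature

def decode(code: np.ndarray, w: np.ndarray, n_out: int) -> np.ndarray:
    """Decoder: map the global feature to n_out 3D points."""
    out = np.maximum(code @ w, 0.0)       # (n_out * 3,) coordinates
    return out.reshape(n_out, 3)

# Inputs: sparse point clouds of one object seen from multiple
# viewpoints (random stand-ins here for the radar-derived clouds).
views = [rng.normal(size=(64, 3)) for _ in range(4)]

# Block 1: encode each view, fuse the features, decode a coarse cloud.
w_enc1 = rng.normal(size=(3, 128))
w_dec1 = rng.normal(size=(128, 256 * 3))
fused = np.stack([encode(v, w_enc1) for v in views]).max(axis=0)
coarse = decode(fused, w_dec1, 256)       # (256, 3) coarse cloud

# Block 2: re-encode the coarse cloud and decode a denser, smoother one.
w_enc2 = rng.normal(size=(3, 128))
w_dec2 = rng.normal(size=(128, 1024 * 3))
dense = decode(encode(coarse, w_enc2), w_dec2, 1024)
print(dense.shape)  # (1024, 3)
```

The point of the two-block design, per the abstract, is that the first block extracts features consistent across viewpoints while the second enforces internal consistency and fine geometry in the output cloud.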
Related papers
- GET-UP: GEomeTric-aware Depth Estimation with Radar Points UPsampling [7.90238039959534]
Existing algorithms process radar data by projecting 3D points onto the image plane for pixel-level feature extraction.
We propose GET-UP, leveraging attention-enhanced Graph Neural Networks (GNN) to exchange and aggregate both 2D and 3D information from radar data.
We benchmark our proposed GET-UP on the nuScenes dataset, achieving state-of-the-art performance with a 15.3% and 14.7% improvement in MAE and RMSE over the previously best-performing model.
arXiv Detail & Related papers (2024-09-02T14:15:09Z)
- PointRegGPT: Boosting 3D Point Cloud Registration using Generative Point-Cloud Pairs for Training [90.06520673092702]
We present PointRegGPT, boosting 3D point cloud registration using generative point-cloud pairs for training.
To our knowledge, this is the first generative approach that explores realistic data generation for indoor point cloud registration.
arXiv Detail & Related papers (2024-07-19T06:29:57Z)
- Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data [8.552647576661174]
The millimeter-wave radar sensor maintains stable performance under adverse environmental conditions.
However, radar point clouds are relatively sparse and contain many ghost points.
We propose a novel point cloud super-resolution approach for 3D mmWave radar data, named Radar-diffusion.
arXiv Detail & Related papers (2024-04-09T04:41:05Z)
- 3D Reconstruction of Multiple Objects by mmWave Radar on UAV [15.47494720280318]
We explore the feasibility of utilizing a mmWave radar sensor installed on a UAV to reconstruct the 3D shapes of multiple objects in a space.
The UAV hovers at various locations in the space, and its onboard radar sensor collects raw radar data by scanning the space with Synthetic Aperture Radar (SAR) operation.
The radar data is sent to a deep neural network model, which outputs the point cloud reconstruction of the multiple objects in the space.
arXiv Detail & Related papers (2022-11-03T21:23:36Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
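The entry above notes that most completion methods train with the Chamfer Distance (CD) loss. As a reference point, the symmetric Chamfer Distance between two point clouds can be sketched as follows; the squared-distance variant and the array shapes are illustrative choices, not taken from the paper.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3):
    the mean squared nearest-neighbor distance from p to q, plus the same
    from q to p."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Identical clouds have zero distance.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, a))  # 0.0
```

Because CD only matches each point to its nearest neighbor, it can tolerate uneven point density, which is one motivation the PDR paper gives for adding a refinement stage.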
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately sought to densify, denoise, and complete inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a refinement via transformers that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- DeepPoint: A Deep Learning Model for 3D Reconstruction in Point Clouds via mmWave Radar [10.119506666546663]
We introduce in this paper DeepPoint, a deep learning model that generates 3D objects in point cloud format.
It takes as input the 2D depth images of an object generated by 3DRIMR's Stage 1, and outputs smooth and dense 3D point clouds of the object.
Our experiments have demonstrated that this model significantly outperforms the original 3DRIMR and other standard techniques in reconstructing 3D objects.
arXiv Detail & Related papers (2021-09-19T18:28:20Z)
- 3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep Learning [9.26903816093995]
mmWave radar has been shown to be an effective sensing technique in low-visibility, smoky, dusty, and dense-fog environments.
We propose 3D Reconstruction and Imaging via mmWave Radar (3DRIMR), a deep learning based architecture that reconstructs 3D shape of an object in dense detailed point cloud format.
Our experiments have demonstrated 3DRIMR's effectiveness in reconstructing 3D objects, and its performance improvement over standard techniques.
arXiv Detail & Related papers (2021-08-05T21:24:57Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.