DeepPoint: A Deep Learning Model for 3D Reconstruction in Point Clouds
via mmWave Radar
- URL: http://arxiv.org/abs/2109.09188v1
- Date: Sun, 19 Sep 2021 18:28:20 GMT
- Title: DeepPoint: A Deep Learning Model for 3D Reconstruction in Point Clouds
via mmWave Radar
- Authors: Yue Sun, Honggang Zhang, Zhuoming Huang, and Benyuan Liu
- Abstract summary: We introduce in this paper DeepPoint, a deep learning model that generates 3D objects in point cloud format.
It takes as input the 2D depth images of an object generated by 3DRIMR's Stage 1, and outputs smooth and dense 3D point clouds of the object.
Our experiments have demonstrated that this model significantly outperforms the original 3DRIMR and other standard techniques in reconstructing 3D objects.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has shown that mmWave radar sensing is effective for object
detection in low visibility environments, which makes it an ideal technique in
autonomous navigation systems such as autonomous vehicles. However, due to the
characteristics of radar signals such as sparsity, low resolution, specularity,
and high noise, it is still quite challenging to reconstruct 3D object shapes
via mmWave radar sensing. Built on our recently proposed 3DRIMR (3D
Reconstruction and Imaging via mmWave Radar), we introduce in this paper
DeepPoint, a deep learning model that generates 3D objects in point cloud
format that significantly outperforms the original 3DRIMR design. The model
adopts a conditional Generative Adversarial Network (GAN) based deep neural
network architecture. It takes as input the 2D depth images of an object
generated by 3DRIMR's Stage 1, and outputs smooth and dense 3D point clouds of
the object. The model consists of a novel generator network that uses a
sequence of DeepPoint blocks or layers to extract essential features of the
union of multiple rough and sparse input point clouds of an object observed
from various viewpoints. Those input point clouds may contain many incorrect
points due to the imperfect generation process of
3DRIMR's Stage 1. The design of DeepPoint adopts a deep structure to capture
the global features of input point clouds, and it relies on an optimally chosen
number of DeepPoint blocks and skip connections to achieve performance
improvement over the original 3DRIMR design. Our experiments have demonstrated
that this model significantly outperforms the original 3DRIMR and other
standard techniques in reconstructing 3D objects.
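The generator described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the paper does not publish the exact layer definitions, so the block structure, feature dimension, and global max-pooling step here are all assumptions loosely modeled on point-cloud networks with residual connections.

```python
import numpy as np

def deep_point_block(x, w1, w2):
    """Hypothetical 'DeepPoint block': a per-point two-layer MLP
    with a residual (skip) connection, applied to an (N, C) feature map."""
    h = np.maximum(x @ w1, 0.0)          # per-point linear + ReLU
    return np.maximum(h @ w2 + x, 0.0)   # second linear + skip connection

def generator(points, num_blocks=4, feat_dim=64, rng=None):
    """Sketch of a DeepPoint-style generator: lift the union of coarse
    input points into feature space, refine through stacked blocks with
    skip connections, fuse a global (max-pooled) feature to capture the
    overall shape, and regress smoothed 3D coordinates."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = points.shape[0]
    w_in = rng.standard_normal((3, feat_dim)) * 0.1
    w_out = rng.standard_normal((2 * feat_dim, 3)) * 0.1
    x = np.maximum(points @ w_in, 0.0)           # (N, feat_dim)
    for _ in range(num_blocks):
        w1 = rng.standard_normal((feat_dim, feat_dim)) * 0.1
        w2 = rng.standard_normal((feat_dim, feat_dim)) * 0.1
        x = deep_point_block(x, w1, w2)
    g = x.max(axis=0, keepdims=True)             # global shape feature
    x = np.concatenate([x, np.repeat(g, n, axis=0)], axis=1)
    return x @ w_out                             # (N, 3) refined points

# Union of two coarse, noisy views of one object (random stand-ins here)
views = np.concatenate([np.random.rand(128, 3), np.random.rand(128, 3)])
out = generator(views)
print(out.shape)  # (256, 3)
```

In the actual conditional GAN, a discriminator would score generated point clouds against ground truth during training; that half of the model, and all learned weights, are omitted from this sketch.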
Related papers
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity, evenly distributed 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- R2P: A Deep Learning Model from mmWave Radar to Point Cloud [14.803119281557995]
Radar to Point Cloud (R2P) is a deep learning model that generates a smooth, dense, and highly accurate point cloud representation of a 3D object.
R2P replaces Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system.
arXiv Detail & Related papers (2022-07-21T18:01:05Z)
- RBGNet: Ray-based Grouping for 3D Object Detection [104.98776095895641]
We propose the RBGNet framework, a voting-based 3D detector for accurate 3D object detection from point clouds.
We propose a ray-based feature grouping module, which aggregates the point-wise features on object surfaces using a group of determined rays.
Our model achieves state-of-the-art 3D detection performance on ScanNet V2 and SUN RGB-D with remarkable performance gains.
arXiv Detail & Related papers (2022-04-05T14:42:57Z)
- CG-SSD: Corner Guided Single Stage 3D Object Detection from LiDAR Point Cloud [4.110053032708927]
In a real-world scene, LiDAR can only acquire a limited set of object surface points, and the center point of an object is not directly observed.
We propose a corner-guided anchor-free single-stage 3D object detection model (CG-SSD).
CG-SSD achieves state-of-the-art performance on the ONCE benchmark for supervised 3D object detection using single-frame point cloud data.
arXiv Detail & Related papers (2022-02-24T02:30:15Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately sought to densify, denoise, and complete inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) refinement via transformers that convert the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and 3D convolution networks.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Anchor-free 3D Single Stage Detector with Mask-Guided Attention for Point Cloud [79.39041453836793]
We develop a novel single-stage 3D detector for point clouds in an anchor-free manner.
We convert the voxel-based sparse 3D feature volumes into sparse 2D feature maps.
We propose an IoU-based detection confidence re-calibration scheme to improve the correlation between the detection confidence score and the accuracy of the bounding box regression.
arXiv Detail & Related papers (2021-08-08T13:42:13Z)
- 3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep Learning [9.26903816093995]
mmWave radar has been shown to be an effective sensing technique in low-visibility environments such as smoke, dust, and dense fog.
We propose 3D Reconstruction and Imaging via mmWave Radar (3DRIMR), a deep learning based architecture that reconstructs the 3D shape of an object in a dense, detailed point cloud format.
Our experiments have demonstrated 3DRIMR's effectiveness in reconstructing 3D objects, and its performance improvement over standard techniques.
arXiv Detail & Related papers (2021-08-05T21:24:57Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- VR3Dense: Voxel Representation Learning for 3D Object Detection and Monocular Dense Depth Reconstruction [0.951828574518325]
We introduce a method for jointly training 3D object detection and monocular dense depth reconstruction neural networks.
It takes as input a LiDAR point cloud and a single RGB image during inference, and produces object pose predictions as well as a densely reconstructed depth map.
While our object detection is trained in a supervised manner, the depth prediction network is trained with both self-supervised and supervised loss functions.
arXiv Detail & Related papers (2021-04-13T04:25:54Z)
- Stereo RGB and Deeper LIDAR Based Network for 3D Object Detection [40.34710686994996]
3D object detection is an emerging task in autonomous driving scenarios.
Previous works process 3D point clouds using either projection-based or voxel-based models.
We propose the Stereo RGB and Deeper LIDAR framework which can utilize semantic and spatial information simultaneously.
arXiv Detail & Related papers (2020-06-09T11:19:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.