3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep
Learning
- URL: http://arxiv.org/abs/2108.02858v1
- Date: Thu, 5 Aug 2021 21:24:57 GMT
- Title: 3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep
Learning
- Authors: Yue Sun, Zhuoming Huang, Honggang Zhang, Zhi Cao, Deqiang Xu
- Abstract summary: mmWave radar has been shown to be an effective sensing technique in low-visibility, smoky, dusty, and dense-fog environments.
We propose 3D Reconstruction and Imaging via mmWave Radar (3DRIMR), a deep learning based architecture that reconstructs the 3D shape of an object in a dense, detailed point cloud format.
Our experiments have demonstrated 3DRIMR's effectiveness in reconstructing 3D objects, and its performance improvement over standard techniques.
- Score: 9.26903816093995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: mmWave radar has been shown to be an effective sensing technique
in low-visibility, smoky, dusty, and dense-fog environments. However, tapping
the potential of radar sensing to reconstruct 3D object shapes remains a great
challenge, due to characteristics of radar data such as sparsity, low
resolution, specularity, high noise, and multi-path induced shadow reflections
and artifacts. In this paper we propose 3D Reconstruction and Imaging via
mmWave Radar (3DRIMR), a deep learning based architecture that reconstructs
the 3D shape of an object in a dense, detailed point cloud format from sparse
raw mmWave radar intensity data. The architecture consists of two back-to-back
conditional GAN deep neural networks: the first generator network generates 2D
depth images from raw radar intensity data, and the second generator network
outputs 3D point clouds based on the results of the first generator. The
architecture exploits both the convolution operation of convolutional neural
networks (which extracts local structural neighborhood information) and the
efficiency and detailed geometry capture capability of point clouds (rather
than costly voxelization of 3D space or distance fields). Our experiments have
demonstrated 3DRIMR's effectiveness in reconstructing 3D objects and its
performance improvement over standard techniques.
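The two back-to-back generators can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the layer choices, tensor shapes, and the fixed point-count output of Stage 2 are all assumptions made for the sketch, and the conditional-GAN discriminators used during training are omitted.

```python
# Minimal sketch of 3DRIMR's two back-to-back generators.
# All layer sizes and shapes are hypothetical; discriminators omitted.
import torch
import torch.nn as nn

class Stage1Generator(nn.Module):
    """Raw radar intensity map -> 2D depth image."""
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),       # encode
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(), # decode
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, radar_intensity):   # (B, 1, H, W)
        return self.net(radar_intensity)  # (B, 1, H, W) depth image

class Stage2Generator(nn.Module):
    """2D depth image -> dense 3D point cloud with a fixed number of points."""
    def __init__(self, num_points=4096):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # (B, 128) global feature
        )
        self.decoder = nn.Linear(128, num_points * 3)

    def forward(self, depth):             # (B, 1, H, W)
        feat = self.encoder(depth)
        return self.decoder(feat).view(-1, self.num_points, 3)  # (B, N, 3)

# Back-to-back use: radar intensity -> depth image -> point cloud.
radar = torch.rand(2, 1, 64, 64)
depth = Stage1Generator()(radar)
cloud = Stage2Generator()(depth)
print(cloud.shape)  # torch.Size([2, 4096, 3])
```

In training, each generator would be paired with a discriminator and optimized with a conditional-GAN objective plus a reconstruction loss; only the generator data path is shown here.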
Related papers
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
- 3D Reconstruction of Multiple Objects by mmWave Radar on UAV [15.47494720280318]
We explore the feasibility of utilizing a mmWave radar sensor installed on a UAV to reconstruct the 3D shapes of multiple objects in a space.
The UAV hovers at various locations in the space, and its onboard radar sensor collects raw radar data by scanning the space with Synthetic Aperture Radar (SAR) operation.
The radar data is sent to a deep neural network model, which outputs the point cloud reconstruction of the multiple objects in the space.
arXiv Detail & Related papers (2022-11-03T21:23:36Z)
- Bridged Transformer for Vision and Point Cloud 3D Object Detection [92.86856146086316]
Bridged Transformer (BrT) is an end-to-end architecture for 3D object detection.
BrT learns to identify 3D and 2D object bounding boxes from both points and image patches.
We experimentally show that BrT surpasses state-of-the-art methods on SUN RGB-D and ScanNetV2 datasets.
arXiv Detail & Related papers (2022-10-04T05:44:22Z)
- mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for Millimeter Wave Radar [10.610455816814985]
Millimeter Wave (mmWave) Radar is gaining popularity as it can work in adverse environments like smoke, rain, snow, poor lighting, etc.
Prior work has explored the possibility of reconstructing 3D skeletons or meshes from the noisy and sparse mmWave Radar signals.
This dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes and skeleton/mesh annotations for humans in the scenes.
arXiv Detail & Related papers (2022-09-12T08:00:31Z)
- R2P: A Deep Learning Model from mmWave Radar to Point Cloud [14.803119281557995]
Radar to Point Cloud (R2P) is a deep learning model that generates smooth, dense, and highly accurate point cloud representation of a 3D object.
R2P replaces Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system.
arXiv Detail & Related papers (2022-07-21T18:01:05Z)
- VPFNet: Improving 3D Object Detection with Virtual Point based LiDAR and Stereo Data Fusion [62.24001258298076]
VPFNet is a new architecture that cleverly aligns and aggregates the point cloud and image data at "virtual" points.
Our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking 1st since May 21st, 2021.
arXiv Detail & Related papers (2021-11-29T08:51:20Z)
- DeepPoint: A Deep Learning Model for 3D Reconstruction in Point Clouds via mmWave Radar [10.119506666546663]
We introduce in this paper DeepPoint, a deep learning model that generates 3D objects in point cloud format.
It takes as input the 2D depth images of an object generated by 3DRIMR's Stage 1, and outputs smooth and dense 3D point clouds of the object.
Our experiments have demonstrated that this model significantly outperforms the original 3DRIMR and other standard techniques in reconstructing 3D objects (a common evaluation metric for such comparisons is sketched after this list).
arXiv Detail & Related papers (2021-09-19T18:28:20Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- VR3Dense: Voxel Representation Learning for 3D Object Detection and Monocular Dense Depth Reconstruction [0.951828574518325]
We introduce a method for jointly training 3D object detection and monocular dense depth reconstruction neural networks.
At inference it takes as inputs a LiDAR point cloud and a single RGB image, and produces object pose predictions as well as a densely reconstructed depth map.
While our object detection is trained in a supervised manner, the depth prediction network is trained with both self-supervised and supervised loss functions.
arXiv Detail & Related papers (2021-04-13T04:25:54Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
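Several of the reconstruction papers above (3DRIMR, R2P, DeepPoint) compare generated point clouds against ground truth. The abstracts here do not name the metric used; a standard choice for such comparisons is the Chamfer distance, sketched below purely as an illustration in the same PyTorch style as the earlier code.

```python
# Chamfer distance between two point clouds: a common metric for point
# cloud reconstruction quality (illustrative; the papers above may use
# different losses or metrics).
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """p: (B, N, 3), q: (B, M, 3) -> mean symmetric Chamfer distance."""
    diff = p.unsqueeze(2) - q.unsqueeze(1)  # (B, N, M, 3) pairwise offsets
    d2 = (diff ** 2).sum(-1)                # (B, N, M) squared distances
    # For each point, squared distance to its nearest neighbor in the other set,
    # averaged in both directions.
    return d2.min(dim=2).values.mean() + d2.min(dim=1).values.mean()

# Example: compare a reconstructed cloud against ground truth.
pred = torch.rand(2, 1024, 3)
gt = torch.rand(2, 1024, 3)
print(chamfer_distance(pred, gt))
```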