3D Reconstruction of Multiple Objects by mmWave Radar on UAV
- URL: http://arxiv.org/abs/2211.02150v1
- Date: Thu, 3 Nov 2022 21:23:36 GMT
- Title: 3D Reconstruction of Multiple Objects by mmWave Radar on UAV
- Authors: Yue Sun, Zhuoming Huang, Honggang Zhang, Xiaohui Liang
- Abstract summary: We explore the feasibility of utilizing a mmWave radar sensor installed on a UAV to reconstruct the 3D shapes of multiple objects in a space.
The UAV hovers at various locations in the space, and its onboard radar sensor collects raw radar data by scanning the space with Synthetic Aperture Radar (SAR) operation.
The radar data is sent to a deep neural network model, which outputs the point cloud reconstruction of the multiple objects in the space.
- Score: 15.47494720280318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore the feasibility of utilizing a mmWave radar sensor
installed on a UAV to reconstruct the 3D shapes of multiple objects in a space.
The UAV hovers at various locations in the space, and its onboard radar sensor
collects raw radar data by scanning the space with Synthetic Aperture Radar
(SAR) operation. The radar data is sent to a deep neural network model, which
outputs the point cloud reconstruction of the multiple objects in the space. We
evaluate two different models. Model 1 is our recently proposed 3DRIMR/R2P
model, and Model 2 is formed by adding a segmentation stage in the processing
pipeline of Model 1. Our experiments have demonstrated that both models are
promising in solving the multiple object reconstruction problem. We also show
that Model 2, despite producing denser and smoother point clouds, can lead to
higher reconstruction loss or even the loss of entire objects. In addition, we
find that both models are robust to the highly noisy radar data produced by
unstable SAR operation, caused by the instability or vibration of a small UAV
hovering at its
intended scanning point. Our exploratory study has shown a promising direction
of applying mmWave radar sensing in 3D object reconstruction.
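To make the comparison concrete, below is a minimal PyTorch sketch of the two pipeline shapes described in the abstract, not the authors' implementation: Model 1 maps radar views directly to a point cloud, while Model 2 inserts a segmentation stage that masks each object before reconstruction. All class names, layer sizes, view counts, and point counts are illustrative assumptions; the actual 3DRIMR/R2P architectures are defined in the cited papers.

```python
import torch
import torch.nn as nn


class RadarToPointCloud(nn.Module):
    """Stand-in for Model 1 (3DRIMR/R2P-style): radar intensity views collected
    at several UAV hovering positions -> a single N x 3 point cloud."""

    def __init__(self, in_views: int = 4, n_points: int = 1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_views, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.decoder = nn.Linear(64, n_points * 3)
        self.n_points = n_points

    def forward(self, radar_views: torch.Tensor) -> torch.Tensor:
        # radar_views: (B, in_views, H, W) SAR intensity maps
        return self.decoder(self.encoder(radar_views)).view(-1, self.n_points, 3)


class SegmentThenReconstruct(nn.Module):
    """Stand-in for Model 2: a segmentation stage first splits the scene into
    per-object regions, then each masked region is reconstructed separately."""

    def __init__(self, in_views: int = 4, n_objects: int = 2):
        super().__init__()
        self.segmenter = nn.Conv2d(in_views, n_objects, kernel_size=1)
        self.reconstructor = RadarToPointCloud(in_views=in_views)

    def forward(self, radar_views: torch.Tensor) -> torch.Tensor:
        masks = torch.softmax(self.segmenter(radar_views), dim=1)  # soft per-object masks
        clouds = [self.reconstructor(radar_views * masks[:, i : i + 1])
                  for i in range(masks.shape[1])]
        return torch.cat(clouds, dim=1)  # merged multi-object point cloud


# Example: 4 radar views of a 64x64 scene containing 2 objects.
x = torch.randn(8, 4, 64, 64)
print(RadarToPointCloud()(x).shape)        # torch.Size([8, 1024, 3])
print(SegmentThenReconstruct()(x).shape)   # torch.Size([8, 2048, 3])
```

This factoring also makes the reported trade-off plausible: per-object masking can yield a denser, smoother cloud for each object, but an incorrect mask can raise the reconstruction loss or drop an object entirely.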
Related papers
- UniBEVFusion: Unified Radar-Vision BEVFusion for 3D Object Detection [2.123197540438989]
Many radar-vision fusion models treat radar as a sparse LiDAR, underutilizing radar-specific information.
We propose the Radar Depth Lift-Splat-Shoot (RDL) module, which integrates radar-specific data into the depth prediction process.
We also introduce a Unified Feature Fusion (UFF) approach that extracts BEV features across different modalities.
arXiv Detail & Related papers (2024-09-23T06:57:27Z) - VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z) - Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data [8.552647576661174]
Millimeter-wave radar sensors maintain stable performance under adverse environmental conditions.
However, radar point clouds are relatively sparse and contain a large number of ghost points.
We propose a novel point cloud super-resolution approach for 3D mmWave radar data, named Radar-diffusion.
arXiv Detail & Related papers (2024-04-09T04:41:05Z) - Reviewing 3D Object Detectors in the Context of High-Resolution 3+1D
Radar [0.7279730418361995]
High-resolution imaging 4D (3+1D) radar sensors have deep learning-based radar perception research.
We investigate deep learning-based models operating on radar point clouds for 3D object detection.
arXiv Detail & Related papers (2023-08-10T10:10:43Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - R2P: A Deep Learning Model from mmWave Radar to Point Cloud [14.803119281557995]
Radar to Point Cloud (R2P) is a deep learning model that generates a smooth, dense, and highly accurate point cloud representation of a 3D object.
R2P replaces Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system.
arXiv Detail & Related papers (2022-07-21T18:01:05Z) - Fully Convolutional One-Stage 3D Object Detection on LiDAR Range Images [96.66271207089096]
FCOS-LiDAR is a fully convolutional one-stage 3D object detector for LiDAR point clouds of autonomous driving scenes.
We show that a range-view (RV) based 3D detector with standard 2D convolutions alone can achieve performance comparable to state-of-the-art BEV-based detectors.
arXiv Detail & Related papers (2022-05-27T05:42:16Z) - 3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep
Learning [9.26903816093995]
mmWave radar has been shown to be an effective sensing technique in low-visibility, smoky, dusty, and dense-fog environments.
We propose 3D Reconstruction and Imaging via mmWave Radar (3DRIMR), a deep learning based architecture that reconstructs the 3D shape of an object in a dense, detailed point cloud format.
Our experiments have demonstrated 3DRIMR's effectiveness in reconstructing 3D objects, and its performance improvement over standard techniques.
arXiv Detail & Related papers (2021-08-05T21:24:57Z) - PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object
Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z) - Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.