mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for
Millimeter Wave Radar
- URL: http://arxiv.org/abs/2209.05070v3
- Date: Thu, 21 Sep 2023 10:11:03 GMT
- Title: mmBody Benchmark: 3D Body Reconstruction Dataset and Analysis for
Millimeter Wave Radar
- Authors: Anjun Chen, Xiangyu Wang, Shaohao Zhu, Yanxu Li, Jiming Chen, Qi Ye
- Abstract summary: Millimeter Wave (mmWave) Radar is gaining popularity as it can work in adverse environments like smoke, rain, snow, poor lighting, etc.
Prior work has explored the possibility of reconstructing 3D skeletons or meshes from the noisy and sparse mmWave Radar signals.
This dataset consists of synchronized and calibrated mmWave radar point clouds and RGB(D) images in different scenes and skeleton/mesh annotations for humans in the scenes.
- Score: 10.610455816814985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Millimeter Wave (mmWave) Radar is gaining popularity as it can work in
adverse environments like smoke, rain, snow, poor lighting, etc. Prior work has
explored the possibility of reconstructing 3D skeletons or meshes from the
noisy and sparse mmWave Radar signals. However, it is unclear how accurately the
3D body can be reconstructed from mmWave signals across scenes, or how this
compares with cameras; these are important aspects to consider when using mmWave
radars alone or when combining them with cameras. To answer these questions, an
automatic 3D body annotation system is
first designed and built with multiple sensors to collect a large-scale
dataset. The dataset consists of synchronized and calibrated mmWave radar point
clouds and RGB(D) images in different scenes and skeleton/mesh annotations for
humans in the scenes. With this dataset, we train state-of-the-art methods with
inputs from different sensors and test them in various scenarios. The results
demonstrate that 1) despite the noise and sparsity of the generated point
clouds, the mmWave radar can achieve better reconstruction accuracy than the
RGB camera but worse than the depth camera; 2) the reconstruction from the
mmWave radar is only moderately affected by adverse weather conditions, while the
RGB(D) camera is severely affected. Further, analysis of the dataset and the
results sheds light on improving the reconstruction from the mmWave radar
and on combining signals from different sensors.
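To make the dataset structure described above concrete (synchronized radar point clouds, RGB(D) images, and skeleton/mesh annotations per frame), the following is a minimal loading sketch. The directory layout, file names, and annotation keys are assumptions for illustration only; they are not the released mmBody format.

# Minimal sketch of iterating over a synchronized radar/RGB(D)/mesh dataset of the
# kind described above. All paths, file names, and annotation keys are assumed for
# illustration; they are NOT the released mmBody format.
from pathlib import Path
import numpy as np

def load_frame(frame_dir: Path) -> dict:
    """Load one synchronized, calibrated frame."""
    return {
        # Radar point cloud: N x C array (x, y, z plus per-point features) -- assumed file name
        "radar_points": np.load(frame_dir / "radar.npy"),
        # RGB and depth images stored as arrays -- assumed file names
        "rgb": np.load(frame_dir / "rgb.npy"),
        "depth": np.load(frame_dir / "depth.npy"),
        # Ground-truth body annotation: skeleton joints and mesh vertices -- assumed keys
        "annotation": dict(np.load(frame_dir / "mesh.npz")),
    }

def iter_sequence(seq_dir: str):
    """Yield frames of one capture sequence in file-name (timestamp) order."""
    for frame_dir in sorted(p for p in Path(seq_dir).iterdir() if p.is_dir()):
        yield load_frame(frame_dir)

if __name__ == "__main__":
    for frame in iter_sequence("mmbody/lab/seq_000"):  # assumed example path
        joints = frame["annotation"]["joints"]  # assumed key: K x 3 skeleton joints
        print(frame["radar_points"].shape, joints.shape)
        break

A loader along these lines would let the same frames be fed to radar-only, RGB-only, or fused reconstruction models for the cross-sensor comparison described in the abstract.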
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - DIDLM: A Comprehensive Multi-Sensor Dataset with Infrared Cameras, Depth Cameras, LiDAR, and 4D Millimeter-Wave Radar in Challenging Scenarios for 3D Mapping [7.050468075029598]
This study presents a comprehensive multi-sensor dataset designed for 3D mapping in challenging indoor and outdoor environments.
The dataset comprises data from infrared cameras, depth cameras, LiDAR, and 4D millimeter-wave radar.
Various SLAM algorithms are employed to process the dataset, revealing performance differences among algorithms in different scenarios.
arXiv Detail & Related papers (2024-04-15T09:49:33Z) - Differentiable Radio Frequency Ray Tracing for Millimeter-Wave Sensing [29.352303349003165]
We propose DiffSBR, a differentiable framework for mmWave-based 3D reconstruction.
DiffSBR incorporates a differentiable ray tracing engine to simulate radar point clouds from virtual 3D models.
Experiments using various radar hardware validate DiffSBR's capability for fine-grained 3D reconstruction.
arXiv Detail & Related papers (2023-11-22T06:13:39Z) - Vision meets mmWave Radar: 3D Object Perception Benchmark for Autonomous
Driving [30.456314610767667]
We introduce the CRUW3D dataset, including 66K synchronized and well-calibrated camera, radar, and LiDAR frames.
This kind of format can enable machine learning models to produce more reliable perception results after fusing the information or features from the camera and radar.
arXiv Detail & Related papers (2023-11-17T01:07:37Z) - Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z) - CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for
Robust 3D Object Detection [12.557361522985898]
We propose a camera-radar matching network CramNet to fuse the sensor readings from camera and radar in a joint 3D space.
Our method supports training with sensor modality dropout, which leads to robust 3D object detection, even when a camera or radar sensor suddenly malfunctions on a vehicle.
arXiv Detail & Related papers (2022-10-17T17:18:47Z) - 3DRIMR: 3D Reconstruction and Imaging via mmWave Radar based on Deep
Learning [9.26903816093995]
mmWave radar has been shown to be an effective sensing technique in low-visibility, smoky, dusty, and dense-fog environments.
We propose 3D Reconstruction and Imaging via mmWave Radar (3DRIMR), a deep-learning-based architecture that reconstructs the 3D shape of an object as a dense, detailed point cloud.
Our experiments demonstrate 3DRIMR's effectiveness in reconstructing 3D objects and its performance improvement over standard techniques.
arXiv Detail & Related papers (2021-08-05T21:24:57Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the main reasons preventing the direct application of existing fusion methods.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.