Enabling Visual Recognition at Radio Frequency
- URL: http://arxiv.org/abs/2405.19516v1
- Date: Wed, 29 May 2024 20:52:59 GMT
- Title: Enabling Visual Recognition at Radio Frequency
- Authors: Haowen Lai, Gaoxiang Luo, Yifei Liu, Mingmin Zhao
- Abstract summary: PanoRadar is a novel RF imaging system that brings RF resolution close to that of LiDAR.
Results enable, for the first time, a variety of visual recognition tasks at radio frequency.
Our results demonstrate PanoRadar's robust performance across 12 buildings.
- Score: 13.399148413043411
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces PanoRadar, a novel RF imaging system that brings RF resolution close to that of LiDAR, while providing resilience against conditions challenging for optical signals. Our LiDAR-comparable 3D imaging results enable, for the first time, a variety of visual recognition tasks at radio frequency, including surface normal estimation, semantic segmentation, and object detection. PanoRadar utilizes a rotating single-chip mmWave radar, along with a combination of novel signal processing and machine learning algorithms, to create high-resolution 3D images of the surroundings. Our system accurately estimates robot motion, allowing for coherent imaging through a dense grid of synthetic antennas. It also exploits the high azimuth resolution to enhance elevation resolution using learning-based methods. Furthermore, PanoRadar tackles 3D learning via 2D convolutions and addresses challenges due to the unique characteristics of RF signals. Our results demonstrate PanoRadar's robust performance across 12 buildings.
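The coherent imaging step described in the abstract, summing echoes across a dense grid of synthetic antenna positions, can be sketched as a backprojection over the estimated trajectory. This is a minimal illustrative sketch, not PanoRadar's actual implementation; the carrier frequency, function names, and parameters are assumptions.

```python
import numpy as np

C = 3e8            # speed of light (m/s)
FC = 77e9          # assumed mmWave carrier frequency (Hz)
WAVELEN = C / FC

def backproject(signals, antenna_pos, pixel_pos):
    """Coherently sum echoes over a grid of synthetic antennas.

    signals:     (N,) complex echo samples, one per antenna position
    antenna_pos: (N, 3) antenna positions recovered from motion estimation
    pixel_pos:   (3,) world coordinate of the pixel being imaged
    Returns the complex pixel value; |value| is the image intensity.
    """
    ranges = np.linalg.norm(antenna_pos - pixel_pos, axis=1)
    # Phase compensation for the round-trip path 2*r at the carrier wavelength:
    # echoes from a true scatterer at pixel_pos add in phase after this step.
    phase = np.exp(1j * 4 * np.pi * ranges / WAVELEN)
    return np.sum(signals * phase)
```

Accurate motion estimation matters here because any error in `antenna_pos` corrupts the compensating phase and defocuses the image.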
Related papers
- Redefining Automotive Radar Imaging: A Domain-Informed 1D Deep Learning Approach for High-Resolution and Efficient Performance [6.784861785632841]
Our study redefines radar imaging super-resolution as a one-dimensional (1D) signal super-resolution spectra estimation problem.
Our tailored deep learning network for automotive radar imaging exhibits remarkable scalability, parameter efficiency and fast inference speed.
Our SR-SPECNet sets a new benchmark in producing high-resolution radar range-azimuth images.
arXiv Detail & Related papers (2024-06-11T16:07:08Z)
- G3R: Generating Rich and Fine-grained mmWave Radar Data from 2D Videos for Generalized Gesture Recognition [19.95047010486547]
We develop a software pipeline that exploits the wealth of available 2D videos to generate realistic radar data.
It addresses the challenge of simulating diversified and fine-grained reflection properties of user gestures.
We implement and evaluate G3R using 2D videos from public data sources and self-collected real-world radar data.
arXiv Detail & Related papers (2024-04-23T11:22:59Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
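The voxel-based ray sampling strategy above can be sketched with a standard ray/AABB slab test plus uniform sampling inside the selected voxel. This is an illustrative sketch of the idea; the function names are ours, not the CVT-xRF implementation.

```python
import numpy as np

def ray_hits_voxel(origin, direction, vmin, vmax, eps=1e-9):
    """Slab test: does the ray origin + t*direction (t >= 0) hit the AABB?"""
    # Guard against division by zero for axis-aligned ray directions.
    inv = 1.0 / np.where(np.abs(direction) < eps, eps, direction)
    t0 = (vmin - origin) * inv
    t1 = (vmax - origin) * inv
    tnear = np.max(np.minimum(t0, t1))  # latest entry across the three slabs
    tfar = np.min(np.maximum(t0, t1))   # earliest exit across the three slabs
    return tfar >= max(tnear, 0.0)

def sample_points_in_voxel(vmin, vmax, n, rng):
    """Uniformly draw n additional sample points inside the voxel."""
    return vmin + rng.random((n, 3)) * (vmax - vmin)
```

Rays passing the slab test are guaranteed to intersect the voxel, so the extra in-voxel points share local 3D context with points already on those rays.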
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- DART: Implicit Doppler Tomography for Radar Novel View Synthesis [9.26298115522881]
DART is a Neural Radiance Field-inspired method which uses radar-specific physics to create a reflectance and transmittance-based rendering pipeline for range-Doppler images.
In comparison to state-of-the-art baselines, DART synthesizes superior radar range-Doppler images from novel views across all datasets.
arXiv Detail & Related papers (2024-03-06T17:54:50Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods struggle in scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Efficient CNN-based Super Resolution Algorithms for mmWave Mobile Radar Imaging [2.3623206450285457]
We introduce an innovative super resolution approach to emerging modes of near-field synthetic aperture radar (SAR) imaging.
Recent research extends convolutional neural network (CNN) architectures to achieve super resolution on images generated from radar signaling.
We propose a novel CNN architecture to achieve SAR image super-resolution for mobile applications by employing state-of-the-art SAR processing and deep learning techniques.
arXiv Detail & Related papers (2023-05-03T12:54:28Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
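The per-point density-and-color prediction described above can be sketched as a positional encoding followed by a small fully connected network. This is a structural toy with random weights, not NeRF-SR's trained model; all names and sizes are illustrative.

```python
import numpy as np

def positional_encoding(x, n_freqs=4):
    """Map each coordinate to [x, sin(2^k x), cos(2^k x)] features."""
    feats = [x]
    for k in range(n_freqs):
        feats.append(np.sin((2.0 ** k) * x))
        feats.append(np.cos((2.0 ** k) * x))
    return np.concatenate(feats, axis=-1)

def nerf_mlp(points, hidden=64, seed=0):
    """points: (N, 3) -> sigma: (N,), rgb: (N, 3), using random weights."""
    rng = np.random.default_rng(seed)
    h = positional_encoding(points)          # (N, 3 + 3*2*n_freqs)
    w1 = rng.normal(0, 0.1, (h.shape[-1], hidden))
    w2 = rng.normal(0, 0.1, (hidden, 4))     # 1 density + 3 color channels
    h = np.maximum(h @ w1, 0.0)              # ReLU hidden layer
    out = h @ w2
    sigma = np.log1p(np.exp(out[:, 0]))      # softplus keeps density >= 0
    rgb = 1.0 / (1.0 + np.exp(-out[:, 1:]))  # sigmoid keeps color in [0, 1]
    return sigma, rgb
```

The positional encoding lets the network represent high-frequency detail that a plain MLP on raw coordinates would smooth away.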
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Our extensions provide significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- Depth-supervised NeRF: Fewer Views and Faster Training for Free [66.16386801362643]
DS-NeRF is a loss for learning neural radiance fields that takes advantage of readily-available depth supervision.
We find that DS-NeRF can render more accurate images given fewer training views while training 2-6x faster.
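The idea of a depth-supervised radiance-field loss can be sketched as the usual photometric term plus a weighted penalty on rendered ray depth against sparse depth supervision. The variable names and weighting below are illustrative, not DS-NeRF's exact formulation.

```python
import numpy as np

def depth_supervised_loss(pred_rgb, gt_rgb, pred_depth, gt_depth, lam=0.1):
    """Photometric MSE plus a weighted depth MSE on rays with known depth.

    pred_rgb, gt_rgb:      (N, 3) rendered and ground-truth ray colors
    pred_depth, gt_depth:  (N,) rendered and supervising ray depths
    lam:                   weight trading color fidelity against depth fit
    """
    color_term = np.mean((pred_rgb - gt_rgb) ** 2)
    depth_term = np.mean((pred_depth - gt_depth) ** 2)
    return color_term + lam * depth_term
```

Because depth constrains where density concentrates along each ray, such a term can reduce the number of views needed to disambiguate geometry.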
arXiv Detail & Related papers (2021-07-06T17:58:35Z)
- Rethinking of Radar's Role: A Camera-Radar Dataset and Systematic Annotator via Coordinate Alignment [38.24705460170415]
We propose a new dataset, named CRUW, with a systematic annotator and performance evaluation system.
CRUW aims to classify and localize the objects in 3D purely from radar's radio frequency (RF) images.
To the best of our knowledge, CRUW is the first public large-scale dataset with a systematic annotation and evaluation system.
arXiv Detail & Related papers (2021-05-11T17:13:45Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.