mm-Wave Radar Hand Shape Classification Using Deformable Transformers
- URL: http://arxiv.org/abs/2210.13079v1
- Date: Mon, 24 Oct 2022 09:56:11 GMT
- Title: mm-Wave Radar Hand Shape Classification Using Deformable Transformers
- Authors: Athmanarayanan Lakshmi Narayanan, Asma Beevi K. T, Haoyang Wu, Jingyi
Ma, W. Margaret Huang
- Abstract summary: A novel, real-time, mm-Wave radar-based static hand shape classification algorithm and implementation are proposed.
The method finds several applications in low-cost, privacy-sensitive touchless control technology using 60 GHz radar as the sensor input.
- Score: 0.46007387171990594
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A novel, real-time, mm-Wave radar-based static hand shape classification
algorithm and implementation are proposed. The method finds several
applications in low cost and privacy sensitive touchless control technology
using 60 GHz radar as the sensor input. As opposed to prior Range-Doppler-image-based 2D classification solutions, our method converts raw radar data to 3D sparse Cartesian point clouds. The demonstrated 3D radar neural network model
using deformable transformers significantly surpasses the performance results
set by prior methods which either utilize custom signal processing or apply
generic convolutional techniques on Range-Doppler FFT images. Experiments are
performed on an internally collected dataset using an off-the-shelf radar
sensor.
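
As a concrete picture of the front end, here is a minimal sketch of how a raw FMCW radar cube might be converted into the kind of sparse 3D Cartesian point cloud the abstract describes. All shapes, resolutions, and the fixed detection threshold are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical pipeline: raw FMCW radar cube -> sparse 3D Cartesian points.
import numpy as np

def radar_cube_to_point_cloud(adc, margin_db=12.0):
    """adc: complex FMCW cube, shape (n_rx, n_chirps, n_samples)."""
    n_rx, n_chirps, n_samples = adc.shape
    # Range FFT over fast time, then Doppler FFT over slow time.
    rd = np.fft.fft(adc * np.hanning(n_samples), axis=2)
    rd = np.fft.fftshift(np.fft.fft(rd, axis=1), axes=1)
    # Angle FFT across the receive array (assumes a uniform linear array).
    rda = np.fft.fftshift(np.fft.fft(rd, n=64, axis=0), axes=0)
    power = 20.0 * np.log10(np.abs(rda) + 1e-12)
    # Keep cells a fixed margin above the mean level (a crude CFAR stand-in).
    az_bin, dop_bin, rng_bin = np.nonzero(power > power.mean() + margin_db)
    # Polar (range, azimuth) -> Cartesian x/y; a single linear array gives
    # no elevation, so z is left at zero in this sketch.
    az = np.arcsin((az_bin - 32) / 32.0)
    rng = rng_bin * 0.05                  # assumed 5 cm range resolution
    points = np.stack([rng * np.sin(az), rng * np.cos(az),
                       np.zeros_like(rng)], axis=1)
    return points, dop_bin                # xyz points + per-point Doppler bin
```

In the paper's pipeline, point clouds of this kind are then fed to the deformable-transformer classifier; a proper CFAR detector would replace the fixed threshold in practice.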
Related papers
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- TransRAD: Retentive Vision Transformer for Enhanced Radar Object Detection [6.163747364795787]
We present TransRAD, a novel 3D radar object detection model.
We propose Location-Aware NMS to mitigate the common issue of duplicate bounding boxes in deep radar object detection (a minimal sketch follows this entry).
Results demonstrate that TransRAD outperforms state-of-the-art methods in both 2D and 3D radar detection tasks.
arXiv Detail & Related papers (2025-01-29T20:21:41Z)
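
The summary does not spell out how Location-Aware NMS works, so the following is a hypothetical reading: greedy NMS whose suppression test also gates on center distance, on the assumption that radar duplicates both overlap and nearly coincide in position. Names and thresholds are illustrative:

```python
import numpy as np

def location_aware_nms(boxes, scores, iou_thr=0.5, center_thr=2.0):
    """boxes: (N, 4) as [x1, y1, x2, y2]; scores: (N,). Returns kept indices."""
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i, rest = order[0], order[1:]
        keep.append(i)
        # IoU between the highest-scoring box and the remainder.
        lt = np.maximum(boxes[i, :2], boxes[rest, :2])
        rb = np.minimum(boxes[i, 2:], boxes[rest, 2:])
        wh = np.clip(rb - lt, 0.0, None)
        inter = wh[:, 0] * wh[:, 1]
        iou = inter / (areas[i] + areas[rest] - inter)
        # Duplicate radar detections overlap AND sit at nearly the same spot,
        # so suppression requires both a high IoU and a small center distance.
        dist = np.linalg.norm(centers[rest] - centers[i], axis=1)
        order = rest[~((iou > iou_thr) & (dist < center_thr))]
    return keep
```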
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields, a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements (a toy sketch follows this entry).
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
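
To make the "implicit neural geometry and reflectance model" concrete, here is a deliberately simplified toy in PyTorch: an MLP field queried along a ray, with a transmittance-weighted accumulation into range bins. The architecture, sampling, and physics terms are stand-ins, not the paper's model:

```python
import torch
import torch.nn as nn

class RadarFieldToy(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # -> (occupancy logit, reflectance logit)
        )

    def render_range_profile(self, origin, direction, n_bins=64, bin_size=0.1):
        # Sample one 3D point per range bin along the ray.
        t = (torch.arange(n_bins) + 0.5) * bin_size
        pts = origin + t[:, None] * direction          # (n_bins, 3)
        occ_logit, refl_logit = self.mlp(pts).unbind(-1)
        occ = torch.sigmoid(occ_logit)
        refl = torch.sigmoid(refl_logit)
        # Transmittance: energy surviving to each bin, i.e. the cumulative
        # product of free space so far (the physics-informed part of the toy).
        trans = torch.cumprod(torch.cat([occ.new_ones(1), 1 - occ[:-1]]), dim=0)
        return trans * occ * refl                      # echo power per range bin
```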
- Diffusion-Based Point Cloud Super-Resolution for mmWave Radar Data [8.552647576661174]
Millimeter-wave radar sensors maintain stable performance under adverse environmental conditions.
Radar point clouds, however, are relatively sparse and contain many ghost points.
We propose a novel point cloud super-resolution approach for 3D mmWave radar data, named Radar-diffusion (a toy denoising-step sketch follows this entry).
arXiv Detail & Related papers (2024-04-09T04:41:05Z)
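
For readers unfamiliar with the diffusion part, this is a textbook DDPM reverse step applied to point coordinates. The network, schedule, and conditioning used by Radar-diffusion are not given in this summary; `denoiser` is a hypothetical noise-prediction model, in the real method conditioned on the sparse radar input:

```python
import torch

def ddpm_reverse_step(x_t, t, denoiser, betas):
    """x_t: (N, 3) noisy points at step t; betas: (T,) noise schedule,
    e.g. torch.linspace(1e-4, 0.02, T)."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    eps_hat = denoiser(x_t, t)                       # predicted noise, (N, 3)
    coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_hat) / torch.sqrt(alphas[t])
    if t == 0:
        return mean                                  # final, noise-free sample
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise       # sample x_{t-1}
```

Iterating this from t = T-1 down to 0, starting from Gaussian noise, yields the densified point cloud.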
- DART: Implicit Doppler Tomography for Radar Novel View Synthesis [9.26298115522881]
DART is a Neural Radiance Field-inspired method which uses radar-specific physics to create a reflectance- and transmittance-based rendering pipeline for range-Doppler images.
In comparison to state-of-the-art baselines, DART synthesizes superior radar range-Doppler images from novel views across all datasets.
arXiv Detail & Related papers (2024-03-06T17:54:50Z)
- Differentiable Radio Frequency Ray Tracing for Millimeter-Wave Sensing [29.352303349003165]
We propose DiffSBR, a differentiable framework for mmWave-based 3D reconstruction.
DiffSBR incorporates a differentiable ray tracing engine to simulate radar point clouds from virtual 3D models.
Experiments using various radar hardware validate DiffSBR's capability for fine-grained 3D reconstruction.
arXiv Detail & Related papers (2023-11-22T06:13:39Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate Bird's Eye View (BEV) queries and then take the corresponding spectrum features from the radar to fuse with other sensors (a minimal cross-attention sketch follows this entry).
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
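
A minimal sketch of the BEV-query idea: learnable BEV queries cross-attend to flattened radar spectrum features, and could likewise attend to image features. Dimensions and the single attention layer are illustrative assumptions, not EchoFusion's actual architecture:

```python
import torch
import torch.nn as nn

class BEVQueryFusion(nn.Module):
    def __init__(self, n_queries=50 * 50, dim=128, heads=8):
        super().__init__()
        self.bev_queries = nn.Parameter(torch.randn(n_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, spectrum_feats):
        """spectrum_feats: (B, n_range_doppler_cells, dim) radar features."""
        b = spectrum_feats.shape[0]
        q = self.bev_queries.unsqueeze(0).expand(b, -1, -1)
        # Each BEV cell pulls the spectrum evidence relevant to its location,
        # skipping the conventional radar signal processing pipeline.
        bev, _ = self.attn(q, spectrum_feats, spectrum_feats)
        return bev  # (B, n_queries, dim), ready to fuse with camera features
```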
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of the input point clouds (a minimal clustering sketch follows this entry).
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
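
One way to realize distance-dependent clustering: radar point density drops with range, so the DBSCAN neighbourhood is widened for far points by scaling pairwise distances with the mean range of each pair. The scaling rule and parameters are assumptions; the paper's exact scheme is not given in this summary:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def range_aware_dbscan(points, eps=0.5, min_samples=3, alpha=0.05):
    """points: (N, 2) radar detections in sensor-centric x/y coordinates."""
    rng = np.linalg.norm(points, axis=1)             # range of each detection
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    # Shrink the effective distance between far-away pairs, which is
    # equivalent to letting eps grow linearly with range.
    d_scaled = d / (1.0 + alpha * (rng[:, None] + rng[None, :]) / 2.0)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(d_scaled)
    return labels                                    # -1 marks noise points
```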
- T-FFTRadNet: Object Detection with Swin Vision Transformers from Raw ADC Radar Signals [0.0]
Object detection utilizing Frequency Modulated Continuous Wave radar is becoming increasingly popular in the field of autonomous systems.
Radar does not share the drawbacks of other emission-based sensors such as LiDAR, primarily the degradation or loss of return signals in weather such as rain or snow.
We introduce hierarchical Swin Vision transformers to the field of radar object detection and show their capability to operate on inputs varying in pre-processing, along with different radar configurations (a sketch of the conventional pre-processing such models can replace follows this entry).
arXiv Detail & Related papers (2023-03-29T18:04:19Z)
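
For context, this is the textbook FMCW range-Doppler transform that raw-ADC models bypass or learn; window choice and shapes are illustrative:

```python
import numpy as np

def range_doppler_map(adc):
    """adc: complex (n_chirps, n_samples) from one antenna; fast time last."""
    n_chirps, n_samples = adc.shape
    win = np.hanning(n_samples)
    rng = np.fft.fft(adc * win, axis=1)              # range FFT (fast time)
    rd = np.fft.fft(rng * np.hanning(n_chirps)[:, None], axis=0)
    rd = np.fft.fftshift(rd, axes=0)                 # center zero Doppler
    return 20.0 * np.log10(np.abs(rd) + 1e-12)       # power in dB
```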
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features voxel-based early fusion and attention-based late fusion (a minimal early-fusion sketch follows this entry).
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
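
A minimal sketch of voxel-based early fusion: LiDAR and radar points are scattered into the same BEV grid and stacked as channels, so a single backbone sees both modalities from its first layer. Grid size and per-point features are assumptions; RadarNet's exact voxelization is not given in this summary:

```python
import numpy as np

def bev_early_fusion(lidar_xyz, radar_xyv, grid=256, cell=0.5):
    """lidar_xyz: (N, 3); radar_xyv: (M, 3) as x, y, radial velocity."""
    bev = np.zeros((2, grid, grid), dtype=np.float32)
    half = grid * cell / 2.0
    for ch, pts, feat in ((0, lidar_xyz[:, :2], np.ones(len(lidar_xyz))),
                          (1, radar_xyv[:, :2], radar_xyv[:, 2])):
        ij = ((pts + half) / cell).astype(int)
        ok = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
        # Channel 0: LiDAR occupancy; channel 1: radar radial velocity.
        # Last write wins for colliding points, fine for a sketch.
        bev[ch, ij[ok, 1], ij[ok, 0]] = feat[ok]
    return bev  # (2, grid, grid) input for a shared convolutional backbone
```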
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.