HDNet: Hierarchical Dynamic Network for Gait Recognition using
Millimeter-Wave Radar
- URL: http://arxiv.org/abs/2211.00312v1
- Date: Tue, 1 Nov 2022 07:34:22 GMT
- Title: HDNet: Hierarchical Dynamic Network for Gait Recognition using
Millimeter-Wave Radar
- Authors: Yanyan Huang, Yong Wang, Kun Shi, Chaojie Gu, Yu Fu, Cheng Zhuo,
Zhiguo Shi
- Abstract summary: We propose a Hierarchical Dynamic Network (HDNet) for gait recognition using mmWave radar.
To demonstrate the superiority of our method, we perform extensive experiments on two public mmWave radar-based gait recognition datasets.
- Score: 13.19744551082316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gait recognition is widely used in diversified practical applications.
Currently, the most prevalent approach is to recognize human gait from RGB
images, owing to the progress of computer vision technologies. Nevertheless,
the perception capability of RGB cameras deteriorates under adverse conditions,
and visual surveillance may raise privacy concerns. Owing to the robustness and
non-invasive nature of millimeter-wave (mmWave) radar, radar-based gait
recognition has attracted increasing attention in recent years. In this
research, we propose a Hierarchical Dynamic Network (HDNet) for gait
recognition using mmWave radar. To capture richer dynamic information, we
propose point flow as a novel point-cloud descriptor. We also devise a dynamic
frame sampling module that improves computational efficiency without noticeably
degrading performance. To demonstrate the superiority of our method, we perform
extensive experiments on two public mmWave radar-based gait recognition
datasets, and the results show that our model outperforms existing
state-of-the-art methods.
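The abstract names a point flow descriptor and a dynamic frame sampling module but gives no further detail. As a rough illustration only, the sketch below assumes that point flow is the per-point displacement to the nearest neighbour in the previous radar frame, and that frames are subsampled more aggressively when the average flow magnitude is small; the function names, threshold, and stride are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def point_flow(prev_pts: np.ndarray, curr_pts: np.ndarray) -> np.ndarray:
    """Hypothetical 'point flow': for each point in the current radar frame, the
    displacement vector to its nearest neighbour in the previous frame.
    prev_pts: (N, 3), curr_pts: (M, 3) mmWave point-cloud coordinates."""
    # Pairwise Euclidean distances between current and previous points: (M, N).
    dists = np.linalg.norm(curr_pts[:, None, :] - prev_pts[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)          # index of the closest previous point
    return curr_pts - prev_pts[nearest]     # (M, 3) per-point motion vectors

def dynamic_frame_sampling(frames, motion_thresh=0.05, slow_stride=4):
    """Hypothetical dynamic frame sampling: keep every frame while the mean flow
    magnitude is large, and skip ahead by `slow_stride` frames when the sequence
    is nearly static, trading little accuracy for less computation."""
    kept, i = [0], 1
    while i < len(frames):
        flow = point_flow(frames[kept[-1]], frames[i])
        mean_motion = np.linalg.norm(flow, axis=-1).mean()
        kept.append(i)
        i += 1 if mean_motion > motion_thresh else slow_stride
    return kept  # indices of the frames forwarded to the recognition network

# Purely illustrative usage with random frames of 64 radar points each.
frames = [np.random.rand(64, 3) for _ in range(30)]
print(dynamic_frame_sampling(frames))
```

Under this reading, near-static segments of a gait sequence would be processed at a coarser frame rate, which is where the computational savings would come from.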
Related papers
- Redefining Automotive Radar Imaging: A Domain-Informed 1D Deep Learning Approach for High-Resolution and Efficient Performance [6.784861785632841]
Our study redefines radar imaging super-resolution as a one-dimensional (1D) signal super-resolution spectra estimation problem.
Our tailored deep learning network for automotive radar imaging exhibits remarkable scalability, parameter efficiency and fast inference speed.
Our SR-SPECNet sets a new benchmark in producing high-resolution radar range-azimuth images.
arXiv Detail & Related papers (2024-06-11T16:07:08Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - G3R: Generating Rich and Fine-grained mmWave Radar Data from 2D Videos for Generalized Gesture Recognition [19.95047010486547]
We develop a software pipeline that exploits abundant 2D videos to generate realistic radar data.
It addresses the challenge of simulating diversified and fine-grained reflection properties of user gestures.
We implement and evaluate G3R using 2D videos from public data sources and self-collected real-world radar data.
arXiv Detail & Related papers (2024-04-23T11:22:59Z) - Radar-Based Recognition of Static Hand Gestures in American Sign
Language [17.021656590925005]
This study explores the efficacy of synthetic data generated by an advanced radar ray-tracing simulator.
The simulator employs an intuitive material model that can be adjusted to introduce data diversity.
Despite exclusively training the NN on synthetic data, it demonstrates promising performance when put to the test with real measurement data.
arXiv Detail & Related papers (2024-02-20T08:19:30Z) - Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare.
Precise identification and localisation of radar pulse activities are required by electronic warfare systems.
Deep learning-based radar pulse activity recognition methods have remained largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z) - Semantic Segmentation of Radar Detections using Convolutions on Point
Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to point clouds of radar detections.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z) - DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z) - Differentiable Frequency-based Disentanglement for Aerial Video Action
Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z) - Cross Vision-RF Gait Re-identification with Low-cost RGB-D Cameras and
mmWave Radars [15.662787088335618]
This work studies the problem of cross-modal human re-identification (ReID).
We propose the first-of-its-kind vision-RF system for cross-modal multi-person ReID at the same time.
Our proposed system achieves 92.5% top-1 accuracy and 97.5% top-5 accuracy across 56 volunteers.
arXiv Detail & Related papers (2022-07-16T10:34:25Z) - Waveform Selection for Radar Tracking in Target Channels With Memory via
Universal Learning [14.796960833031724]
Adapting the radar's waveform using partial information about the state of the scene has been shown to provide performance benefits in many practical scenarios.
This work examines a radar system which builds a compressed model of the radar-environment interface in the form of a context-tree.
The proposed approach is tested in a simulation study, and is shown to provide tracking performance improvements over two state-of-the-art waveform selection schemes.
arXiv Detail & Related papers (2021-08-02T21:27:56Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences arising from its use.