Exploration of Low-Cost but Accurate Radar-Based Human Motion Direction Determination
- URL: http://arxiv.org/abs/2507.22567v2
- Date: Mon, 04 Aug 2025 02:35:43 GMT
- Title: Exploration of Low-Cost but Accurate Radar-Based Human Motion Direction Determination
- Authors: Weicheng Gao
- Abstract summary: A low-cost but accurate radar-based human motion direction determination (HMDD) method is explored in this paper. The HMDD is implemented through a lightweight and fast Vision Transformer-Convolutional Neural Network hybrid model structure. The effectiveness of the proposed method is verified on an open-source dataset.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work was completed on a whim after discussions with my junior colleague. The motion direction angle affects the micro-Doppler spectrum width, so determining the human motion direction can provide important prior information for downstream tasks such as gait recognition. However, Doppler-Time map (DTM)-based methods still have room for improvement in achieving feature augmentation and motion determination simultaneously. In response, a low-cost but accurate radar-based human motion direction determination (HMDD) method is explored in this paper. In detail, the radar-based human gait DTMs are first generated, and feature augmentation is then achieved using a feature linking model. Subsequently, HMDD is implemented through a lightweight and fast Vision Transformer-Convolutional Neural Network hybrid model structure. The effectiveness of the proposed method is verified on an open-source dataset. The open-source code of this work is released at: https://github.com/JoeyBGOfficial/Low-Cost-Accurate-Radar-Based-Human-Motion-Direction-Determination
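The first stage of the pipeline described above, generating a Doppler-Time map from radar returns, is conventionally done with a short-time Fourier transform over the slow-time signal. The following is a minimal sketch of that step, not the paper's implementation: the function name, window/hop parameters, and the synthetic gait-like test signal are all illustrative assumptions.

```python
# Illustrative sketch: building a Doppler-Time map (DTM) from a slow-time
# radar signal via a short-time Fourier transform (STFT). Parameters and
# names are assumptions for demonstration, not the paper's actual code.
import numpy as np

def doppler_time_map(slow_time, win=64, hop=16):
    """Magnitude STFT: rows are Doppler bins, columns are time frames."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(slow_time) - win + 1, hop):
        segment = slow_time[start:start + win] * window
        spectrum = np.fft.fftshift(np.fft.fft(segment))  # center zero Doppler
        frames.append(np.abs(spectrum))
    return np.array(frames).T  # shape: (win, n_frames)

# Synthetic example: a sinusoidally modulated Doppler tone, crudely
# mimicking the periodic micro-Doppler signature of human gait.
fs = 1000.0                                  # slow-time sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.exp(1j * 2 * np.pi * (100 * t + 20 * np.sin(2 * np.pi * 1.5 * t)))
dtm = doppler_time_map(sig)
print(dtm.shape)  # (64, 122): 64 Doppler bins, 122 time frames
```

The width of the bright band in such a map varies with the motion direction angle, which is the prior the paper exploits; the downstream ViT-CNN classifier would consume images like this one.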
Related papers
- Through-the-Wall Radar Human Activity Recognition WITHOUT Using Neural Networks [0.0]
I would like to try to return to the original path by attempting to eschew neural networks to achieve the TWR HAR task. The micro-Doppler segmentation feature is discretized into a two-dimensional point cloud. The effectiveness of the proposed method is demonstrated by numerically simulated and measured experiments.
arXiv Detail & Related papers (2025-06-05T15:45:08Z) - TacoDepth: Towards Efficient Radar-Camera Depth Estimation with One-stage Fusion [54.46664104437454]
We propose TacoDepth, an efficient and accurate Radar-Camera depth estimation model with one-stage fusion. Specifically, the graph-based Radar structure extractor and the pyramid-based Radar fusion module are designed. Compared with the previous state-of-the-art approach, TacoDepth improves depth accuracy and processing speed by 12.8% and 91.8%.
arXiv Detail & Related papers (2025-04-16T05:25:04Z) - Generalizable Indoor Human Activity Recognition Method Based on Micro-Doppler Corner Point Cloud and Dynamic Graph Learning [12.032590125621155]
Through-the-wall radar (TWR) human activity recognition can be achieved by fusing micro-Doppler signature extraction and intelligent decision-making algorithms.
This paper proposes a generalizable indoor human activity recognition method based on micro-Doppler corner point cloud and dynamic graph learning.
arXiv Detail & Related papers (2024-10-10T02:24:07Z) - Through-the-Wall Radar Human Activity Micro-Doppler Signature Representation Method Based on Joint Boulic-Sinusoidal Pendulum Model [22.320147097092416]
This paper proposes a human activity micro-Doppler signature representation method based on joint Boulic-sinusoidal pendulum motion model.
The paper also calculates the minimum number of key points needed to describe the Doppler and micro-Doppler information sufficiently.
arXiv Detail & Related papers (2024-08-22T02:33:29Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - UnLoc: A Universal Localization Method for Autonomous Vehicles using
LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z) - HuPR: A Benchmark for Human Pose Estimation Using Millimeter Wave Radar [30.51398364813315]
This paper introduces a novel human pose estimation benchmark, Human Pose with Millimeter Wave Radar (HuPR)
This dataset is created using cross-calibrated mmWave radar sensors and a monocular RGB camera for cross-modality training of radar-based human pose estimation.
arXiv Detail & Related papers (2022-10-22T22:28:40Z) - Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z) - LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR
Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet fit the need of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z) - CubeLearn: End-to-end Learning for Human Motion Recognition from Raw
mmWave Radar Signals [40.53874877651099]
mmWave FMCW radar has attracted huge amount of research interest for human-centered applications in recent years.
Most existing pipelines are built upon conventional DFT pre-processing and deep neural network hybrid methods.
We propose a learnable pre-processing module, named CubeLearn, to directly extract features from raw radar signal.
arXiv Detail & Related papers (2021-11-07T00:45:51Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.