milliFlow: Scene Flow Estimation on mmWave Radar Point Cloud for Human Motion Sensing
- URL: http://arxiv.org/abs/2306.17010v8
- Date: Fri, 12 Jul 2024 18:56:07 GMT
- Title: milliFlow: Scene Flow Estimation on mmWave Radar Point Cloud for Human Motion Sensing
- Authors: Fangqiang Ding, Zhen Luo, Peijun Zhao, Chris Xiaoxuan Lu
- Abstract summary: mmWave radars have gained popularity due to their privacy-friendly features.
We propose milliFlow, a novel deep learning approach to estimate scene flow as complementary motion information for mmWave point cloud.
- Score: 11.541217629396373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human motion sensing plays a crucial role in smart systems for decision-making, user interaction, and personalized services. Extensive research in this area has been predominantly based on cameras, whose intrusive nature limits their use in smart home applications. To address this, mmWave radars have gained popularity due to their privacy-friendly features. In this work, we propose milliFlow, a novel deep learning approach that estimates scene flow as complementary motion information for mmWave point clouds, serving as an intermediate level of features that directly benefits downstream human motion sensing tasks. Experimental results demonstrate the superior performance of our method compared with competing approaches. Furthermore, by incorporating scene flow information, we achieve remarkable improvements in human activity recognition and human parsing, and support human body part tracking. Code and dataset are available at https://github.com/Toytiny/milliFlow.
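milliFlow itself is a learned model whose details are in the paper; as a self-contained illustration of what a scene flow estimate over two consecutive point cloud frames looks like, here is a naive nearest-neighbour baseline (all names here are illustrative, not taken from the paper's code):

```python
import numpy as np

def nn_scene_flow(pc_t, pc_t1):
    """Estimate per-point scene flow between two point clouds by
    nearest-neighbour association (a naive baseline, not milliFlow).

    pc_t, pc_t1: (N, 3) and (M, 3) arrays of 3D points at times t and t+1.
    Returns an (N, 3) array of flow vectors, one per point in pc_t.
    """
    # Pairwise squared distances between the two frames: shape (N, M)
    d2 = ((pc_t[:, None, :] - pc_t1[None, :, :]) ** 2).sum(-1)
    # For each point at time t, take its closest point at time t+1
    nn = d2.argmin(axis=1)
    # Flow vector = displacement to the matched point
    return pc_t1[nn] - pc_t

# Toy example: a 3-point cloud rigidly translated by (0.1, 0, 0)
pc_t = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pc_t1 = pc_t + np.array([0.1, 0.0, 0.0])
flow = nn_scene_flow(pc_t, pc_t1)
```

A learned approach like milliFlow is needed precisely because such nearest-neighbour matching breaks down on sparse, noisy mmWave returns with no point-to-point correspondence across frames.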
Related papers
- Integrating Temporal Context into Streaming Data for Human Activity Recognition in Smart Home [3.1032184155196982]
Human Activity Recognition (HAR) from passive sensors mostly relies on traditional machine learning. We tackle this by clustering activities into morning, afternoon, and night. We propose to extend the feature vector by incorporating time of day and day of week as cyclical temporal features.
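Cyclical temporal features of this kind are conventionally built with a sine/cosine encoding so that wrap-around points (hour 23 and hour 0, Sunday and Monday) land close together in feature space; a minimal sketch (the function name is illustrative, not from the cited paper):

```python
import math

def cyclical_encode(value, period):
    """Encode a periodic quantity (e.g. hour of day with period 24,
    day of week with period 7) as a point on the unit circle."""
    angle = 2.0 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Hour-of-day and day-of-week features for 6 a.m. on a Sunday
hour_feat = cyclical_encode(6, 24)  # quarter turn around the circle
dow_feat = cyclical_encode(6, 7)
```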
arXiv Detail & Related papers (2026-01-09T09:47:06Z) - Exploration of Low-Cost but Accurate Radar-Based Human Motion Direction Determination [0.0]
A low-cost but accurate radar-based human motion direction determination (HMDD) method is explored in this paper. The HMDD is implemented through a lightweight and fast Vision Transformer-Convolutional Neural Network hybrid model. The effectiveness of the proposed method is verified on an open-source dataset.
arXiv Detail & Related papers (2025-07-30T10:48:36Z) - SUPER: Seated Upper Body Pose Estimation using mmWave Radars [6.205521056622584]
SUPER is a framework for seated upper-body human pose estimation that utilizes dual mmWave radars in close proximity.
A lightweight neural network extracts both global and local features of the upper body and outputs pose parameters for the Skinned Multi-Person Linear (SMPL) model.
arXiv Detail & Related papers (2024-07-02T17:32:34Z) - Aligning Human Motion Generation with Human Perceptions [51.831338643012444]
We propose a data-driven approach to bridge the gap by introducing a large-scale human perceptual evaluation dataset, MotionPercept, and a human motion critic model, MotionCritic.
Our critic model offers a more accurate metric for assessing motion quality and could be readily integrated into the motion generation pipeline.
arXiv Detail & Related papers (2024-07-02T14:01:59Z) - CoNav: A Benchmark for Human-Centered Collaborative Navigation [66.6268966718022]
We propose a collaborative navigation (CoNav) benchmark.
Our CoNav tackles the critical challenge of constructing a 3D navigation environment with realistic and diverse human activities.
We propose an intention-aware agent for reasoning both long-term and short-term human intention.
arXiv Detail & Related papers (2024-06-04T15:44:25Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - MiliPoint: A Point Cloud Dataset for mmWave Radar [12.084565337833792]
Millimetre-wave (mmWave) radar has emerged as an attractive and cost-effective alternative for human activity sensing.
mmWave radars are non-intrusive, providing better protection for user privacy.
However, as a Radio Frequency (RF) based technology, mmWave radars rely on capturing reflected signals from objects, making them more prone to noise compared to cameras.
This raises an intriguing question for the deep learning community: Can we develop more effective point set-based deep learning methods for such attractive sensors?
arXiv Detail & Related papers (2023-09-23T16:32:36Z) - A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z) - Differentiable Frequency-based Disentanglement for Aerial Video Action Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z) - GIMO: Gaze-Informed Human Motion Prediction in Context [75.52839760700833]
We propose a large-scale human motion dataset that delivers high-quality body pose sequences, scene scans, and ego-centric views with eye gaze.
Our data collection is not tied to specific scenes, which further boosts the motion dynamics observed from our subjects.
To realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches.
arXiv Detail & Related papers (2022-04-20T13:17:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.