Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper
Learning
- URL: http://arxiv.org/abs/2302.08505v1
- Date: Wed, 18 Jan 2023 22:57:34 GMT
- Title: Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper
Learning
- Authors: Renjie Li, Chun Yu Lao, Rebecca St. George, Katherine Lawler, Saurabh
Garg, Son N. Tran, Quan Bai, Jane Alty
- Abstract summary: Small deficits in movement are often the first sign of an underlying neurological problem.
We develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT).
RMT can track the fastest human movement accurately when webcams or laptop cameras are used.
- Score: 10.086410807283746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective The coordination of human movement directly reflects function of
the central nervous system. Small deficits in movement are often the first sign
of an underlying neurological problem. The objective of this research is to
develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT)
that can track the fastest human movement accurately when webcams or laptop
cameras are used.
Materials and Methods We applied RMT to finger tapping, a well-validated test
of motor control that is one of the most challenging human motions to track
with computer vision due to the small keypoints of digits and the high
velocities that are generated. We recorded 160 finger tapping assessments
simultaneously with a standard 2D laptop camera (30 frames/sec) and a
high-speed wearable sensor-based 3D motion tracking system (250 frames/sec).
RMT and a range of DLC models were applied to the video data with tapping
frequencies up to 8Hz to extract movement features.
Results The movement features (e.g. speed, rhythm, variance) identified with
the new RMT system exhibited very high concurrent validity with the
gold-standard measurements (97.3% of RMT measures were within +/-0.5Hz of the
Optotrak measures), and outperformed DLC and other advanced computer vision
tools (around 88.2% of DLC measures were within +/-0.5Hz of the Optotrak
measures). RMT also accurately tracked a range of other rapid human movements
such as foot tapping, head turning and sit-to-stand movements.
Conclusion: With the ubiquity of video technology in smart devices, the RMT
method holds potential to transform access and accuracy of human movement
assessment.
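The methodology above reduces to two computable steps: extracting a dominant tapping frequency from a fingertip keypoint trace, and checking what fraction of video-derived measures fall within +/-0.5 Hz of the gold-standard (Optotrak) measures. As a minimal sketch of that kind of analysis (not the authors' actual RMT pipeline; the function names and the synthetic trace are illustrative assumptions), an FFT-based estimate and an agreement check might look like:

```python
import numpy as np

def tapping_frequency(trace, fps):
    """Estimate the dominant tapping frequency (Hz) of a 1-D keypoint trace."""
    y = np.asarray(trace, dtype=float)
    y = y - y.mean()                            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(y.size, d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin

def agreement_within(a, b, tol=0.5):
    """Fraction of paired measurements whose difference is within +/- tol Hz."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.mean(np.abs(a - b) <= tol))

# Synthetic example: a 4 Hz tap recorded at 30 frames/sec for 5 seconds.
fps, n_frames = 30, 150
t = np.arange(n_frames) / fps
rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 4.0 * t) + 0.1 * rng.standard_normal(t.size)
print(round(tapping_frequency(trace, fps), 3))  # → 4.0 (bin width 30/150 = 0.2 Hz)
```

Note that at 30 fps the Nyquist limit is 15 Hz, so an 8 Hz tap is in principle resolvable, but the frequency resolution is fps divided by the number of frames, which is why longer recordings give finer estimates.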
Related papers
- MI-DETR: A Strong Baseline for Moving Infrared Small Target Detection with Bio-Inspired Motion Integration [63.87179575890912]
We propose Motion Integration DETR (MI-DETR), a bio-inspired dual-pathway detector for infrared small target detection.
First, a retina-inspired cellular automaton (RCA) converts raw frame sequences into a motion map defined on the same pixel grid as the appearance image.
Second, a Parvocellular-Magnocellular Interconnection (PMI) Block facilitates bidirectional feature interaction between the two pathways.
arXiv Detail & Related papers (2026-03-05T11:39:31Z)
- A Machine Learning-Based Multimodal Framework for Wearable Sensor-Based Archery Action Recognition and Stress Estimation [21.9818193435855]
Motion analysis systems are often expensive and intrusive, limiting their use in natural training environments.
We propose a machine learning-based framework that integrates wearable sensor data for simultaneous action recognition and stress estimation.
arXiv Detail & Related papers (2025-11-18T02:16:33Z)
- SONIC: Supersizing Motion Tracking for Natural Humanoid Whole-Body Control [85.91101551600978]
We show that scaling up model capacity, data, and compute yields a generalist humanoid controller capable of creating natural and robust whole-body movements.
We build a foundation model for motion tracking by scaling along three axes: network size, dataset volume, and compute.
We show the practical utility of our model through two mechanisms: (1) a real-time universal kinematic planner that bridges motion tracking to downstream task execution, enabling natural and interactive control, and (2) a unified token space that supports various motion input interfaces.
arXiv Detail & Related papers (2025-11-11T04:37:40Z)
- MOTION: ML-Assisted On-Device Low-Latency Motion Recognition [5.0385144315892925]
We use WeBe Band, a multi-sensor wearable device equipped with a powerful enough MCU to effectively perform gesture recognition entirely on the device.
We found that the neural network provided the best balance between accuracy, latency, and memory use.
Our results also demonstrate that reliable real-time gesture recognition can be achieved in WeBe Band, with great potential for real-time medical monitoring solutions.
arXiv Detail & Related papers (2025-10-14T01:15:47Z)
- ResMimic: From General Motion Tracking to Humanoid Whole-body Loco-Manipulation via Residual Learning [59.64325421657381]
Humanoid whole-body loco-manipulation promises transformative capabilities for daily service and warehouse tasks.
We introduce ResMimic, a two-stage residual learning framework for precise and expressive humanoid control from human motion data.
Results show substantial gains in task success, training efficiency, and robustness over strong baselines.
arXiv Detail & Related papers (2025-10-06T17:47:02Z)
- Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation [50.34179054785646]
We present Taccel, a high-performance simulation platform that integrates IPC and ABD to model robots, tactile sensors, and objects with both accuracy and unprecedented speed.
Taccel provides precise physics simulation and realistic tactile signals while supporting flexible robot-sensor configurations through user-friendly APIs.
These capabilities position Taccel as a powerful tool for scaling up tactile robotics research and development.
arXiv Detail & Related papers (2025-04-17T12:57:11Z)
- Capturing complex hand movements and object interactions using machine learning-powered stretchable smart textile gloves [9.838013581109681]
Real-time tracking of dexterous hand movements has numerous applications in human-computer interaction, metaverse, robotics, and tele-health.
Here, we report accurate and dynamic tracking of articulated hand and finger movements using stretchable, washable smart gloves with embedded helical sensor yarns and inertial measurement units.
The sensor yarns have a high dynamic range, responding to strains from as low as 0.005% to as high as 155%, and show stability during extensive use and washing cycles.
arXiv Detail & Related papers (2024-10-03T05:32:16Z)
- Ultrafast vision perception by neuromorphic optical flow [1.1980928503177917]
A 3D neuromorphic optical flow method embeds external motion features directly into hardware.
In our demonstration, this approach reduces visual data processing time by an average of 0.3 seconds.
Neuromorphic optical flow algorithm's flexibility allows seamless integration with existing algorithms.
arXiv Detail & Related papers (2024-09-10T10:59:32Z)
- Motion-Guided Dual-Camera Tracker for Endoscope Tracking and Motion Analysis in a Mechanical Gastric Simulator [5.073179848641095]
The motion-guided dual-camera vision tracker is proposed to provide robust and accurate tracking of the endoscope tip's 3D position.
The proposed tracker achieves superior performance against state-of-the-art vision trackers, achieving 42% and 72% improvements against the second-best method in average error and maximum error.
arXiv Detail & Related papers (2024-03-08T08:31:46Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas which impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Deep learning-based approaches for human motion decoding in smart walkers for rehabilitation [3.8791511769387634]
Smart walkers should be able to decode human motion and needs, as early as possible.
Current walkers decode motion intention using information of wearable or embedded sensors.
A contactless approach is proposed, addressing human motion decoding as an early action recognition/detection problematic.
arXiv Detail & Related papers (2023-01-13T14:29:44Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- D&D: Learning Human Dynamics from Dynamic Camera [55.60512353465175]
We present D&D (Learning Human Dynamics from Dynamic Camera), which leverages the laws of physics to reconstruct 3D human motion from the in-the-wild videos with a moving camera.
Our approach is entirely neural-based and runs without offline optimization or simulation in physics engines.
arXiv Detail & Related papers (2022-09-19T06:51:02Z)
- Neural Monocular 3D Human Motion Capture with Physical Awareness [76.55971509794598]
We present a new trainable system for physically plausible markerless 3D human motion capture.
Unlike most neural methods for human motion capture, our approach is aware of physical and environmental constraints.
It produces smooth and physically principled 3D motions in an interactive frame rate in a wide variety of challenging scenes.
arXiv Detail & Related papers (2021-05-03T17:57:07Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
- When We First Met: Visual-Inertial Person Localization for Co-Robot Rendezvous [29.922954461039698]
We propose a method to learn a visual-inertial feature space in which the motion of a person in video can be easily matched to the motion measured by a wearable inertial measurement unit (IMU).
Our proposed method is able to accurately localize a target person with 80.7% accuracy using only 5 seconds of IMU data and video.
arXiv Detail & Related papers (2020-06-17T16:15:01Z)
- A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments [8.645725394832969]
Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots.
We propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses.
arXiv Detail & Related papers (2019-12-29T03:10:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.