Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper
Learning
- URL: http://arxiv.org/abs/2302.08505v1
- Date: Wed, 18 Jan 2023 22:57:34 GMT
- Title: Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper
Learning
- Authors: Renjie Li, Chun Yu Lao, Rebecca St. George, Katherine Lawler, Saurabh
Garg, Son N. Tran, Quan Bai, Jane Alty
- Abstract summary: Small deficits in movement are often the first sign of an underlying neurological problem.
We develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT)
RMT can track the fastest human movement accurately when webcams or laptop cameras are used.
- Score: 10.086410807283746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective: The coordination of human movement directly reflects
the function of the central nervous system. Small deficits in movement are
often the first sign of an underlying neurological problem. The objective of
this research is to develop a new end-to-end, deep learning-based system,
Rapid-Motion-Track (RMT), that can accurately track the fastest human
movements when webcams or laptop cameras are used.
Materials and Methods: We applied RMT to finger tapping, a well-validated test
of motor control and one of the most challenging human motions to track with
computer vision due to the small keypoints of the digits and the high
velocities they generate. We recorded 160 finger-tapping assessments
simultaneously with a standard 2D laptop camera (30 frames/sec) and a
high-speed wearable sensor-based 3D motion tracking system (250 frames/sec).
RMT and a range of DeepLabCut (DLC) models were applied to the video data,
with tapping frequencies up to 8 Hz, to extract movement features (a toy
sketch of such feature extraction follows the abstract).
Results: The movement features (e.g. speed, rhythm, variance) identified with
the new RMT system exhibited very high concurrent validity with the
gold-standard measurements (97.3% of RMT measures were within +/-0.5 Hz of the
Optotrak measures) and outperformed DLC and other advanced computer vision
tools (around 88.2% of DLC measures were within +/-0.5 Hz of the Optotrak
measures); a sketch of this agreement check also follows the abstract. RMT
also accurately tracked a range of other rapid human movements such as foot
tapping, head turning and sit-to-stand movements.
Conclusion: With the ubiquity of video technology in smart devices, the RMT
method holds the potential to transform the accessibility and accuracy of
human movement assessment.
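
A minimal sketch of the kind of feature extraction the Methods describe:
fingertip keypoint tracks in, frequency/speed/rhythm features out. Only the
30 frames/sec camera rate and the feature names (speed, rhythm, variance)
come from the abstract; the aperture signal, the peak picking, and the
function `tapping_features` are illustrative assumptions, not the authors'
released implementation.

```python
import numpy as np

FPS = 30  # standard laptop camera rate reported in the paper

def tapping_features(thumb_xy: np.ndarray, index_xy: np.ndarray) -> dict:
    """thumb_xy, index_xy: (n_frames, 2) fingertip keypoint tracks in pixels."""
    # Aperture signal: thumb-to-index distance oscillates once per tap cycle.
    aperture = np.linalg.norm(thumb_xy - index_xy, axis=1)
    aperture = aperture - aperture.mean()

    # Dominant tapping frequency from the FFT peak. A 30 fps camera has a
    # 15 Hz Nyquist limit, so 8 Hz tapping is (barely) resolvable.
    spectrum = np.abs(np.fft.rfft(aperture))
    freqs = np.fft.rfftfreq(len(aperture), d=1.0 / FPS)
    freq_hz = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin

    # Speed: frame-to-frame aperture change, in pixels per second.
    speed = np.abs(np.diff(aperture)) * FPS

    # Rhythm: variability of inter-tap intervals between successive
    # aperture peaks (naive local-maximum peak picking).
    peaks = np.flatnonzero(
        (aperture[1:-1] > aperture[:-2]) & (aperture[1:-1] > aperture[2:])
    ) + 1
    intervals = np.diff(peaks) / FPS
    rhythm_cv = intervals.std() / intervals.mean() if len(intervals) > 1 else np.nan
    return {
        "frequency_hz": float(freq_hz),
        "mean_speed": float(speed.mean()),
        "rhythm_cv": float(rhythm_cv),
        "amplitude_variance": float(aperture.var()),
    }
```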
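
The concurrent-validity figure in the Results (percentage of measures within
+/-0.5 Hz of the Optotrak gold standard) reduces to a one-line check. Only
the 0.5 Hz tolerance comes from the abstract; the function name and the
example numbers below are made up.

```python
import numpy as np

def within_half_hz(camera_hz: np.ndarray, optotrak_hz: np.ndarray) -> float:
    """Percentage of paired frequency measures agreeing to +/-0.5 Hz."""
    return 100.0 * (np.abs(camera_hz - optotrak_hz) <= 0.5).mean()

# Toy example: 3 of 4 assessments fall within the tolerance.
rmt = np.array([2.1, 4.0, 6.2, 7.9])
opto = np.array([2.0, 4.3, 6.0, 8.6])
print(f"{within_half_hz(rmt, opto):.1f}% within +/-0.5 Hz")  # 75.0%
```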
Related papers
- Ultrafast vision perception by neuromorphic optical flow [1.1980928503177917]
A 3D neuromorphic optical flow method embeds external motion features directly into hardware.
In our demonstration, this approach reduces visual data processing time by an average of 0.3 seconds.
The neuromorphic optical flow algorithm's flexibility allows seamless integration with existing algorithms.
arXiv Detail & Related papers (2024-09-10T10:59:32Z)
- Motion-Guided Dual-Camera Tracker for Endoscope Tracking and Motion Analysis in a Mechanical Gastric Simulator [5.073179848641095]
The motion-guided dual-camera vision tracker is proposed to provide robust and accurate tracking of the endoscope tip's 3D position.
The proposed tracker achieves superior performance against state-of-the-art vision trackers, with 42% and 72% improvements over the second-best method in average error and maximum error, respectively.
arXiv Detail & Related papers (2024-03-08T08:31:46Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human intervention, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Deep learning-based approaches for human motion decoding in smart walkers for rehabilitation [3.8791511769387634]
Smart walkers should be able to decode human motion and needs, as early as possible.
Current walkers decode motion intention using information from wearable or embedded sensors.
A contactless approach is proposed, framing human motion decoding as an early action recognition/detection problem.
arXiv Detail & Related papers (2023-01-13T14:29:44Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- D&D: Learning Human Dynamics from Dynamic Camera [55.60512353465175]
We present D&D (Learning Human Dynamics from Dynamic Camera), which leverages the laws of physics to reconstruct 3D human motion from in-the-wild videos with a moving camera.
Our approach is entirely neural-based and runs without offline optimization or simulation in physics engines.
arXiv Detail & Related papers (2022-09-19T06:51:02Z)
- Neural Monocular 3D Human Motion Capture with Physical Awareness [76.55971509794598]
We present a new trainable system for physically plausible markerless 3D human motion capture.
Unlike most neural methods for human motion capture, our approach is aware of physical and environmental constraints.
It produces smooth and physically principled 3D motions at an interactive frame rate in a wide variety of challenging scenes.
arXiv Detail & Related papers (2021-05-03T17:57:07Z)
- Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict approximately accurate 2D and 3D kinematic poses from video, but these still contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
- When We First Met: Visual-Inertial Person Localization for Co-Robot Rendezvous [29.922954461039698]
We propose a method to learn a visual-inertial feature space in which the motion of a person in video can be easily matched to the motion measured by a wearable inertial measurement unit (IMU).
The proposed method can localize a target person with 80.7% accuracy using only 5 seconds of IMU data and video.
arXiv Detail & Related papers (2020-06-17T16:15:01Z)
- A Time-Delay Feedback Neural Network for Discriminating Small, Fast-Moving Targets in Complex Dynamic Environments [8.645725394832969]
Discriminating small moving objects within complex visual environments is a significant challenge for autonomous micro robots.
We propose an STMD-based neural network with feedback connection (Feedback STMD), where the network output is temporally delayed, then fed back to the lower layers to mediate neural responses.
arXiv Detail & Related papers (2019-12-29T03:10:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.