A Comprehensive Review of Skeleton-based Movement Assessment Methods
- URL: http://arxiv.org/abs/2007.10737v3
- Date: Wed, 29 Jul 2020 23:29:30 GMT
- Title: A Comprehensive Review of Skeleton-based Movement Assessment Methods
- Authors: Tal Hakim
- Abstract summary: We review recent solutions for automatic movement assessment from skeleton videos.
We discuss the status of research on this topic at a high level.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rising availability of 3D cameras and the dramatic improvement of computer
vision algorithms over the past decade have accelerated research on automatic
movement assessment solutions. Such solutions can be implemented at home, using
affordable equipment and dedicated software. In this paper, we divide the
movement assessment task into secondary tasks and explain why they are needed
and how they can be addressed. We review recent solutions for automatic
movement assessment from skeleton videos, comparing them by their objectives,
features, movement domains, and algorithmic approaches. In addition, we discuss
the status of research on this topic at a high level.
Related papers
- Markerless Multi-view 3D Human Pose Estimation: a survey
3D human pose estimation aims to reconstruct the human skeleton of all the individuals in a scene by detecting several body joints.
No method is yet capable of solving all the challenges associated with the reconstruction of the 3D pose.
Further research is still required to develop an approach capable of quickly inferring a highly accurate 3D pose at an acceptable computational cost.
arXiv Detail & Related papers (2024-07-04T10:44:35Z)
- Deep Learning-Based Object Pose Estimation: A Comprehensive Survey
We discuss the recent advances in deep learning-based object pose estimation.
Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks.
arXiv Detail & Related papers (2024-05-13T14:44:22Z)
- Evaluation Framework for Feedback Generation Methods in Skeletal Movement Assessment
We propose terminology and criteria for the classification, evaluation, and comparison of feedback generation solutions.
To our knowledge, this is the first work that formulates feedback generation in skeletal movement assessment.
arXiv Detail & Related papers (2024-04-14T21:14:47Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey
This survey reviews event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
It categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- E^2TAD: An Energy-Efficient Tracking-based Action Detector
This paper presents a tracking-based solution to accurately and efficiently localize predefined key actions.
It won first place in the UAV-Video Track of the 2021 Low-Power Computer Vision Challenge (LPCVC).
arXiv Detail & Related papers (2022-04-09T07:52:11Z)
- Benchmarking Deep Reinforcement Learning Algorithms for Vision-based Robotics
This paper presents a benchmarking study of some of the state-of-the-art reinforcement learning algorithms used for solving two vision-based robotics problems.
The performance of these algorithms is compared in PyBullet's two simulation environments, known as KukaDiverseObjectEnv and RacecarZEDGymEnv, respectively.
arXiv Detail & Related papers (2022-01-11T22:45:25Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Self-supervised Video Object Segmentation by Motion Grouping
We develop a computer vision system able to segment objects by exploiting motion cues.
We introduce a simple variant of the Transformer to segment optical flow frames into primary objects and the background.
We evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2, and FBMS59).
arXiv Detail & Related papers (2021-04-15T17:59:32Z)
- Learning to Segment Rigid Motions from Two Frames
We propose a modular network, motivated by a geometric analysis of what independent object motions can be recovered from an egomotion field.
It takes two consecutive frames as input and predicts segmentation masks for the background and multiple rigidly moving objects, which are then parameterized by 3D rigid transformations.
Our method achieves state-of-the-art performance for rigid motion segmentation on KITTI and Sintel.
arXiv Detail & Related papers (2021-01-11T04:20:30Z)
- Attentional Separation-and-Aggregation Network for Self-supervised Depth-Pose Learning in Dynamic Scenes
Learning depth and ego-motion from unlabeled videos via self-supervision from epipolar projection can improve the robustness and accuracy of the 3D perception and localization of vision-based robots.
However, the rigid projection computed by ego-motion cannot represent all scene points, such as points on moving objects, leading to false guidance in these regions.
We propose an Attentional Separation-and-Aggregation Network (ASANet) which can learn to distinguish and extract the scene's static and dynamic characteristics via the attention mechanism.
arXiv Detail & Related papers (2020-11-18T16:07:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.