Computer-Vision-Enabled Worker Video Analysis for Motion Amount Quantification
- URL: http://arxiv.org/abs/2405.13999v2
- Date: Tue, 19 Nov 2024 07:45:30 GMT
- Title: Computer-Vision-Enabled Worker Video Analysis for Motion Amount Quantification
- Authors: Hari Iyer, Neel Macwan, Shenghan Guo, Heejin Jeong
- Abstract summary: This paper introduces a novel framework for tracking and quantifying upper and lower limb motions.
Using joint position data from posture estimation, the framework employs Hotelling's $T^2$ statistic to quantify and monitor motion amounts.
The results indicate that the correlation between workers' joint motion amounts and Hotelling's $T^2$ statistic is approximately 35% higher for micro-tasks than macro-tasks.
- Score: 2.7523980737007414
- Abstract: The performance of physical workers is significantly influenced by the extent of their motions. However, monitoring and assessing these motions remains a challenge. Recent advancements have enabled in-situ video analysis for real-time observation of worker behaviors. This paper introduces a novel framework for tracking and quantifying upper and lower limb motions, issuing alerts when critical thresholds are reached. Using joint position data from posture estimation, the framework employs Hotelling's $T^2$ statistic to quantify and monitor motion amounts. The results indicate that the correlation between workers' joint motion amounts and Hotelling's $T^2$ statistic is approximately 35% higher for micro-tasks than macro-tasks, demonstrating the framework's ability to detect fine-grained motion differences. This study highlights the proposed system's effectiveness in real-time applications across various industry settings, providing a valuable tool for precision motion analysis and proactive ergonomic adjustments.
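For a concrete sense of the monitoring step, Hotelling's statistic scores each frame's joint-coordinate vector $\mathbf{x}_t$ against a baseline mean $\bar{\mathbf{x}}$ and covariance $S$ as $T^2_t = (\mathbf{x}_t - \bar{\mathbf{x}})^\top S^{-1} (\mathbf{x}_t - \bar{\mathbf{x}})$. The sketch below is a minimal illustration, not the paper's implementation: the function name, feature layout, baseline window, and chi-square alert threshold are all assumptions.

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2(baseline: np.ndarray, frames: np.ndarray) -> np.ndarray:
    """Per-frame Hotelling's T^2 scores against a baseline window.

    baseline: (n, p) flattened joint coordinates from a calibration period
    frames:   (m, p) flattened joint coordinates to monitor
    """
    mean = baseline.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(baseline, rowvar=False))  # pinv guards against singular covariance
    d = frames - mean
    # Quadratic form (x_t - xbar)^T S^{-1} (x_t - xbar), one score per frame.
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Toy usage with synthetic stand-ins for pose-estimator output
# (e.g., 17 joints x 2D coordinates = 34 features per frame).
rng = np.random.default_rng(0)
baseline = rng.normal(size=(300, 34))       # calibration frames
live = rng.normal(size=(100, 34)) + 0.5     # shifted motion to monitor
scores = hotelling_t2(baseline, live)
alerts = scores > chi2.ppf(0.99, df=34)     # one common (assumed) threshold choice
print(f"{alerts.sum()} of {len(scores)} frames exceed the threshold")
```

In a deployment, `baseline` and `live` would hold joint positions returned by a posture-estimation model rather than synthetic data.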
Related papers
- DeTra: A Unified Model for Object Detection and Trajectory Forecasting [68.85128937305697]
Our approach formulates the union of the two tasks as a trajectory refinement problem.
To tackle this unified task, we design a refinement transformer that infers the presence, pose, and multi-modal future behaviors of objects.
In our experiments, we observe that our model outperforms the state of the art on the Argoverse 2 Sensor and Waymo Open datasets.
arXiv Detail & Related papers (2024-06-06T18:12:04Z) - What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z) - Motion-Scenario Decoupling for Rat-Aware Video Position Prediction: Strategy and Benchmark [49.58762201363483]
We introduce RatPose, a bio-robot motion prediction dataset constructed by considering the influence factors of individuals and environments.
We propose a Dual-stream Motion-Scenario Decoupling framework that effectively separates scenario-oriented and motion-oriented features.
We demonstrate significant performance improvements of the proposed DMSD framework on tasks of different difficulty levels.
arXiv Detail & Related papers (2023-05-17T14:14:31Z) - Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts for Human Movement Analysis [0.0]
This paper presents seven datasets recorded using inertial-based motion capture.
The datasets contain professional gestures of industrial operators and skilled craftsmen, performed in-situ under real working conditions.
arXiv Detail & Related papers (2023-04-03T10:29:24Z) - Mutual Information-Based Temporal Difference Learning for Human Pose Estimation in Video [16.32910684198013]
We present a novel multi-frame human pose estimation framework, which employs temporal differences across frames to model dynamic contexts.
To be specific, we design multi-stage entangled learning sequences conditioned on multi-stage differences to derive informative motion representation sequences.
These designs place us at rank No. 1 in the Crowd Pose Estimation in Complex Events Challenge on the HiEve benchmark.
arXiv Detail & Related papers (2023-03-15T09:29:03Z) - HumanMAC: Masked Motion Completion for Human Motion Prediction [62.279925754717674]
Human motion prediction is a classical problem in computer vision and computer graphics.
Previous efforts achieve great empirical performance based on an encoding-decoding style.
In this paper, we propose a novel framework from a new perspective.
arXiv Detail & Related papers (2023-02-07T18:34:59Z) - Few-shot human motion prediction for heterogeneous sensors [5.210197476419621]
We introduce the first few-shot motion prediction approach that explicitly incorporates the spatial graph.
We show that our model performs on par with the best approach so far when evaluated on tasks with a fixed output space.
arXiv Detail & Related papers (2022-12-22T15:06:24Z) - Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z) - Understanding reinforcement learned crowds [9.358303424584902]
Reinforcement learning methods are used to animate virtual agents, yet it is not obvious what their real impact is or how their design choices affect the results.
We analyze some of these arbitrary choices in terms of their impact on learning performance.
arXiv Detail & Related papers (2022-09-19T20:47:49Z) - Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) consecutively regulates the intermediate representations to emphasize the novel information in the frame at the current time-stamp.
SRL sharply outperforms the existing state of the art in most cases on two egocentric and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z) - Data Science for Motion and Time Analysis with Modern Motion Sensor Data [14.105132549564873]
Motion-and-time analysis has been a popular research topic in operations research.
It is regaining attention as a continuous-improvement tool for lean manufacturing and smart factories.
This paper develops a framework for data-driven analysis of work motions and studies their correlations to work speeds or execution rates.
arXiv Detail & Related papers (2020-08-25T02:33:33Z)
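The motion-speed correlation studied in that last paper reduces to a simple computation once per-cycle features are in hand; the sketch below uses synthetic stand-in data and hypothetical variable names, purely to illustrate the shape of the analysis.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
# Hypothetical per-cycle features: a summed motion amount and the cycle duration.
motion_amount = rng.gamma(shape=2.0, scale=1.5, size=40)
cycle_time = 4.0 + 0.8 * motion_amount + rng.normal(0.0, 0.5, size=40)  # synthetic: more motion, longer cycles

r, p = pearsonr(motion_amount, cycle_time)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```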