Computer-Vision-Enabled Worker Video Analysis for Motion Amount Quantification
- URL: http://arxiv.org/abs/2405.13999v1
- Date: Wed, 22 May 2024 21:15:03 GMT
- Title: Computer-Vision-Enabled Worker Video Analysis for Motion Amount Quantification
- Authors: Hari Iyer, Neel Macwan, Shenghan Guo, Heejin Jeong
- Abstract summary: This paper introduces a novel framework based on computer vision to track and quantify the motion of workers' upper and lower limbs.
Using joint position data from posture estimation, the framework employs Hotelling's T$^2$ statistic to quantify and monitor motion amounts.
It provides a tool for enhancing worker safety and productivity through precision motion analysis and proactive ergonomic adjustments.
- Score: 2.7523980737007414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of physical workers is significantly influenced by the quantity of their motions. However, monitoring and assessing these motions is challenging due to the complexities of motion sensing, tracking, and quantification. Recent advancements have utilized in-situ video analysis for real-time observation of worker behaviors, enabling data-driven quantification of motion amounts. Nevertheless, there are limitations to monitoring worker movements using video data. This paper introduces a novel framework based on computer vision to track and quantify the motion of workers' upper and lower limbs, issuing alerts when the motion reaches critical thresholds. Using joint position data from posture estimation, the framework employs Hotelling's T$^2$ statistic to quantify and monitor motion amounts, integrating computer vision tools to address challenges in automated worker training and enhance exploratory research in this field. We collected data of participants performing lifting and moving tasks with large boxes and small wooden cubes, to simulate macro and micro assembly tasks respectively. It was found that the correlation between workers' joint motion amount and the Hotelling's T$^2$ statistic was approximately 35% greater for micro tasks compared to macro tasks, highlighting the framework's ability to identify fine-grained motion differences. This study demonstrates the effectiveness of the proposed system in real-time applications across various industry settings. It provides a tool for enhancing worker safety and productivity through precision motion analysis and proactive ergonomic adjustments.
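The paper does not publish reference code, but the core statistic it describes is standard: Hotelling's T$^2$ computed on per-frame joint-coordinate vectors against a baseline window, with an alert when the statistic crosses a threshold. Below is a minimal NumPy sketch of that computation; the function name, toy data, and the use of a pseudo-inverse for the covariance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hotelling_t2(baseline, samples):
    """Hotelling's T^2 for each sample relative to a baseline window.

    baseline: (n, p) array of joint-coordinate vectors (reference motion)
    samples:  (m, p) array of joint-coordinate vectors to monitor
    Returns an (m,) array of T^2 values, one per monitored frame.
    """
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    # pseudo-inverse guards against a singular covariance matrix
    cov_inv = np.linalg.pinv(cov)
    diff = samples - mu
    # quadratic form (x - mu)^T S^{-1} (x - mu), evaluated row-wise
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# toy example: 2-D coordinates of a single keypoint (e.g. a wrist from
# pose estimation); the second sample lies far outside the baseline motion
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(200, 2))
samples = np.array([[0.1, -0.2], [4.0, 4.0]])
t2 = hotelling_t2(baseline, samples)
```

In a monitoring loop, one would compare each `t2` value to a control limit (e.g. derived from an F- or chi-squared distribution) and raise an alert when the motion amount exceeds it.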
Related papers
- Event-Based Tracking Any Point with Motion-Augmented Temporal Consistency [58.719310295870024]
This paper presents an event-based framework for tracking any point.
It tackles the challenges posed by spatial sparsity and motion sensitivity in events.
It achieves 150% faster processing with competitive model parameters.
arXiv Detail & Related papers (2024-12-02T09:13:29Z)
- DeTra: A Unified Model for Object Detection and Trajectory Forecasting [68.85128937305697]
Our approach formulates the union of the two tasks as a trajectory refinement problem.
To tackle this unified task, we design a refinement transformer that infers the presence, pose, and multi-modal future behaviors of objects.
In our experiments, we observe that our model outperforms the state-of-the-art on Argoverse 2 Sensor and Open dataset.
arXiv Detail & Related papers (2024-06-06T18:12:04Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- Motion-Scenario Decoupling for Rat-Aware Video Position Prediction: Strategy and Benchmark [49.58762201363483]
We introduce RatPose, a bio-robot motion prediction dataset constructed by considering the influence factors of individuals and environments.
We propose a Dual-stream Motion-Scenario Decoupling framework that effectively separates scenario-oriented and motion-oriented features.
We demonstrate significant performance improvements of the proposed DMSD framework on different difficulty-level tasks.
arXiv Detail & Related papers (2023-05-17T14:14:31Z)
- Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts for Human Movement Analysis [0.0]
This paper presents seven datasets recorded using inertial-based motion capture.
The datasets contain professional gestures carried out by industrial operators and skilled craftsmen in real, in-situ conditions.
arXiv Detail & Related papers (2023-04-03T10:29:24Z)
- Mutual Information-Based Temporal Difference Learning for Human Pose Estimation in Video [16.32910684198013]
We present a novel multi-frame human pose estimation framework, which employs temporal differences across frames to model dynamic contexts.
To be specific, we design multi-stage entangled learning sequences conditioned on multi-stage differences to derive informative motion representation sequences.
This approach ranks No.1 in the Crowd Pose Estimation in Complex Events Challenge on the HiEve benchmark.
arXiv Detail & Related papers (2023-03-15T09:29:03Z)
- HumanMAC: Masked Motion Completion for Human Motion Prediction [62.279925754717674]
Human motion prediction is a classical problem in computer vision and computer graphics.
Previous efforts achieve strong empirical performance based on an encoding-decoding style.
In this paper, we propose a novel framework from a new perspective.
arXiv Detail & Related papers (2023-02-07T18:34:59Z)
- Few-shot human motion prediction for heterogeneous sensors [5.210197476419621]
We introduce the first few-shot motion approach that explicitly incorporates the spatial graph.
We show that our model can perform on par with the best approach so far when evaluating on tasks with a fixed output space.
arXiv Detail & Related papers (2022-12-22T15:06:24Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Understanding reinforcement learned crowds [9.358303424584902]
Reinforcement Learning methods are used to animate virtual agents.
It is not obvious what their real impact is, or how they affect the results.
We analyze some of these arbitrary choices in terms of their impact on the learning performance.
arXiv Detail & Related papers (2022-09-19T20:47:49Z)
- Data Science for Motion and Time Analysis with Modern Motion Sensor Data [14.105132549564873]
The motion-and-time analysis has been a popular research topic in operations research.
It is regaining attention as a continuous-improvement tool for lean manufacturing and smart factories.
This paper develops a framework for data-driven analysis of work motions and studies their correlations to work speeds or execution rates.
arXiv Detail & Related papers (2020-08-25T02:33:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.