Computer-Vision-Enabled Worker Video Analysis for Motion Amount Quantification
- URL: http://arxiv.org/abs/2405.13999v1
- Date: Wed, 22 May 2024 21:15:03 GMT
- Title: Computer-Vision-Enabled Worker Video Analysis for Motion Amount Quantification
- Authors: Hari Iyer, Neel Macwan, Shenghan Guo, Heejin Jeong
- Abstract summary: This paper introduces a novel framework based on computer vision to track and quantify the motion of workers' upper and lower limbs.
Using joint position data from posture estimation, the framework employs Hotelling's T$^2$ statistic to quantify and monitor motion amounts.
It provides a tool for enhancing worker safety and productivity through precision motion analysis and proactive ergonomic adjustments.
- Score: 2.7523980737007414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of physical workers is significantly influenced by the quantity of their motions. However, monitoring and assessing these motions is challenging due to the complexities of motion sensing, tracking, and quantification. Recent advancements have utilized in-situ video analysis for real-time observation of worker behaviors, enabling data-driven quantification of motion amounts. Nevertheless, there are limitations to monitoring worker movements using video data. This paper introduces a novel framework based on computer vision to track and quantify the motion of workers' upper and lower limbs, issuing alerts when the motion reaches critical thresholds. Using joint position data from posture estimation, the framework employs Hotelling's T$^2$ statistic to quantify and monitor motion amounts, integrating computer vision tools to address challenges in automated worker training and enhance exploratory research in this field. We collected data of participants performing lifting and moving tasks with large boxes and small wooden cubes, to simulate macro and micro assembly tasks respectively. It was found that the correlation between workers' joint motion amount and the Hotelling's T$^2$ statistic was approximately 35% greater for micro tasks compared to macro tasks, highlighting the framework's ability to identify fine-grained motion differences. This study demonstrates the effectiveness of the proposed system in real-time applications across various industry settings. It provides a tool for enhancing worker safety and productivity through precision motion analysis and proactive ergonomic adjustments.
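The core computation can be made concrete. The abstract does not give the exact formulation, but a plausible reading, following standard multivariate statistical process control, is to treat each frame's pose-estimated joint coordinates as a multivariate observation $x_t$ and score it against a baseline mean $\bar{x}$ and covariance $S$ via Hotelling's T$^2$: $T^2_t = (x_t - \bar{x})^\top S^{-1} (x_t - \bar{x})$, raising an alert when $T^2_t$ exceeds a control limit. The sketch below is a minimal illustration under those assumptions, not the authors' code; the function names, the F-distribution control limit, and the simulated data are all illustrative.

```python
# Minimal sketch (assumed formulation, not the paper's implementation):
# per-frame Hotelling's T^2 monitoring over pose-estimated joint coordinates.
import numpy as np
from scipy import stats

def fit_baseline(baseline: np.ndarray):
    """Estimate the in-control mean and inverse covariance from (n, d) baseline frames."""
    mean = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    # Pseudo-inverse guards against singular covariance matrices.
    return mean, np.linalg.pinv(cov)

def t2_statistic(frame: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Hotelling's T^2 = (x - mean)^T S^{-1} (x - mean) for one observation."""
    diff = frame - mean
    return float(diff @ cov_inv @ diff)

def alert_threshold(n: int, d: int, alpha: float = 0.01) -> float:
    """Standard F-distribution control limit for individual observations (assumed choice)."""
    f = stats.f.ppf(1 - alpha, d, n - d)
    return d * (n + 1) * (n - 1) / (n * (n - d)) * f

# Usage: flag frames whose motion deviates beyond the control limit.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 6))   # simulated: 3 joints x (x, y) over 200 frames
mean, cov_inv = fit_baseline(baseline)
limit = alert_threshold(n=200, d=6)
frame = rng.normal(size=6) + 2.0       # simulated frame with a large motion excursion
if t2_statistic(frame, mean, cov_inv) > limit:
    print("motion amount exceeds critical threshold")
```

The pseudo-inverse is a deliberate choice here: adjacent limb keypoints are often strongly correlated, which can make the sample covariance near-singular, and a plain inverse would then fail.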
Related papers
- M3Bench: Benchmarking Whole-body Motion Generation for Mobile Manipulation in 3D Scenes [66.44171200767839]
We propose M3Bench, a new benchmark of whole-body motion generation for mobile manipulation tasks.
M3Bench requires an embodied agent to understand its configuration, environmental constraints and task objectives.
M3Bench features 30k object rearrangement tasks across 119 diverse scenes, providing expert demonstrations generated by our newly developed M3BenchMaker.
arXiv Detail & Related papers (2024-10-09T08:38:21Z) - A vision-based framework for human behavior understanding in industrial assembly lines [0.7037008937757392]
This paper introduces a vision-based framework for capturing and understanding human behavior in industrial assembly lines.
The framework leverages advanced computer vision techniques to estimate workers' locations and 3D poses and analyze work postures, actions, and task progress.
A key contribution is the introduction of the CarDA dataset, which contains domain-relevant assembly actions captured in a realistic setting.
arXiv Detail & Related papers (2024-09-25T21:03:13Z) - MobileAIBench: Benchmarking LLMs and LMMs for On-Device Use Cases [81.70591346986582]
We introduce MobileAIBench, a benchmarking framework for evaluating Large Language Models (LLMs) and Large Multimodal Models (LMMs) on mobile devices.
MobileAIBench assesses models across different sizes, quantization levels, and tasks, measuring latency and resource consumption on real devices.
arXiv Detail & Related papers (2024-06-12T22:58:12Z) - Large Motion Model for Unified Multi-Modal Motion Generation [50.56268006354396]
Large Motion Model (LMM) is a motion-centric, multi-modal framework that unifies mainstream motion generation tasks into a generalist model.
LMM tackles the challenges of unified motion generation from three principled aspects.
arXiv Detail & Related papers (2024-04-01T17:55:11Z) - Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts
for Human Movement Analysis [0.0]
This paper presents seven datasets recorded using inertial-based motion capture.
The datasets contain professional gestures carried out by industrial operators and skilled craftsmen in real, in-situ conditions.
arXiv Detail & Related papers (2023-04-03T10:29:24Z) - Mutual Information-Based Temporal Difference Learning for Human Pose
Estimation in Video [16.32910684198013]
We present a novel multi-frame human pose estimation framework, which employs temporal differences across frames to model dynamic contexts.
To be specific, we design multi-stage entangled learning sequences conditioned on multi-stage differences to derive informative motion representation sequences.
These results place us at rank No.1 in the Crowd Pose Estimation in Complex Events Challenge on the HiEve benchmark.
arXiv Detail & Related papers (2023-03-15T09:29:03Z) - Scalable and Real-time Multi-Camera Vehicle Detection,
Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z) - MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic
Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z) - AMP: Adversarial Motion Priors for Stylized Physics-Based Character
Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z) - Data Science for Motion and Time Analysis with Modern Motion Sensor Data [14.105132549564873]
Motion-and-time analysis has been a popular research topic in operations research.
It is regaining attention as a continuous-improvement tool for lean manufacturing and smart factories.
This paper develops a framework for data-driven analysis of work motions and studies their correlations to work speeds or execution rates.
arXiv Detail & Related papers (2020-08-25T02:33:33Z) - Taskology: Utilizing Task Relations at Scale [28.09712466727001]
We show that we can leverage the inherent relationships among collections of tasks, as they are trained jointly.
Explicitly utilizing the relationships between tasks improves their performance while dramatically reducing the need for labeled data.
We demonstrate our framework on subsets of the following collection of tasks: depth and normal prediction, semantic segmentation, 3D motion and ego-motion estimation, and object tracking and 3D detection in point clouds.
arXiv Detail & Related papers (2020-05-14T22:53:46Z)