Occlusion-Robust Multi-Sensory Posture Estimation in Physical
Human-Robot Interaction
- URL: http://arxiv.org/abs/2208.06494v1
- Date: Fri, 12 Aug 2022 20:41:09 GMT
- Title: Occlusion-Robust Multi-Sensory Posture Estimation in Physical
Human-Robot Interaction
- Authors: Amir Yazdani, Roya Sabbagh Novin, Andrew Merryweather, Tucker Hermans
- Abstract summary: We use 2D postures from OpenPose over a single camera, and
the trajectory of the interacting robot while the human performs a task.
We show that our multi-sensory system resolves human kinematic redundancy better than posture estimation solely using OpenPose or posture estimation solely using the robot's trajectory.
- Score: 10.063075560468798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D posture estimation is important in analyzing and improving ergonomics in
physical human-robot interaction and reducing the risk of musculoskeletal
disorders. Vision-based posture estimation approaches are prone to sensor and
model errors, as well as occlusion, while posture estimation solely from the
interacting robot's trajectory suffers from ambiguous solutions. To benefit
from the advantages of both approaches and improve upon their drawbacks, we
introduce a low-cost, non-intrusive, and occlusion-robust multi-sensory 3D
postural estimation algorithm in physical human-robot interaction. We use 2D
postures from OpenPose over a single camera, and the trajectory of the
interacting robot while the human performs a task. We model the problem as a
partially-observable dynamical system and we infer the 3D posture via a
particle filter. We present our work in teleoperation, but it can be
generalized to other applications of physical human-robot interaction. We show
that our multi-sensory system resolves human kinematic redundancy better than
posture estimation solely using OpenPose or posture estimation solely using the
robot's trajectory. This increases the accuracy of the estimated postures when
evaluated against gold-standard motion capture postures. Moreover, our approach
also outperforms the single-sensory methods in postural assessment using the
RULA assessment tool.
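The abstract describes modeling the problem as a partially-observable dynamical system and inferring the 3D posture with a particle filter that fuses two observation sources. A minimal generic sketch of one such filter step is below; the state representation, noise levels, and the `project`/`forward_kin` observation models are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, obs_2d, obs_robot,
                         project, forward_kin, motion_noise=0.05,
                         sigma_2d=0.2, sigma_robot=0.1):
    """One predict/update/resample cycle fusing two observation sources.

    particles: (N, D) array of posture-state hypotheses (hypothetical)
    obs_2d:    observed 2D keypoints (e.g. from a camera-based detector)
    obs_robot: observed end-effector position from the robot's trajectory
    project / forward_kin: user-supplied observation models (assumptions)
    """
    # Predict: random-walk motion model over the posture state.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)

    # Update: multiply likelihoods from both sensors (independence assumed).
    err_2d = np.linalg.norm(project(particles) - obs_2d, axis=-1)
    err_rb = np.linalg.norm(forward_kin(particles) - obs_robot, axis=-1)
    weights = weights * np.exp(-0.5 * (err_2d / sigma_2d) ** 2
                               - 0.5 * (err_rb / sigma_robot) ** 2)
    weights = weights / weights.sum()

    # Resample (systematic) when the effective sample size collapses.
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        positions = (rng.random() + np.arange(n)) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

Multiplying the two likelihoods is what lets the camera observation disambiguate the robot-trajectory observation (and vice versa), which is the fusion idea the abstract points to.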
Related papers
- Kinematics-based 3D Human-Object Interaction Reconstruction from Single View [10.684643503514849]
Existing methods predict body poses by relying merely on network training on
some indoor datasets.
We propose a kinematics-based method that can accurately drive the joints of
the human body to the human-object contact regions.
arXiv Detail & Related papers (2024-07-19T05:44:35Z)
- Hybrid 3D Human Pose Estimation with Monocular Video and Sparse IMUs [15.017274891943162]
Temporal 3D human pose estimation from monocular videos is a challenging task in human-centered computer vision.
Inertial sensors have been introduced to provide a complementary source of
information.
It remains challenging to integrate heterogeneous sensor data to produce
physically plausible 3D human poses.
arXiv Detail & Related papers (2024-04-27T09:02:42Z)
- Exploring 3D Human Pose Estimation and Forecasting from the Robot's Perspective: The HARPER Dataset [52.22758311559]
We introduce HARPER, a novel dataset for 3D body pose estimation and
forecasting in dyadic interactions between users and Spot.
The key novelty is the focus on the robot's perspective, i.e., on the data
captured by the robot's own sensors.
The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users.
arXiv Detail & Related papers (2024-03-21T14:53:50Z)
- External Camera-based Mobile Robot Pose Estimation for Collaborative Perception with Smart Edge Sensors [22.5939915003931]
We present an approach for estimating a mobile robot's pose w.r.t. the allocentric coordinates of a network of static cameras using multi-view RGB images.
The images are processed online, locally on smart edge sensors by deep neural networks to detect the robot.
With the robot's pose precisely estimated, its observations can be fused into the allocentric scene model.
arXiv Detail & Related papers (2023-03-07T11:03:33Z)
- Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation [51.00725889172323]
We propose a Pose-Oriented Transformer (POT) with uncertainty guided refinement for 3D human pose estimation.
We first develop a novel pose-oriented self-attention mechanism and a
distance-related position embedding for POT to explicitly exploit the human
skeleton topology.
We present an Uncertainty-Guided Refinement Network (UGRN) to refine pose predictions from POT, especially for the difficult joints.
arXiv Detail & Related papers (2023-02-15T00:22:02Z)
- Human keypoint detection for close proximity human-robot interaction [29.99153271571971]
We study the performance of state-of-the-art human keypoint detectors in the context of close proximity human-robot interaction.
The best performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both had difficulty with finger detection.
We propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework providing the most accurate and robust detection.
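The combined framework described above merges a whole-body detector's output with a hand-specialized detector's output. A minimal, hedged sketch of such a merge is below; the joint names, the `(x, y, confidence)` tuple format, and the confidence-based preference rule are illustrative assumptions, not the paper's actual interface.

```python
def fuse_keypoints(body_kps, hand_kps, hand_joint_names):
    """Merge keypoints from a whole-body detector with those from a
    hand-specialized detector, preferring the hand detector on hand
    joints whenever it is at least as confident.

    body_kps / hand_kps: dicts mapping joint name -> (x, y, confidence).
    hand_joint_names: the joints the hand detector is trusted for.
    """
    fused = dict(body_kps)  # start from the body detector's estimates
    for name in hand_joint_names:
        hand_est = hand_kps.get(name)
        if hand_est is None:
            continue  # hand detector did not report this joint
        body_conf = fused[name][2] if name in fused else 0.0
        if hand_est[2] >= body_conf:
            fused[name] = hand_est
    return fused
```

Keeping the body detector's estimate as a fallback is what makes the fusion robust when the hand-specialized detector loses track of the fingers.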
arXiv Detail & Related papers (2022-07-15T20:33:29Z)
- Ergonomically Intelligent Physical Human-Robot Interaction: Postural Estimation, Assessment, and Optimization [3.681892767755111]
We show that we can estimate human posture solely from the trajectory of the interacting robot.
We propose DULA, a differentiable ergonomics model, and use it in gradient-free postural optimization for physical human-robot interaction tasks.
arXiv Detail & Related papers (2021-08-12T21:13:06Z)
- Neural Monocular 3D Human Motion Capture with Physical Awareness [76.55971509794598]
We present a new trainable system for physically plausible markerless 3D human motion capture.
Unlike most neural methods for human motion capture, our approach is aware of physical and environmental constraints.
It produces smooth and physically principled 3D motions at an interactive
frame rate in a wide variety of challenging scenes.
arXiv Detail & Related papers (2021-05-03T17:57:07Z)
- Human POSEitioning System (HPS): 3D Human Pose Estimation and Self-localization in Large Scenes from Body-Mounted Sensors [71.29186299435423]
We introduce the Human POSEitioning System (HPS), a method to recover the full
3D pose of a human registered with a 3D scan of the surrounding environment.
We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift.
HPS could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera.
arXiv Detail & Related papers (2021-03-31T17:58:31Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar
to the standard active learning approach while reducing the executed movement
by about half.
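The cost-sensitive selection described above trades exploration value against movement cost. A minimal sketch of one such selection rule is below; the scoring function, the Euclidean movement cost, and the `movement_weight` parameter are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def select_next_configuration(candidates, current, uncertainty,
                              movement_weight=0.5):
    """Score each candidate joint configuration by its expected
    information gain (a per-candidate uncertainty estimate supplied by
    the learner) minus a movement cost, and return the best index.
    """
    candidates = np.asarray(candidates, dtype=float)
    # Movement cost: distance from the current joint configuration.
    cost = np.linalg.norm(candidates - np.asarray(current, dtype=float), axis=1)
    score = np.asarray(uncertainty, dtype=float) - movement_weight * cost
    return int(np.argmax(score))
```

Under this rule a slightly less informative configuration can win if it is much cheaper to reach, which is how the movement reduction reported above can arise without sacrificing accuracy.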
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Perceiving Humans: from Monocular 3D Localization to Social Distancing [93.03056743850141]
We present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image.
We show that it is possible to rethink the concept of "social distancing" as a form of social interaction in contrast to a simple location-based rule.
arXiv Detail & Related papers (2020-09-01T10:12:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not
responsible for any consequences of its use.