Robust Contact State Estimation in Humanoid Walking Gaits
- URL: http://arxiv.org/abs/2208.00278v1
- Date: Sat, 30 Jul 2022 17:19:47 GMT
- Title: Robust Contact State Estimation in Humanoid Walking Gaits
- Authors: Stylianos Piperakis, Michael Maravgakis, Dimitrios Kanoulas, and Panos Trahanias
- Abstract summary: We propose a deep learning framework that provides a unified approach to the problem of leg contact detection in humanoid robot walking gaits.
Our formulation accurately and robustly estimates the contact state probability for each leg.
Our implementation is offered as an open-source ROS/Python package, coined Legged Contact Detection (LCD).
- Score: 3.1866319932300953
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this article, we propose a deep learning framework that provides a unified
approach to the problem of leg contact detection in humanoid robot walking
gaits. Our formulation accurately and robustly estimates the contact state
probability for each leg (i.e., stable or slip/no contact). The
proposed framework employs solely proprioceptive sensing and although it relies
on simulated ground-truth contact data for the classification process, we
demonstrate that it generalizes across varying friction surfaces and different
legged robotic platforms and, at the same time, is readily transferred from
simulation to practice. The framework is quantitatively and qualitatively
assessed in simulation via the use of ground-truth contact data and is
contrasted against state-of-the-art methods with an ATLAS, a NAO, and a TALOS
humanoid robot. Furthermore, its efficacy is demonstrated in base estimation
with a real TALOS humanoid. To reinforce further research endeavors, our
implementation is offered as an open-source ROS/Python package, coined Legged
Contact Detection (LCD).
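For intuition, here is a minimal, hypothetical sketch of what a per-leg contact-state classifier over proprioceptive features could look like in Python/PyTorch. The feature layout, network architecture, and decision threshold below are illustrative assumptions, not the actual LCD implementation; consult the open-source package for the authors' design.

```python
# Minimal sketch (assumptions, not the LCD architecture): a small MLP that
# maps a per-leg proprioceptive feature vector to a stable-contact probability.
import torch
import torch.nn as nn

class ContactStateNet(nn.Module):
    """Maps a per-leg proprioceptive feature vector to P(stable contact)."""
    def __init__(self, in_dim: int = 9, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one logit: stable vs. slip/no contact
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid turns the logit into a contact probability in [0, 1].
        return torch.sigmoid(self.net(x))

# Hypothetical feature layout per leg: [Fx, Fy, Fz, ax, ay, az, wx, wy, wz]
# (3-axis foot force plus 6-axis base IMU); the real LCD inputs may differ.
model = ContactStateNet()
features = torch.randn(4, 9)   # a batch of 4 leg readings
p_contact = model(features)    # shape (4, 1): probability of stable contact
stable = p_contact > 0.5       # threshold into a binary contact state
```

In the paper's setting, such per-leg probabilities feed downstream consumers such as base state estimation, which is how the framework's efficacy is demonstrated on the real TALOS humanoid.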
Related papers
- Learning Speed-Adaptive Walking Agent Using Imitation Learning with Physics-Informed Simulation [0.0]
We create a skeletal humanoid agent capable of adapting to varying walking speeds while maintaining biomechanically realistic motions.
The framework combines a synthetic data generator, which produces biomechanically plausible gait kinematics from open-source biomechanics data, and a training system that uses adversarial imitation learning to train the agent's walking policy.
arXiv Detail & Related papers (2024-12-05T07:55:58Z)
- Chatting Up Attachment: Using LLMs to Predict Adult Bonds [0.0]
We use GPT-4 and Claude 3 Opus to create agents that simulate adults with varying profiles, childhood memories, and attachment styles.
We evaluate our models using a transcript dataset from 9 humans who underwent the same interview protocol, analyzed and labeled by mental health professionals.
Our findings indicate that training the models using only synthetic data achieves performance comparable to training the models on human data.
arXiv Detail & Related papers (2024-08-31T04:29:19Z)
- RPMArt: Towards Robust Perception and Manipulation for Articulated Objects [56.73978941406907]
We propose a framework towards Robust Perception and Manipulation for Articulated Objects (RPMArt).
RPMArt learns to estimate the articulation parameters and manipulate the articulation part from the noisy point cloud.
We introduce an articulation-aware classification scheme to enhance its ability for sim-to-real transfer.
arXiv Detail & Related papers (2024-03-24T05:55:39Z)
- SynH2R: Synthesizing Hand-Object Motions for Learning Human-to-Robot Handovers [35.386426373890615]
Vision-based human-to-robot handover is an important and challenging task in human-robot interaction.
We introduce a framework that can generate plausible human grasping motions suitable for training the robot.
This allows us to generate synthetic training and testing data with 100x more objects than previous work.
arXiv Detail & Related papers (2023-11-09T18:57:02Z)
- Enhanced Human-Robot Collaboration using Constrained Probabilistic Human-Motion Prediction [5.501477817904299]
We propose a novel human motion prediction framework that incorporates human joint constraints and scene constraints.
It is tested on a human arm kinematic model and implemented on a human-robot collaborative setup with a UR5 robot arm.
arXiv Detail & Related papers (2023-10-05T05:12:14Z)
- Tactile Estimation of Extrinsic Contact Patch for Stable Placement [64.06243248525823]
We present the design of feedback skills for robots that must learn to stack complex-shaped objects on top of each other.
We estimate the contact patch between a grasped object and its environment using force and tactile observations.
arXiv Detail & Related papers (2023-09-25T21:51:48Z)
- Towards Precise Model-free Robotic Grasping with Sim-to-Real Transfer Learning [11.470950882435927]
We present an end-to-end robotic grasping network.
In physical robotic experiments, our grasping framework grasped single known objects and novel complex-shaped household objects with a success rate of 90.91%.
The proposed grasping framework outperformed two state-of-the-art methods in both known and unknown object robotic grasping.
arXiv Detail & Related papers (2023-01-28T16:57:19Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Skeleton-Based Mutually Assisted Interacted Object Localization and Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features, using temporal cues in videos and inherent correlations in multi-modal data towards recognizing gestures.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)