Learning Speed-Adaptive Walking Agent Using Imitation Learning with Physics-Informed Simulation
- URL: http://arxiv.org/abs/2412.03949v1
- Date: Thu, 05 Dec 2024 07:55:58 GMT
- Title: Learning Speed-Adaptive Walking Agent Using Imitation Learning with Physics-Informed Simulation
- Authors: Yi-Hung Chiu, Ung Hee Lee, Changseob Song, Manaen Hu, Inseung Kang
- Abstract summary: We create a skeletal humanoid agent capable of adapting to varying walking speeds while maintaining biomechanically realistic motions.
The framework combines a synthetic data generator, which produces biomechanically plausible gait kinematics from open-source biomechanics data, and a training system that uses adversarial imitation learning to train the agent's walking policy.
- Abstract: Virtual models of human gait, or digital twins, offer a promising solution for studying mobility without the need for labor-intensive data collection. However, challenges such as the sim-to-real gap and limited adaptability to diverse walking conditions persist. To address these, we developed and validated a framework to create a skeletal humanoid agent capable of adapting to varying walking speeds while maintaining biomechanically realistic motions. The framework combines a synthetic data generator, which produces biomechanically plausible gait kinematics from open-source biomechanics data, and a training system that uses adversarial imitation learning to train the agent's walking policy. We conducted comprehensive analyses comparing the agent's kinematics, synthetic data, and the original biomechanics dataset. The agent achieved a root mean square error of 5.24 ± 0.09 degrees at varying speeds compared to ground-truth kinematics data, demonstrating its adaptability. This work represents a significant step toward developing a digital twin of human locomotion, with potential applications in biomechanics research, exoskeleton design, and rehabilitation.
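The abstract describes the training system only at a high level. As a concrete illustration, the sketch below shows a generic GAIL/AMP-style adversarial imitation setup, in which a discriminator scores short kinematic transitions against reference gait data and its output is converted into a style reward for the walking policy. This is a minimal sketch under assumed conventions, not the authors' implementation: the names (GaitDiscriminator, style_reward, rmse_deg), the transition-pair input format, and the logistic loss are all illustrative assumptions.

```python
# Minimal sketch (PyTorch) of adversarial imitation for a walking policy.
# Assumptions: observations are joint-kinematics vectors, and the
# discriminator sees (s_t, s_{t+1}) transition pairs, as in AMP-style training.
import torch
import torch.nn as nn

class GaitDiscriminator(nn.Module):
    """Scores transitions: high logits for reference gait, low for the policy."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1)).squeeze(-1)

def discriminator_loss(disc, ref_s, ref_s_next, pol_s, pol_s_next):
    """Logistic loss: reference transitions labeled 1, policy transitions 0."""
    bce = nn.BCEWithLogitsLoss()
    real = disc(ref_s, ref_s_next)
    fake = disc(pol_s, pol_s_next)
    return bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))

def style_reward(disc, s, s_next):
    """GAIL-style reward: large when the discriminator believes the policy's
    transition came from the reference gait distribution."""
    with torch.no_grad():
        d = torch.sigmoid(disc(s, s_next))
    return -torch.log(torch.clamp(1.0 - d, min=1e-4))

def rmse_deg(q_agent: torch.Tensor, q_ref: torch.Tensor) -> float:
    """Root-mean-square joint-angle error (in degrees), the metric reported above."""
    return torch.sqrt(torch.mean((q_agent - q_ref) ** 2)).item()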
Related papers
- Muscles in Time: Learning to Understand Human Motion by Simulating Muscle Activations [64.98299559470503]
Muscles in Time (MinT) is a large-scale synthetic muscle activation dataset.
It contains over nine hours of simulation data covering 227 subjects and 402 simulated muscle strands.
We show results on neural network-based muscle activation estimation from human pose sequences.
arXiv Detail & Related papers (2024-10-31T18:28:53Z)
- MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints [50.61346764110482]
We integrate a musculoskeletal system with a learnable parametric hand model, MANO, to create MS-MANO.
This model emulates the dynamics of muscles and tendons to drive the skeletal system, imposing physiologically realistic constraints on the resulting torque trajectories.
We also propose a simulation-in-the-loop pose refinement framework, BioPR, that refines the initial estimated pose through a multi-layer perceptron network.
arXiv Detail & Related papers (2024-04-16T02:18:18Z)
- 3D Kinematics Estimation from Video with a Biomechanical Model and Synthetic Training Data [4.130944152992895]
We propose a novel biomechanics-aware network that directly outputs 3D kinematics from two input views.
Our experiments demonstrate that the proposed approach, only trained on synthetic data, outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-20T17:33:40Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Motion Capture Benchmark of Real Industrial Tasks and Traditional Crafts for Human Movement Analysis [0.0]
This paper presents seven datasets recorded using inertial-based motion capture.
The datasets contain professional gestures performed in situ, under real working conditions, by industrial operators and skilled craftsmen.
arXiv Detail & Related papers (2023-04-03T10:29:24Z)
- Skeleton2Humanoid: Animating Simulated Characters for Physically-plausible Motion In-betweening [59.88594294676711]
Modern deep learning based motion synthesis approaches barely consider the physical plausibility of synthesized motions.
We propose a system, Skeleton2Humanoid, which performs physics-oriented motion correction at test time.
Experiments on the challenging LaFAN1 dataset show our system can outperform prior methods significantly in terms of both physical plausibility and accuracy.
arXiv Detail & Related papers (2022-10-09T16:15:34Z)
- OstrichRL: A Musculoskeletal Ostrich Simulation to Study Bio-mechanical Locomotion [8.849771760994273]
We release a 3D musculoskeletal simulation of an ostrich based on the MuJoCo simulator.
The model is based on CT scans and dissections used to gather actual muscle data.
We also provide a set of reinforcement learning tasks, including reference motion tracking and a reaching task with the neck (a sketch of such a tracking reward appears after this list).
arXiv Detail & Related papers (2021-12-11T19:58:11Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap by learning enhanced transferable features, exploiting temporal cues in videos and the inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Predictive Modeling of Periodic Behavior for Human-Robot Symbiotic Walking [13.68799310875662]
We extend Interaction Primitives to periodic movement regimes, i.e., walking.
We show that this model is particularly well-suited for learning data-driven, customized models of human walking.
We also demonstrate how the same framework can be used to learn controllers for a robotic prosthesis.
arXiv Detail & Related papers (2020-05-27T03:30:48Z)
- RoboTHOR: An Open Simulation-to-Real Embodied AI Platform [56.50243383294621]
We introduce RoboTHOR to democratize research in interactive and embodied visual AI.
We show there exists a significant gap between the performance of models trained in simulation when tested in simulation versus in their carefully constructed physical analogs.
arXiv Detail & Related papers (2020-04-14T20:52:49Z)
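As referenced in the OstrichRL summary above, reference motion tracking tasks typically use a reward that compares simulated joint angles against reference kinematics at each timestep. Below is a minimal sketch assuming a DeepMimic-style exponentiated squared-error term; the pose representation and the scale constant k are illustrative assumptions, not details from any of the listed papers.

```python
import numpy as np

def tracking_reward(q_sim: np.ndarray, q_ref: np.ndarray, k: float = 2.0) -> float:
    """Exponentiated negative pose error: 1.0 for a perfect match,
    decaying toward 0 as the simulated pose drifts from the reference."""
    err = float(np.sum((q_sim - q_ref) ** 2))
    return float(np.exp(-k * err))
```

Rewards of this form are common across the motion-imitation papers listed here; the scale k trades off tracking precision against ease of exploration.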
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.