Adaptive Tracking of a Single-Rigid-Body Character in Various
Environments
- URL: http://arxiv.org/abs/2308.07491v3
- Date: Sun, 28 Jan 2024 14:07:01 GMT
- Title: Adaptive Tracking of a Single-Rigid-Body Character in Various
Environments
- Authors: Taesoo Kwon, Taehong Gu, Jaewon Ahn, Yoonsang Lee
- Abstract summary: We propose a deep reinforcement learning method based on the simulation of a single-rigid-body character.
Using the centroidal dynamics model (CDM) to express the full-body character as a single rigid body (SRB) and training a policy to track a reference motion, we can obtain a policy capable of adapting to various unobserved environmental changes.
We demonstrate that our policy, efficiently trained within 30 minutes on an ultraportable laptop, has the ability to cope with environments that have not been experienced during learning.
- Score: 2.048226951354646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the introduction of DeepMimic [Peng et al. 2018], subsequent research
has focused on expanding the repertoire of simulated motions across various
scenarios. In this study, we propose an alternative approach to this goal: a
deep reinforcement learning method based on the simulation of a
single-rigid-body character. Using the centroidal dynamics model (CDM) to
express the full-body character as a single rigid body (SRB) and training a
policy to track a reference motion, we can obtain a policy that is capable of
adapting to various unobserved environmental changes and controller transitions
without requiring any additional learning. Owing to the reduced dimensionality
of the state and action spaces, the learning process is sample-efficient. The final
full-body motion is kinematically generated in a physically plausible way,
based on the state of the simulated SRB character. The SRB simulation is
formulated as a quadratic programming (QP) problem, and the policy outputs an
action that allows the SRB character to follow the reference motion. We
demonstrate that our policy, efficiently trained within 30 minutes on an
ultraportable laptop, has the ability to cope with environments that have not
been experienced during learning, such as running on uneven terrain or pushing
a box, and transitions between learned policies, without any additional
learning.
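To make the QP formulation above concrete, below is a minimal sketch of one SRB tracking step: given desired center-of-mass and angular accelerations (standing in for the policy's action), a QP solves for friction-cone-constrained contact forces that best realize them. The simplified Newton-Euler model, the cvxpy formulation, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a QP-based SRB tracking step. The simplified
# Newton-Euler model and all names here are illustrative assumptions.
import numpy as np
import cvxpy as cp

def skew(r):
    """Skew-symmetric matrix such that skew(r) @ f == np.cross(r, f)."""
    return np.array([[0.0, -r[2], r[1]],
                     [r[2], 0.0, -r[0]],
                     [-r[1], r[0], 0.0]])

def srb_tracking_step(mass, inertia, com, contacts,
                      com_acc_des, ang_acc_des, mu=0.7):
    """Solve for contact forces realizing the desired SRB accelerations.

    mass: scalar; inertia: (3, 3) world-frame inertia tensor;
    com: (3,) center of mass; contacts: (k, 3) contact positions;
    com_acc_des, ang_acc_des: (3,) desired accelerations (the policy action).
    Returns (k, 3) contact forces; y-up convention.
    """
    k = contacts.shape[0]
    f = cp.Variable((k, 3))                        # one 3-D force per contact
    gravity = np.array([0.0, -9.81 * mass, 0.0])

    # Newton (linear) and Euler (angular) residuals of the single rigid
    # body; the gyroscopic term is omitted for brevity.
    lin_res = cp.sum(f, axis=0) + gravity - mass * com_acc_des
    ang_res = (sum(skew(contacts[i] - com) @ f[i] for i in range(k))
               - inertia @ ang_acc_des)

    # Linearized friction cone: non-negative normal, bounded tangential forces.
    cone = [f[:, 1] >= 0,
            cp.abs(f[:, 0]) <= mu * f[:, 1],
            cp.abs(f[:, 2]) <= mu * f[:, 1]]

    # Track the desired accelerations as closely as the cone allows; a small
    # regularizer on the forces keeps the solution unique.
    cost = (cp.sum_squares(lin_res) + 10.0 * cp.sum_squares(ang_res)
            + 1e-3 * cp.sum_squares(f))
    cp.Problem(cp.Minimize(cost), cone).solve()
    return f.value

# Example: a 60 kg SRB on two ground contacts, asked to accelerate upward.
forces = srb_tracking_step(
    mass=60.0, inertia=np.eye(3) * 10.0,
    com=np.array([0.0, 0.9, 0.0]),
    contacts=np.array([[-0.1, 0.0, 0.0], [0.1, 0.0, 0.0]]),
    com_acc_des=np.array([0.0, 2.0, 0.0]),
    ang_acc_des=np.zeros(3))
print(forces)  # each contact carries roughly half the weight plus the push
```

A solve of this kind would run once per control step; integrating the resulting forces forward yields the SRB state from which, as the abstract describes, the full-body motion is kinematically generated.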
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746]
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
Our proposed framework brings the differentiable simulator into an end-to-end training loop, where gradients of environment dynamics serve as a useful prior to help the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as more intuitive, human-like handling overall (a minimal sketch of the APG idea follows this entry).
arXiv Detail & Related papers (2024-09-12T11:50:06Z)
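As a companion to the APG entry above, here is a minimal, hypothetical sketch of analytic policy gradients through a differentiable simulator: a rollout of known dynamics is differentiated end to end with respect to the policy parameters. The point-mass dynamics, the linear policy, and the use of PyTorch are assumptions for illustration; the paper's AV setup is far richer.

```python
# Hypothetical minimal APG sketch: backpropagate a rollout cost through
# differentiable dynamics. Point-mass model and linear policy are assumptions.
import torch

def rollout_cost(w, steps=50, dt=0.1):
    """Roll out point-mass dynamics under a linear feedback policy w."""
    pos = torch.tensor(1.0)
    vel = torch.tensor(0.0)
    cost = torch.tensor(0.0)
    for _ in range(steps):
        accel = w[0] * pos + w[1] * vel     # linear policy on the state
        vel = vel + accel * dt              # differentiable dynamics step
        pos = pos + vel * dt
        cost = cost + pos ** 2 + 0.01 * accel ** 2
    return cost

w = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = rollout_cost(w)
    loss.backward()                         # exact gradient through dynamics
    opt.step()
print(w.detach())                           # trends toward stabilizing gains
```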
- Robust Visual Sim-to-Real Transfer for Robotic Manipulation [79.66851068682779]
Learning visuomotor policies in simulation is much safer and cheaper than in the real world.
However, due to discrepancies between the simulated and real data, simulator-trained policies often fail when transferred to real robots.
One common approach to bridging the visual sim-to-real domain gap is domain randomization (DR).
arXiv Detail & Related papers (2023-07-28T05:47:24Z)
- Zero-shot Sim2Real Adaptation Across Environments [45.44896435487879]
We propose a Reverse Action Transformation (RAT) policy that learns to imitate simulated policies in the real world.
RAT can then be deployed on top of a Universal Policy Network to achieve zero-shot adaptation to new environments.
arXiv Detail & Related papers (2023-02-08T11:59:07Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- A Survey on Reinforcement Learning Methods in Character Animation [22.3342752080749]
Reinforcement Learning is an area of Machine Learning focused on how agents can be trained to make sequential decisions.
This paper surveys the modern Deep Reinforcement Learning methods and discusses their possible applications in Character Animation.
arXiv Detail & Related papers (2022-03-07T23:39:00Z)
- Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization [63.75188254377202]
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to discrepancies between source and target environments.
We propose State-Conservative Policy Optimization (SCPO), a model-free actor-critic algorithm that learns robust policies without modeling the disturbance in advance.
Experiments in several robot control tasks demonstrate that SCPO learns robust policies against the disturbance in transition dynamics.
arXiv Detail & Related papers (2021-12-20T13:13:05Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably predict the outcomes of possible actions on a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Error-Aware Policy Learning: Zero-Shot Generalization in Partially Observable Dynamic Environments [18.8481771211768]
We introduce a novel approach to tackle such a sim-to-real problem by developing policies capable of adapting to new environments.
Key to our approach is an error-aware policy (EAP) that is explicitly made aware of the effect of unobservable factors during training.
We show that a trained EAP for a hip-torque assistive device can be transferred to different human agents with unseen biomechanical characteristics.
arXiv Detail & Related papers (2021-03-13T15:36:44Z)
- Deep Reinforcement Learning amidst Lifelong Non-Stationarity [67.24635298387624]
We show that an off-policy RL algorithm can reason about and tackle lifelong non-stationarity.
Our method leverages latent variable models to learn a representation of the environment from current and past experiences.
We also introduce several simulation environments that exhibit lifelong non-stationarity, and empirically find that our approach substantially outperforms approaches that do not reason about environment shift.
arXiv Detail & Related papers (2020-06-18T17:34:50Z)
- Sim2Real Transfer for Reinforcement Learning without Dynamics Randomization [0.0]
We show how to use the Operational Space Control framework (OSC) under joint and Cartesian constraints for reinforcement learning in Cartesian space.
Our method learns quickly with adjustable degrees of freedom, and the resulting policies transfer without additional dynamics randomization.
arXiv Detail & Related papers (2020-02-19T11:10:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.