Composite Motion Learning with Task Control
- URL: http://arxiv.org/abs/2305.03286v1
- Date: Fri, 5 May 2023 05:02:41 GMT
- Title: Composite Motion Learning with Task Control
- Authors: Pei Xu, Xiumin Shang, Victor Zordan, Ioannis Karamouzas
- Abstract summary: We present a deep learning method for composite and task-driven motion control for physically simulated characters.
We learn decoupled motions for specific body parts from multiple reference motions simultaneously and directly, using multiple discriminators in a GAN-like setup.
We show the applicability of our approach on a variety of challenging multi-objective tasks involving both composite motion imitation and multiple goal-directed control.
- Score: 0.6882042556551609
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a deep learning method for composite and task-driven motion
control for physically simulated characters. In contrast to existing
data-driven approaches using reinforcement learning that imitate full-body
motions, we learn decoupled motions for specific body parts from multiple
reference motions simultaneously and directly, using multiple discriminators
in a GAN-like setup. No manual work is needed to produce composite reference
motions for learning; instead, the control policy discovers on its own how
the motions can be combined into composite behaviors. We further account for
multiple task-specific rewards and train
a single, multi-objective control policy. To this end, we propose a novel
framework for multi-objective learning that adaptively balances the learning of
disparate motions from multiple sources and multiple goal-directed control
objectives. In addition, as composite motions are typically augmentations of
simpler behaviors, we introduce a sample-efficient method for training
composite control policies in an incremental manner, where we reuse a
pre-trained policy as the meta policy and train a cooperative policy that
adapts the meta one for new composite tasks. We show the applicability of our
approach on a variety of challenging multi-objective tasks involving both
composite motion imitation and multiple goal-directed control.
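To make the multi-discriminator idea concrete, here is a minimal sketch (not the authors' released code) of how per-body-part discriminators could supply GAN-like imitation rewards that are then blended with goal-directed task rewards. The network shapes, the reward transform, and the simple lag-based weight adaptation are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class PartDiscriminator(nn.Module):
    """Scores whether a state transition for one body part resembles
    its reference motion (GAN-like setup); one instance per body part."""
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))

def imitation_reward(disc: PartDiscriminator, s, s_next) -> torch.Tensor:
    # A bounded least-squares-GAN style reward; the exact transform used
    # in the paper may differ.
    with torch.no_grad():
        score = disc(s, s_next)
    return torch.clamp(1.0 - 0.25 * (score - 1.0) ** 2, min=0.0)

# Decoupled imitation: e.g., the legs follow a locomotion clip while the
# arms follow a separate gesturing clip.
parts = ["legs", "arms"]
discs = {p: PartDiscriminator(obs_dim=32) for p in parts}

def combined_reward(part_obs, part_obs_next, task_rewards, weights):
    """Blend per-part imitation rewards with goal-directed task rewards
    under per-objective weights that are adapted during training."""
    total = 0.0
    for p in parts:
        r_p = imitation_reward(discs[p], part_obs[p], part_obs_next[p])
        total = total + weights[p] * r_p.mean()
    for name, r_task in task_rewards.items():
        total = total + weights[name] * r_task
    return total

def adapt_weights(weights, running_returns, lr=0.05):
    # Illustrative balancing rule only: upweight objectives whose recent
    # return lags behind the average, so no single objective dominates.
    mean_ret = sum(running_returns.values()) / len(running_returns)
    for k in weights:
        weights[k] = max(0.0, weights[k] + lr * (mean_ret - running_returns[k]))
    norm = sum(weights.values()) or 1.0
    for k in weights:
        weights[k] /= norm
    return weights
```

The key structural point is that each discriminator only ever sees the observation slice of its own body part, which is what lets a single policy imitate several reference motions at once without hand-made composite clips.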
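The incremental scheme for composite tasks can be sketched in the same spirit: a pre-trained policy is reused as a frozen meta policy, and a smaller cooperative policy learns to adapt its output for the new composite task. The gated additive combination below is an assumption for illustration, not necessarily the paper's exact operator.

```python
import torch

def composite_action(meta_pi, coop_pi, obs: torch.Tensor) -> torch.Tensor:
    # Reuse the frozen meta policy and let the cooperative policy decide,
    # per action dimension, how much to keep and how much to override.
    with torch.no_grad():
        a_meta = meta_pi(obs)       # pre-trained behavior, kept frozen
    delta, gate = coop_pi(obs)      # gate in [0, 1], e.g. via a sigmoid head
    return gate * a_meta + (1.0 - gate) * delta
```

Because only the cooperative policy is trained, this kind of reuse is what makes learning a new composite behavior more sample-efficient than training from scratch.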
Related papers
- Guided Reinforcement Learning for Robust Multi-Contact Loco-Manipulation [12.377289165111028]
Reinforcement learning (RL) often necessitates a meticulous Markov Decision Process (MDP) design tailored to each task.
This work proposes a systematic approach to behavior synthesis and control for multi-contact loco-manipulation tasks.
We define a task-independent MDP to train RL policies using only a single demonstration per task generated from a model-based trajectory.
arXiv Detail & Related papers (2024-10-17T17:46:27Z)
- CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces [9.578169216444813]
This paper proposes an approach to coordinating multi-robot manipulation through learned latent action spaces that are shared across different agents.
We validate our method in simulated multi-robot manipulation tasks and demonstrate improvement over previous baselines in terms of sample efficiency and learning performance.
arXiv Detail & Related papers (2022-11-28T23:20:47Z)
- Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors [72.62423312645953]
Humans intuitively solve tasks in versatile ways, varying their behavior both in trajectory-level planning and in individual steps.
Current Imitation Learning algorithms often only consider unimodal expert demonstrations and act in a state-action-based setting.
Instead, we combine a mixture of movement primitives with a distribution matching objective to learn versatile behaviors that match the expert's behavior and versatility.
arXiv Detail & Related papers (2022-10-17T16:42:59Z)
- Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division [60.232542918414985]
Multi-task learning often suffers from negative transfer, sharing information that should be task-specific.
The proposed approach mitigates this by using proto-policies as modules that divide the tasks into simple sub-behaviours which can be shared.
We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
arXiv Detail & Related papers (2022-03-28T15:53:17Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
We first propose a new benchmark suite aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
We then propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- A GAN-Like Approach for Physics-Based Imitation Learning and Interactive Character Control [2.2082422928825136]
We present a simple and intuitive approach for interactive control of physically simulated characters.
Our work builds upon generative adversarial networks (GAN) and reinforcement learning.
We highlight the applicability of our approach in a range of imitation and interactive control tasks.
arXiv Detail & Related papers (2021-05-21T00:03:29Z)
- Learning Multi-Arm Manipulation Through Collaborative Teleoperation [63.35924708783826]
Imitation Learning (IL) is a powerful paradigm to teach robots to perform manipulation tasks.
Many real-world tasks require multiple arms, such as lifting a heavy object or assembling a desk.
We present Multi-Arm RoboTurk (MART), a multi-user data collection platform that allows multiple remote users to simultaneously teleoperate a set of robotic arms.
arXiv Detail & Related papers (2020-12-12T05:43:43Z)
- Learning to Compose Hierarchical Object-Centric Controllers for Robotic Manipulation [26.24940293693809]
We propose using reinforcement learning to compose hierarchical object-centric controllers for manipulation tasks.
Experiments in both simulation and real world show how the proposed approach leads to improved sample efficiency, zero-shot generalization, and simulation-to-reality transfer without fine-tuning.
arXiv Detail & Related papers (2020-11-09T18:38:29Z)
- Behavior Priors for Efficient Reinforcement Learning [97.81587970962232]
We consider how information and architectural constraints can be combined with ideas from the probabilistic modeling literature to learn behavior priors.
We discuss how such latent variable formulations connect to related work on hierarchical reinforcement learning (HRL) and mutual information and curiosity based objectives.
We demonstrate the effectiveness of our framework by applying it to a range of simulated continuous control domains.
arXiv Detail & Related papers (2020-10-27T13:17:18Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating strong potential for transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
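As a rough illustration of the subgoal-plus-motion-generator split described in the last entry above, here is a minimal sketch; the `plan_to` interface and the gym-style environment loop are assumptions for illustration, not ReLMoGen's actual API.

```python
def relmogen_style_step(subgoal_policy, motion_generator, env, obs):
    # The learned policy proposes a subgoal; a motion generator plans
    # and executes the low-level motion needed to reach it.
    subgoal = subgoal_policy(obs)
    trajectory = motion_generator.plan_to(obs, subgoal)  # hypothetical API
    for action in trajectory:
        obs, reward, terminated, truncated, info = env.step(action)
        if terminated or truncated:
            break
    return obs
```

Decoupling the policy from the low-level motion generation is what allows swapping motion generators at test time without retraining the subgoal policy.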