InsActor: Instruction-driven Physics-based Characters
- URL: http://arxiv.org/abs/2312.17135v1
- Date: Thu, 28 Dec 2023 17:10:31 GMT
- Title: InsActor: Instruction-driven Physics-based Characters
- Authors: Jiawei Ren, Mingyuan Zhang, Cunjun Yu, Xiao Ma, Liang Pan, Ziwei Liu
- Abstract summary: In this paper, we present a principled generative framework that produces instruction-driven animations of physics-based characters.
Our framework empowers InsActor to capture complex relationships between high-level human instructions and character motions.
InsActor achieves state-of-the-art results on various tasks, including instruction-driven motion generation and instruction-driven waypoint heading.
- Score: 65.4702927454252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating animation of physics-based characters with intuitive control has
long been a desirable task with numerous applications. However, generating
physically simulated animations that reflect high-level human instructions
remains a difficult problem due to the complexity of physical environments and
the richness of human language. In this paper, we present InsActor, a
principled generative framework that leverages recent advancements in
diffusion-based human motion models to produce instruction-driven animations of
physics-based characters. Our framework empowers InsActor to capture complex
relationships between high-level human instructions and character motions by
employing diffusion policies for flexibly conditioned motion planning. To
overcome invalid states and infeasible state transitions in planned motions,
InsActor discovers low-level skills and maps plans to latent skill sequences in
a compact latent space. Extensive experiments demonstrate that InsActor
achieves state-of-the-art results on various tasks, including
instruction-driven motion generation and instruction-driven waypoint heading.
Notably, the ability of InsActor to generate physically simulated animations
using high-level human instructions makes it a valuable tool, particularly in
executing long-horizon tasks with a rich set of instructions.
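The two-stage design described in the abstract (a diffusion planner conditioned on the instruction, followed by a mapping from the planned states to compact latent skills for low-level execution) can be sketched roughly as below. This is a minimal, hypothetical illustration only: the module names, dimensions, windowed skill mapping, and the simplified denoising loop are assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline suggested by the abstract:
# (1) a diffusion planner conditioned on an instruction embedding produces a
#     high-level state plan, and (2) a skill mapper compresses that plan into
#     latent skill codes for a low-level physics controller to track.
# All names, dimensions, and the crude denoising loop are assumptions.
import torch
import torch.nn as nn

STATE_DIM, INSTR_DIM, SKILL_DIM, HORIZON, T_DIFF = 64, 512, 32, 120, 50

class Denoiser(nn.Module):
    """Predicts the noise added to a state plan, conditioned on the instruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + INSTR_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, STATE_DIM),
        )

    def forward(self, noisy_plan, t, instr):
        # noisy_plan: (B, HORIZON, STATE_DIM), instr: (B, INSTR_DIM)
        B, H, _ = noisy_plan.shape
        cond = instr.unsqueeze(1).expand(B, H, INSTR_DIM)
        t_feat = torch.full((B, H, 1), float(t) / T_DIFF)
        return self.net(torch.cat([noisy_plan, cond, t_feat], dim=-1))

class SkillMapper(nn.Module):
    """Maps short windows of the planned states to compact latent skill codes."""
    def __init__(self, window=10):
        super().__init__()
        self.window = window
        self.encode = nn.Linear(STATE_DIM * window, SKILL_DIM)

    def forward(self, plan):
        B, H, D = plan.shape
        chunks = plan.reshape(B, H // self.window, self.window * D)
        return self.encode(chunks)  # (B, H // window, SKILL_DIM)

@torch.no_grad()
def plan_from_instruction(instr_emb, denoiser, mapper):
    """Reverse-diffuse a state plan from noise, then map it to latent skills."""
    plan = torch.randn(1, HORIZON, STATE_DIM)
    for t in reversed(range(T_DIFF)):
        eps = denoiser(plan, t, instr_emb)
        plan = plan - eps / T_DIFF  # illustrative denoising step, not a real DDPM update
    return mapper(plan)

skills = plan_from_instruction(torch.randn(1, INSTR_DIM), Denoiser(), SkillMapper())
print(skills.shape)  # torch.Size([1, 12, 32])
```

In the paper's framing, the latent skill codes would then condition a physics-based controller, which is what keeps the executed motion free of invalid states and infeasible transitions; that controller is omitted here.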
Related papers
- Human-Object Interaction from Human-Level Instructions [16.70362477046958]
We present the first complete system that can synthesize object motion, full-body motion, and finger motion simultaneously from human-level instructions.
Our experiments demonstrate the effectiveness of our high-level planner in generating plausible target layouts and our low-level motion generator in synthesizing realistic interactions for diverse objects.
arXiv Detail & Related papers (2024-06-25T17:46:28Z)
- HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation [7.01404330241523]
HYPERmotion is a framework that learns, selects and plans behaviors based on tasks in different scenarios.
We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints.
Experiments in simulation and real-world show that learned motions can efficiently adapt to new tasks.
arXiv Detail & Related papers (2024-06-20T18:21:24Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of the human motion in a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Motion In-Betweening with Phase Manifolds [29.673541655825332]
This paper introduces a novel data-driven motion in-betweening system that reaches target character poses by making use of phase variables learned by a Periodic Autoencoder.
Our approach utilizes a mixture-of-experts neural network model, in which the phases cluster movements in both space and time with different expert weights.
arXiv Detail & Related papers (2023-08-24T12:56:39Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters [123.88692739360457]
General-purpose motor skills enable humans to perform complex tasks.
These skills also provide powerful priors for guiding their behaviors when learning new tasks.
We present a framework for learning versatile and reusable skill embeddings for physically simulated characters.
arXiv Detail & Related papers (2022-05-04T06:13:28Z)
- Learning Riemannian Manifolds for Geodesic Motion Skills [19.305285090233063]
We develop a learning framework that allows robots to learn new skills and adapt them to unseen situations.
We show how geodesic motion skills let a robot plan movements from and to arbitrary points on a data manifold.
We test our learning framework using a 7-DoF robotic manipulator, where the robot satisfactorily learns and reproduces realistic skills featuring elaborated motion patterns.
arXiv Detail & Related papers (2021-06-08T13:24:54Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
- UniCon: Universal Neural Controller For Physics-based Character Motion [70.45421551688332]
We propose a physics-based universal neural controller (UniCon) that learns to master thousands of motions with different styles by learning on large-scale motion datasets.
UniCon can support keyboard-driven control, compose motion sequences drawn from a large pool of locomotion and acrobatics skills and teleport a person captured on video to a physics-based virtual avatar.
arXiv Detail & Related papers (2020-11-30T18:51:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.