PADL: Language-Directed Physics-Based Character Control
- URL: http://arxiv.org/abs/2301.13868v1
- Date: Tue, 31 Jan 2023 18:59:22 GMT
- Title: PADL: Language-Directed Physics-Based Character Control
- Authors: Jordan Juravsky, Yunrong Guo, Sanja Fidler, Xue Bin Peng
- Abstract summary: We present PADL, which allows users to issue natural language commands for specifying high-level tasks and low-level skills that a character should perform.
We show that our framework can be applied to effectively direct a simulated humanoid character to perform a diverse array of complex motor skills.
- Score: 66.517142635815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing systems that can synthesize natural and life-like motions for
simulated characters has long been a focus for computer animation. But in order
for these systems to be useful for downstream applications, they must not only
produce high-quality motions, but also provide an accessible and versatile
interface through which users can direct a character's behaviors. Natural
language provides a simple-to-use and expressive medium for specifying a user's
intent. Recent breakthroughs in natural language processing (NLP) have
demonstrated effective use of language-based interfaces for applications such
as image generation and program synthesis. In this work, we present PADL, which
leverages recent innovations in NLP in order to take steps towards developing
language-directed controllers for physics-based character animation. PADL
allows users to issue natural language commands for specifying both high-level
tasks and low-level skills that a character should perform. We present an
adversarial imitation learning approach for training policies to map high-level
language commands to low-level controls that enable a character to perform the
desired task and skill specified by a user's commands. Furthermore, we propose
a multi-task aggregation method that leverages a language-based multiple-choice
question-answering approach to determine high-level task objectives from
language commands. We show that our framework can be applied to effectively
direct a simulated humanoid character to perform a diverse array of complex
motor skills.
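As a rough, hypothetical sketch of the multi-task aggregation idea, the snippet below selects a high-level task objective by scoring a user's command against a set of candidate task descriptions with an off-the-shelf sentence encoder. This is not the paper's implementation: PADL frames task selection as language-based multiple-choice question answering, whereas this sketch substitutes a simpler embedding-similarity ranking; the encoder name and the task list are illustrative assumptions.

```python
# Hypothetical sketch: pick a high-level task objective for a natural language
# command by ranking candidate task descriptions with a sentence encoder.
# This swaps PADL's multiple-choice question-answering step for a simpler
# embedding-similarity ranking, purely for illustration.
from sentence_transformers import SentenceTransformer, util

# Assumed candidate task objectives; PADL's actual task set may differ.
TASK_DESCRIPTIONS = {
    "target location": "walk to a target location",
    "target heading": "face and move in a target direction",
    "strike": "strike a target object",
}

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder


def select_task(command: str) -> str:
    """Return the candidate task whose description best matches the command."""
    cmd_emb = model.encode(command, convert_to_tensor=True)
    task_names = list(TASK_DESCRIPTIONS)
    task_embs = model.encode(list(TASK_DESCRIPTIONS.values()), convert_to_tensor=True)
    scores = util.cos_sim(cmd_emb, task_embs)[0]  # cosine similarity per task
    return task_names[int(scores.argmax())]


if __name__ == "__main__":
    # Example command; expected to map to the "strike" task.
    print(select_task("run over to the block and punch it"))
```

In a PADL-style pipeline, the selected task objective would then be combined with the command itself and passed to the language-conditioned policy that executes the corresponding low-level skill.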
Related papers
- STEER: Flexible Robotic Manipulation via Dense Language Grounding [16.97343810491996]
STEER is a robot learning framework that bridges high-level, commonsense reasoning with precise, flexible low-level control.
Our approach translates complex situational awareness into actionable low-level behavior through training language-grounded policies with dense annotation.
arXiv Detail & Related papers (2024-11-05T18:48:12Z)
- Interpretable Robotic Manipulation from Language [11.207620790833271]
We introduce an explainable behavior cloning agent, named Ex-PERACT, specifically designed for manipulation tasks.
At the top level, the model is tasked with learning a discrete skill code, while at the bottom level, the policy network translates the problem into a voxelized grid and maps the discretized actions to voxel grids.
We evaluate our method across eight challenging manipulation tasks utilizing the RLBench benchmark, demonstrating that Ex-PERACT not only achieves competitive policy performance but also effectively bridges the gap between human instructions and machine execution in complex environments.
arXiv Detail & Related papers (2024-05-27T11:02:21Z)
- Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs [7.746160514029531]
We demonstrate experimental results with LLMs that address robotics task planning problems.
Our approach acquires text descriptions of the task and scene objects, then formulates task planning through natural language reasoning.
Our approach is evaluated on a multi-modal prompt simulation benchmark.
arXiv Detail & Related papers (2024-03-20T17:58:12Z)
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model [63.66204449776262]
Instruct2Act is a framework that maps multi-modal instructions to sequential actions for robotic manipulation tasks.
Our approach is adjustable and flexible in accommodating various instruction modalities and input types.
Our zero-shot method outperformed many state-of-the-art learning-based policies in several tasks.
arXiv Detail & Related papers (2023-05-18T17:59:49Z)
- VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation [11.92150014766458]
We aim to address the last mile of embodied agents -- object manipulation by following human guidance.
We build a Vision-and-Language Manipulation benchmark (VLMbench) containing various language instructions on categorized robotic manipulation tasks.
Modular rule-based task templates are created to automatically generate robot demonstrations with language instructions.
arXiv Detail & Related papers (2022-06-17T03:07:18Z)
- LISA: Learning Interpretable Skill Abstractions from Language [85.20587800593293]
We propose a hierarchical imitation learning framework that can learn diverse, interpretable skills from language-conditioned demonstrations.
Our method demonstrates a more natural way to condition on language in sequential decision-making problems.
arXiv Detail & Related papers (2022-02-28T19:43:24Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation [80.29069988090912]
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction.
We propose to leverage offline robot datasets with crowd-sourced natural language labels.
We find that our approach outperforms both goal-image specifications and language conditioned imitation techniques by more than 25%.
arXiv Detail & Related papers (2021-09-02T17:42:13Z)
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
- Language Conditioned Imitation Learning over Unstructured Data [9.69886122332044]
We present a method for incorporating free-form natural language conditioning into imitation learning.
Our approach learns perception from pixels, natural language understanding, and multitask continuous control end-to-end as a single neural network.
We show this dramatically improves language conditioned performance, while reducing the cost of language annotation to less than 1% of total data.
arXiv Detail & Related papers (2020-05-15T17:08:50Z)