Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural
Language Instructions
- URL: http://arxiv.org/abs/2211.00688v1
- Date: Tue, 1 Nov 2022 18:30:42 GMT
- Title: Learning to Solve Voxel Building Embodied Tasks from Pixels and Natural
Language Instructions
- Authors: Alexey Skrynnik, Zoya Volovikova, Marc-Alexandre Côté, Anton
Voronov, Artem Zholus, Negar Arabzadeh, Shrestha Mohanty, Milagro Teruel,
Ahmed Awadallah, Aleksandr Panov, Mikhail Burtsev, Julia Kiseleva
- Abstract summary: We propose a new method that combines a language model and reinforcement learning for the task of building objects in a Minecraft-like environment.
Our method first generates a set of consistently achievable sub-goals from the instructions and then completes associated sub-tasks with a pre-trained RL policy.
- Score: 53.21504989297547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adoption of pre-trained language models to generate action plans for
embodied agents is a promising research strategy. However, execution of
instructions in real or simulated environments requires verification of the
feasibility of actions as well as their relevance to the completion of a goal.
We propose a new method that combines a language model and reinforcement
learning for the task of building objects in a Minecraft-like environment
according to natural language instructions. Our method first generates a
set of consistently achievable sub-goals from the instructions and then
completes associated sub-tasks with a pre-trained RL policy. The proposed
method formed the RL baseline at the IGLU 2022 competition.
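
A minimal sketch of the two-stage pipeline described above, assuming hypothetical `generate_subgoals`, `rl_policy`, and `env` interfaces (the names and call signatures are illustrative, not the authors' released code):

```python
# Sketch of the two-stage pipeline: a language model turns the natural
# language instruction into an ordered list of achievable sub-goals (single
# block placements), and a pre-trained goal-conditioned RL policy executes
# each sub-goal in the voxel grid from pixel observations.
# All interfaces here (generate_subgoals, rl_policy, env) are placeholders.

def build_from_instruction(instruction, language_model, rl_policy, env):
    # Stage 1: the LM predicts sub-goals, e.g.
    # [{"x": 5, "y": 1, "z": 5, "color": "blue"}, ...], ordered so that each
    # block placement is achievable when it is reached.
    subgoals = language_model.generate_subgoals(instruction)

    obs = env.reset()
    for subgoal in subgoals:
        done = False
        # Stage 2: the goal-conditioned policy acts until the current
        # sub-goal (one block placement) is completed.
        while not done:
            action = rl_policy.act(obs, subgoal)
            obs, reward, done, info = env.step(action)
    return obs
```

Under this decomposition the RL policy only ever has to complete one block placement at a time, while checking that the sub-goals are consistently achievable stays on the language-model side.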
Related papers
- Instruction Following with Goal-Conditioned Reinforcement Learning in Virtual Environments [42.06453257292203]
We propose a hierarchical framework that combines the deep language comprehension of large language models with the adaptive action-execution capabilities of reinforcement learning agents.
We have demonstrated the effectiveness of our approach in two different environments: in IGLU, where agents are instructed to build structures, and in Crafter, where agents perform tasks and interact with objects in the surrounding environment according to language commands.
arXiv Detail & Related papers (2024-07-12T14:19:36Z)
- Natural Language as Policies: Reasoning for Coordinate-Level Embodied Control with LLMs [7.746160514029531]
We demonstrate experimental results with LLMs that address robotics task planning problems.
Our approach acquires text descriptions of the task and scene objects, then formulates task planning through natural language reasoning.
Our approach is evaluated on a multi-modal prompt simulation benchmark.
arXiv Detail & Related papers (2024-03-20T17:58:12Z)
- DECIDER: A Dual-System Rule-Controllable Decoding Framework for Language Generation [57.07295906718989]
Constrained decoding approaches aim to control the meaning or style of text generated by a Pre-trained Language Model (PLM) using specific target words during inference.
We propose a novel decoding framework, DECIDER, which lets us program rules about how a task should be completed in order to control a PLM.
arXiv Detail & Related papers (2024-03-04T11:49:08Z)
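
The DECIDER entry above describes constrained decoding with target words. The sketch below shows only the generic idea of biasing next-token logits toward target tokens at inference time, not DECIDER's dual-system rule mechanism; `logits_fn` is an assumed model interface.

```python
import numpy as np

# Toy illustration of lexically constrained decoding: at each step the
# next-token logits are shifted toward a set of target-word tokens before
# the next token is chosen.

def constrained_greedy_decode(logits_fn, target_token_ids, max_len, boost=2.0):
    """logits_fn(prefix) -> np.ndarray of vocabulary logits (assumed)."""
    prefix = []
    for _ in range(max_len):
        logits = logits_fn(prefix).astype(float)
        logits[list(target_token_ids)] += boost  # nudge decoding toward targets
        prefix.append(int(np.argmax(logits)))
    return prefix
```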
- Neuro-Symbolic Causal Language Planning with Commonsense Prompting [67.06667162430118]
Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes Neuro-Symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from the LLMs with commonsense-infused prompting.
arXiv Detail & Related papers (2022-06-06T22:09:52Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
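
A rough sketch of the "goals and observations as a sequence of embeddings" idea from the entry above; the architecture, dimensions, and the randomly initialized encoder standing in for a pre-trained language model are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Goal tokens and observation features are projected into a shared embedding
# space, concatenated into one sequence, and scored by a Transformer encoder
# whose weights would be initialized from a pre-trained LM (random here).

class SequencePolicy(nn.Module):
    def __init__(self, vocab_size=1000, obs_dim=64, d_model=128, n_actions=10):
        super().__init__()
        self.goal_embed = nn.Embedding(vocab_size, d_model)
        self.obs_proj = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, goal_tokens, obs_features):
        # goal_tokens: (B, Tg) token ids; obs_features: (B, To, obs_dim) floats
        seq = torch.cat([self.goal_embed(goal_tokens),
                         self.obs_proj(obs_features)], dim=1)
        hidden = self.encoder(seq)
        return self.action_head(hidden[:, -1])  # action logits from last position
```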
- Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents [111.33545170562337]
We investigate the possibility of grounding high-level tasks, expressed in natural language, to a chosen set of actionable steps.
We find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans.
We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions.
arXiv Detail & Related papers (2022-01-18T18:59:45Z)
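
A sketch of the "semantically translate plans to admissible actions" step from the entry above: each free-form step generated by the LM is mapped to the most similar action in a fixed admissible set via cosine similarity of sentence embeddings. `embed_fn` is an assumed interface returning one vector per string (any off-the-shelf sentence encoder could fill the role).

```python
import numpy as np

# Map each generated plan step onto the closest admissible action.

def translate_plan(generated_steps, admissible_actions, embed_fn):
    action_vecs = np.stack([embed_fn(a) for a in admissible_actions])
    action_vecs /= np.linalg.norm(action_vecs, axis=1, keepdims=True)
    plan = []
    for step in generated_steps:
        v = embed_fn(step)
        v = v / np.linalg.norm(v)
        plan.append(admissible_actions[int(np.argmax(action_vecs @ v))])
    return plan
```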
- Skill Induction and Planning with Latent Language [94.55783888325165]
We formulate a generative model of action sequences in which goals generate sequences of high-level subtask descriptions.
We describe how to train this model using primarily unannotated demonstrations by parsing demonstrations into sequences of named high-level subtasks.
In trained models, the space of natural language commands indexes a library of skills; agents can use these skills to plan by generating high-level instruction sequences tailored to novel goals.
arXiv Detail & Related papers (2021-10-04T15:36:32Z)
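
A toy sketch of the "natural language commands index a library of skills" idea from the entry above; the skill names and the environment methods are invented for illustration.

```python
# Named sub-task commands index low-level controllers; a plan for a novel
# goal is simply a sequence of such command strings. The env.pick / env.place
# methods are hypothetical.
skill_library = {
    "pick up the mug": lambda env: env.pick("mug"),
    "put it on the shelf": lambda env: env.place("shelf"),
}

def execute_plan(plan, env):
    for command in plan:
        skill_library[command](env)  # run the skill named by the command
```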
- Grounding Language to Autonomously-Acquired Skills via Goal Generation [23.327749767424567]
We propose a new conceptual approach to language-conditioned RL: the Language-Goal-Behavior architecture (LGB).
LGB decouples skill learning and language grounding via an intermediate semantic representation of the world.
We present DECSTR, an intrinsically motivated learning agent endowed with an innate semantic representation describing spatial relations between physical objects.
arXiv Detail & Related papers (2020-06-12T13:46:10Z)
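
A small sketch of the kind of intermediate semantic representation the entry above refers to: the scene is summarized as binary spatial predicates over object pairs, which can serve both as RL goals and as the target of language grounding. The specific predicates and threshold are illustrative, not DECSTR's exact representation.

```python
from itertools import combinations, permutations

# Summarize object positions as a vector of binary spatial predicates
# ("close" over unordered pairs, "above" over ordered pairs).

def semantic_state(positions, close_dist=0.05):
    """positions: dict mapping object name -> (x, y, z) coordinates."""
    objs = sorted(positions)
    close = [float(sum((positions[a][i] - positions[b][i]) ** 2
                       for i in range(3)) ** 0.5 < close_dist)
             for a, b in combinations(objs, 2)]
    above = [float(positions[a][2] > positions[b][2])
             for a, b in permutations(objs, 2)]
    return close + above
```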