Generating Instructions at Different Levels of Abstraction
- URL: http://arxiv.org/abs/2010.03982v1
- Date: Thu, 8 Oct 2020 13:56:09 GMT
- Title: Generating Instructions at Different Levels of Abstraction
- Authors: Arne Köhn, Julia Wichlacz, Álvaro Torralba, Daniel Höller, Jörg Hoffmann, Alexander Koller
- Abstract summary: We show how to generate building instructions at different levels of abstraction in Minecraft.
A crowdsourcing evaluation shows that the choice of abstraction level matters to users.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When generating technical instructions, it is often convenient to describe
complex objects in the world at different levels of abstraction. A novice user
might need an object explained piece by piece, while for an expert, talking
about the complex object (e.g. a wall or railing) directly may be more succinct
and efficient. We show how to generate building instructions at different
levels of abstraction in Minecraft. We introduce the use of hierarchical
planning to this end, a method from AI planning which can capture the structure
of complex objects neatly. A crowdsourcing evaluation shows that the choice of
abstraction level matters to users, and that an abstraction strategy which
balances low-level and high-level object descriptions compares favorably to
ones which don't.
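
The paper's core device is hierarchical planning: a complex object such as a wall decomposes into subparts and ultimately into individual block placements, and the generator may describe the task at any level of that hierarchy. The toy sketch below illustrates the idea only; it is a minimal, hypothetical illustration, not the authors' actual HTN planner, and all task names and the decomposition table are invented.

```python
# Minimal, hypothetical sketch of hierarchical instruction generation in
# the spirit of the paper. The task names and the decomposition table are
# invented; the actual system uses an HTN-style AI planner.

# A compound task decomposes into subtasks; anything not in the table is
# treated as a primitive action (a single block placement).
DECOMPOSITION = {
    "build-wall":  ["build-row-0", "build-row-1"],
    "build-row-0": ["place-block(0,0)", "place-block(1,0)"],
    "build-row-1": ["place-block(0,1)", "place-block(1,1)"],
}

def instructions(task, depth):
    """Describe `task` at the given abstraction depth.

    depth == 0 names the task itself (succinct, expert-friendly);
    larger depths expand the hierarchy further, bottoming out at
    primitive block placements (explicit, novice-friendly).
    """
    subtasks = DECOMPOSITION.get(task)
    if subtasks is None or depth == 0:
        return [task]
    steps = []
    for sub in subtasks:
        steps.extend(instructions(sub, depth - 1))
    return steps

print(instructions("build-wall", 0))  # ['build-wall']
print(instructions("build-wall", 1))  # ['build-row-0', 'build-row-1']
print(instructions("build-wall", 2))  # four individual place-block steps
```

A fixed depth cutoff is the simplest policy; the abstraction strategies evaluated in the paper instead mix low-level and high-level descriptions rather than committing to one depth throughout.
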
Related papers
Embodied Instruction Following in Unknown Environments
We propose an embodied instruction following (EIF) method for complex tasks in unknown environments.
We build a hierarchical embodied instruction following framework comprising a high-level task planner and a low-level exploration controller.
The task planner generates feasible step-by-step plans for accomplishing the human's goal, based on the task completion process and the visual clues observed so far (a minimal sketch of this plan-execute-replan loop appears after this list).
arXiv Detail & Related papers (2024-06-17T17:55:40Z)

How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval?
We propose a sketch-based image retrieval framework capable of handling sketch abstraction at varied levels.
For granularity-level abstraction understanding, we require that the retrieval model not treat all abstraction levels equally.
Our Acc.@q loss lets a sketch narrow or broaden its focus by adjusting how stringent the evaluation is.
arXiv Detail & Related papers (2024-03-11T23:08:29Z)

Structural Concept Learning via Graph Attention for Multi-Level Rearrangement Planning
We propose a deep learning approach to perform multi-level object rearrangement planning for scenes with structural dependency hierarchies.
It is trained on a self-generated simulation data set with intuitive structures and works for unseen scenes with an arbitrary number of objects.
We compare our method with a range of classical and model-based baselines to show that our method leverages its scene understanding to achieve better performance, flexibility, and efficiency.
arXiv Detail & Related papers (2023-09-05T19:35:44Z)

Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement
We present a hierarchical abstraction approach to uncover underlying entities.
We show how to learn a correspondence between intervening on states of entities in the agent's model and acting on objects in the environment.
We use this correspondence to develop a method for control that generalizes to different numbers and configurations of objects.
arXiv Detail & Related papers (2023-03-20T18:19:36Z)

Structured Exploration Through Instruction Enhancement for Object Navigation
We propose a hierarchical learning-based method for object navigation.
The top level performs high-level planning and builds a memory at the floorplan level.
We demonstrate the effectiveness of our method in a dynamic domestic environment.
arXiv Detail & Related papers (2022-11-15T19:39:22Z)

Identifying concept libraries from language about object structure
We leverage natural language descriptions for a diverse set of 2K procedurally generated objects to identify the parts people use.
We formalize our problem as search over a space of program libraries that contain different part concepts.
By combining naturalistic language at scale with structured program representations, we discover a fundamental information-theoretic tradeoff governing the part concepts people name.
arXiv Detail & Related papers (2022-05-11T17:49:25Z)

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning
We develop a novel framework for learning state and action abstractions.
We learn relational, neuro-symbolic abstractions that generalize over object identities and numbers.
We show that our learned abstractions are able to quickly solve held-out tasks of longer horizons.
arXiv Detail & Related papers (2022-03-17T22:13:09Z)

A Persistent Spatial Semantic Representation for High-level Natural Language Instruction Execution
We propose a persistent spatial semantic representation method to bridge the gap between language and robot actions.
We evaluate our approach on the ALFRED benchmark and achieve state-of-the-art results, despite completely avoiding the commonly used step-by-step instructions.
arXiv Detail & Related papers (2021-07-12T17:47:19Z)

Draw Me a Flower: Grounding Formal Abstract Structures Stated in Informal Natural Language
We develop the Hexagons referential game, where players describe increasingly complex images on a two-dimensional Hexagons board.
Using this game we collected the Hexagons dataset, which consists of 164 images and over 3000 naturally-occurring instructions.
Results of our baseline models on an instruction-to-execution task derived from the Hexagons dataset confirm that higher-level abstractions in NL are indeed more challenging for current systems to process.
arXiv Detail & Related papers (2021-06-27T21:11:16Z)
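
As noted in the Embodied Instruction Following entry above, that paper pairs a high-level task planner with a low-level exploration controller, feeding newly observed visual clues back into replanning. The sketch below is a minimal, hypothetical rendering of that loop; the function signatures are invented for illustration, since the paper's abstract does not specify its models or interfaces.

```python
# Hypothetical sketch of a two-level embodied-instruction-following loop:
# a high-level task planner proposes a step-by-step plan, a low-level
# controller executes and explores, and newly observed visual clues
# trigger replanning. All interfaces here are invented for illustration.
from typing import Callable

def follow_instruction(
    instruction: str,
    plan: Callable[[str, list[str]], list[str]],  # high-level task planner
    execute: Callable[[str], tuple[bool, str]],   # low-level exploration controller
    max_replans: int = 3,
) -> bool:
    clues: list[str] = []                  # visual clues gathered so far
    for _ in range(max_replans):
        steps = plan(instruction, clues)   # feasible step-by-step plan
        for step in steps:
            ok, observation = execute(step)
            clues.append(observation)      # feed new clues back to the planner
            if not ok:
                break                      # step failed: replan with updated clues
        else:
            return True                    # every step succeeded
    return False                           # gave up after max_replans attempts
```
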
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.