Learning a Hierarchical Planner from Humans in Multiple Generations
- URL: http://arxiv.org/abs/2310.11614v1
- Date: Tue, 17 Oct 2023 22:28:13 GMT
- Title: Learning a Hierarchical Planner from Humans in Multiple Generations
- Authors: Leonardo Hernandez Cano, Yewen Pu, Robert D. Hawkins, Josh Tenenbaum,
Armando Solar-Lezama
- Abstract summary: We present natural programming, a library learning system that combines programmatic learning with a hierarchical planner.
A user teaches the system via curriculum building, by identifying a challenging yet not impossible goal.
The system solves for the goal via hierarchical planning, using the linguistic hints to guide its probability distribution.
- Score: 21.045112705349222
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A typical way in which a machine acquires knowledge from humans is by
programming. Compared to learning from demonstrations or experiences,
programmatic learning allows the machine to acquire a novel skill as soon as
the program is written, and, by building a library of programs, a machine can
quickly learn how to perform complex tasks. However, as programs often take
their execution contexts for granted, they are brittle when the contexts
change, making it difficult to adapt complex programs to new contexts. We
present natural programming, a library learning system that combines
programmatic learning with a hierarchical planner. Natural programming
maintains a library of decompositions, each consisting of a goal, a linguistic
description of how that goal decomposes into sub-goals, and a concrete instance
of its decomposition into sub-goals. A user teaches the system via curriculum
building, by identifying a challenging yet not impossible goal along with
linguistic hints on how this goal may be decomposed into sub-goals. The system
solves for the goal via hierarchical planning, using the linguistic hints to
guide its probability distribution in proposing the right plans. The system
learns from this interaction by adding decompositions discovered during the
successful search to its library. Simulated studies and a human experiment
(n=360) in a controlled environment demonstrate that natural programming can
robustly compose programs learned from different users and contexts, adapting
faster and solving more complex tasks when compared to programmatic baselines.
Related papers
- Finding structure in logographic writing with library learning [55.63800121311418]
We develop a computational framework for discovering structure in a writing system.
Our framework discovers known linguistic structures in the Chinese writing system.
We demonstrate how a library learning approach may help reveal the fundamental computational principles that underlie the creation of structures in human cognition.
arXiv Detail & Related papers (2024-05-11T04:23:53Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- PwR: Exploring the Role of Representations in Conversational Programming [17.838776812138626]
We introduce Programming with Representations (PwR), an approach that uses representations to convey the system's understanding back to the user in natural language.
We find that representations significantly improve understandability and instill a sense of agency among participants.
arXiv Detail & Related papers (2023-09-18T05:38:23Z)
- Neuro-Symbolic Causal Language Planning with Commonsense Prompting [67.06667162430118]
Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes Neuro-Symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from the LLMs with commonsense-infused prompting.
arXiv Detail & Related papers (2022-06-06T22:09:52Z)
- What Matters in Language Conditioned Robotic Imitation Learning [26.92329260907805]
We study the most critical challenges in learning language conditioned policies from offline free-form imitation datasets.
We present a novel approach that significantly outperforms the state of the art on the challenging language conditioned long-horizon robot manipulation CALVIN benchmark.
arXiv Detail & Related papers (2022-04-13T08:45:32Z)
- Learning compositional programs with arguments and sampling [12.790055619773565]
We train a machine learning model to discover a program that satisfies specific requirements.
We extend a state-of-the-art model, AlphaNPI, by learning to generate functions that can accept arguments.
arXiv Detail & Related papers (2021-09-01T21:27:41Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
- Deep compositional robotic planners that follow natural language commands [21.481360281719006]
We show how a sampling-based robotic planner can be augmented to learn to understand a sequence of natural language commands.
Our approach combines a deep network structured according to the parse of a complex command that includes objects, verbs, spatial relations, and attributes.
arXiv Detail & Related papers (2020-02-12T19:56:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.