Communicating Natural Programs to Humans and Machines
- URL: http://arxiv.org/abs/2106.07824v4
- Date: Sat, 20 May 2023 01:19:06 GMT
- Authors: Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos,
Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler,
Joshua B. Tenenbaum
- Abstract summary: The Abstraction and Reasoning Corpus (ARC) is a set of procedural tasks that tests an agent's ability to flexibly solve novel problems.
We study the difference of language: while humans readily generate and interpret instructions in a general language, computer systems are shackled to a narrow domain-specific language.
We analyze the collected instructions as 'natural programs', finding that while they resemble computer programs, they are distinct in two ways.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Abstraction and Reasoning Corpus (ARC) is a set of procedural tasks that
tests an agent's ability to flexibly solve novel problems. While most ARC tasks
are easy for humans, they are challenging for state-of-the-art AI. What makes
building intelligent systems that can generalize to novel situations such as
ARC difficult? We posit that the answer might be found by studying the
difference of language: While humans readily generate and interpret
instructions in a general language, computer systems are shackled to a narrow
domain-specific language that they can precisely execute. We present LARC, the
Language-complete ARC: a collection of natural language descriptions
by a group of human participants who instruct each other on how to solve ARC
tasks using language alone, which contains successful instructions for 88% of
the ARC tasks. We analyze the collected instructions as 'natural programs',
finding that while they resemble computer programs, they are distinct in two
ways: First, they contain a wide range of primitives; Second, they frequently
leverage communicative strategies beyond directly executable codes. We
demonstrate that these two distinctions prevent current program synthesis
techniques from leveraging LARC to its full potential, and give concrete
suggestions on how to build the next-generation program synthesizers.
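The contrast the abstract draws between natural programs and executable domain-specific languages can be illustrated with a minimal sketch. The grid encoding, the toy task, and the `recolor` primitive below are illustrative assumptions, not the actual LARC data format or any ARC DSL:

```python
# Toy ARC-style task: recolor every blue cell (1) to red (2).
# Grids are lists of rows of integer color codes (an assumed encoding).

# A "natural program", as collected in LARC, is free-form language
# addressed to another human:
natural_program = "For every blue square in the grid, paint it red."

def recolor(grid, src, dst):
    """Hypothetical DSL primitive: replace every cell of color src with dst."""
    return [[dst if cell == src else cell for cell in row] for row in grid]

def dsl_program(grid):
    # The machine-executable counterpart: the whole instruction collapses
    # into one application of a fixed, precisely defined primitive.
    return recolor(grid, src=1, dst=2)

task_input = [[0, 1, 0],
              [1, 1, 0],
              [0, 0, 1]]

print(dsl_program(task_input))  # [[0, 2, 0], [2, 2, 0], [0, 0, 2]]
```

The natural program is easy for a person to follow but names colors and actions a synthesizer must first ground; the DSL program executes exactly but only covers tasks expressible in its primitives, which is the gap the paper's analysis targets.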
Related papers
- Symbolic Learning Enables Self-Evolving Agents
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- AIOS Compiler: LLM as Interpreter for Natural Language Programming and Flow Programming of AI Agents
We develop a novel system for Code Representation and Execution (CoRE)
The proposed system unifies natural language programming, pseudo-code programming, and flow programming under the same representation for constructing language agents.
During the execution, we incorporate external memory to minimize redundancy.
arXiv Detail & Related papers (2024-05-11T04:29:03Z)
- Program Synthesis using Inductive Logic Programming for the Abstraction and Reasoning Corpus
The Abstraction and Reasoning Corpus (ARC) remains unsolved by current Machine Learning methods.
We propose a Program Synthesis system that uses Inductive Logic Programming (ILP), a branch of symbolic AI, to solve ARC.
arXiv Detail & Related papers (2024-05-10T11:22:31Z)
- Neural networks for abstraction and reasoning: Towards broad generalization in machines
We look at novel approaches for solving the Abstraction & Reasoning Corpus (ARC)
We adapt the DreamCoder neurosymbolic reasoning solver to ARC.
We present the Perceptual Abstraction and Reasoning Language (PeARL) language, which allows DreamCoder to solve ARC tasks.
We publish the arckit Python library to make future research on ARC easier.
arXiv Detail & Related papers (2024-02-05T20:48:57Z)
- kNN-ICL: Compositional Task-Oriented Parsing Generalization with Nearest Neighbor In-Context Learning
Task-Oriented Parsing (TOP) enables conversational assistants to interpret user commands expressed in natural language.
LLMs have achieved impressive performance in generating computer programs from natural language prompts.
This paper focuses on harnessing the capabilities of LLMs for semantic parsing tasks.
arXiv Detail & Related papers (2023-12-17T17:26:50Z)
- Abstract Visual Reasoning Enabled by Language
We propose a general learning-based framework for solving ARC.
It is centered on transforming tasks from the vision to the language domain.
This composition of language and vision allows for pre-trained models to be leveraged at each stage.
arXiv Detail & Related papers (2023-03-07T17:52:46Z)
- Neuro-Symbolic Causal Language Planning with Commonsense Prompting
Language planning aims to implement complex high-level goals by decomposition into simpler low-level steps.
Previous methods require either manual exemplars or annotated programs to acquire such ability from large language models.
This paper proposes Neuro-Symbolic Causal Language Planner (CLAP) that elicits procedural knowledge from the LLMs with commonsense-infused prompting.
arXiv Detail & Related papers (2022-06-06T22:09:52Z)
- LISA: Learning Interpretable Skill Abstractions from Language
We propose a hierarchical imitation learning framework that can learn diverse, interpretable skills from language-conditioned demonstrations.
Our method demonstrates a more natural way to condition on language in sequential decision-making problems.
arXiv Detail & Related papers (2022-02-28T19:43:24Z)
- Neural-guided, Bidirectional Program Search for Abstraction and Reasoning
This paper lays the foundations for two approaches to abstraction and reasoning not based in brute-force search.
We first apply an existing program synthesis system called DreamCoder to create symbolic abstractions out of tasks solved so far.
Second, we design a reasoning algorithm motivated by the way humans approach ARC.
arXiv Detail & Related papers (2021-10-22T00:41:47Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.