Top-Down Synthesis for Library Learning
- URL: http://arxiv.org/abs/2211.16605v1
- Date: Tue, 29 Nov 2022 21:57:42 GMT
- Title: Top-Down Synthesis for Library Learning
- Authors: Matthew Bowers, Theo X. Olausson, Catherine Wong, Gabriel Grand,
Joshua B. Tenenbaum, Kevin Ellis, Armando Solar-Lezama
- Abstract summary: corpus-guided top-down synthesis is a mechanism for synthesizing library functions that capture common functionality from a corpus of programs.
We present an implementation of the approach in a tool called Stitch and evaluate it against the state-of-the-art deductive library learning algorithm from DreamCoder.
- Score: 46.285220926554345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces corpus-guided top-down synthesis as a mechanism for
synthesizing library functions that capture common functionality from a corpus
of programs in a domain-specific language (DSL). The algorithm builds
abstractions directly from initial DSL primitives, using syntactic pattern
matching of intermediate abstractions to intelligently prune the search space
and guide the algorithm towards abstractions that maximally capture shared
structures in the corpus. We present an implementation of the approach in a
tool called Stitch and evaluate it against the state-of-the-art deductive
library learning algorithm from DreamCoder. Our evaluation shows that Stitch is
3-4 orders of magnitude faster and uses 2 orders of magnitude less memory while
maintaining comparable or better library quality (as measured by
compressivity). We also demonstrate Stitch's scalability on corpora containing
hundreds of complex programs that are intractable with prior deductive
approaches and show empirically that it is robust to terminating the search
procedure early -- further allowing it to scale to challenging datasets by
means of early stopping.
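At its core, the approach can be read as a branch-and-bound search over partial abstractions: start from a single hole, repeatedly expand a hole with either a DSL symbol or an abstraction argument, count how many corpus subtrees each partial pattern still matches, and prune any branch whose best-case compression already falls below the best complete abstraction found so far. The Python sketch below is a deliberately minimal rendering of that idea; the tuple-based program encoding, the utility formula, and every function name are illustrative assumptions, not Stitch's actual data structures or API.

```python
# Simplified sketch of corpus-guided top-down abstraction search in the
# spirit of Stitch. Programs are S-expressions encoded as nested tuples
# whose first element is an operator symbol. All names and the utility
# formula are assumptions made for illustration.

HOLE = "?"   # an unexpanded position in a partial abstraction
VAR = "$"    # an abstraction argument: matches any subtree

def subtrees(t):
    """Yield every subtree of an S-expression."""
    yield t
    if isinstance(t, tuple):
        for child in t:
            yield from subtrees(child)

def matches(pattern, tree):
    """Syntactic match: HOLE and VAR match anything; symbols must agree."""
    if pattern in (HOLE, VAR):
        return True
    if isinstance(pattern, tuple):
        return (isinstance(tree, tuple) and len(pattern) == len(tree)
                and all(matches(p, t) for p, t in zip(pattern, tree)))
    return pattern == tree

def size(t):
    return sum(size(c) for c in t) if isinstance(t, tuple) else 1

def concrete_size(p):
    """Count concrete (non-hole, non-variable) symbols in a pattern."""
    if p in (HOLE, VAR):
        return 0
    return sum(concrete_size(c) for c in p) if isinstance(p, tuple) else 1

def holes(pattern, path=()):
    """Paths to all HOLE positions, leftmost first."""
    if pattern == HOLE:
        yield path
    elif isinstance(pattern, tuple):
        for i, child in enumerate(pattern):
            yield from holes(child, path + (i,))

def replace_at(pattern, path, sub):
    if not path:
        return sub
    i = path[0]
    return pattern[:i] + (replace_at(pattern[i], path[1:], sub),) + pattern[i + 1:]

def expansions(corpus):
    """Ways to fill a hole: an argument, or any leaf/operator in the corpus."""
    leaves, apps = set(), set()
    for prog in corpus:
        for t in subtrees(prog):
            if isinstance(t, tuple):
                apps.add((t[0], len(t)))
            else:
                leaves.add(t)
    return ([VAR] + sorted(leaves)
            + [(head,) + (HOLE,) * (n - 1) for head, n in sorted(apps)])

def best_abstraction(corpus):
    """Branch-and-bound top-down search for the single best abstraction."""
    opts = expansions(corpus)
    best, best_util = None, 0
    worklist = [HOLE]
    while worklist:
        pat = worklist.pop()
        sites = [t for prog in corpus for t in subtrees(prog) if matches(pat, t)]
        if not sites:
            continue  # no refinement of pat can ever match: prune
        # Upper bound: even a perfect refinement cannot save more than the
        # full size of every current match site. Prune dominated branches.
        if len(sites) * (max(size(t) for t in sites) - 1) <= best_util:
            continue
        hole_paths = list(holes(pat))
        if not hole_paths:
            # Complete pattern: each use site shrinks to one call symbol plus
            # its arguments; defining the abstraction once costs its body size.
            util = len(sites) * (concrete_size(pat) - 1) - concrete_size(pat)
            if util > best_util:
                best, best_util = pat, util
            continue
        for opt in opts:  # expand the leftmost hole in every possible way
            worklist.append(replace_at(pat, hole_paths[0], opt))
    return best, best_util

corpus = [
    ("add", ("mul", "x", "2"), ("mul", "y", "2")),
    ("sub", ("mul", "a", "2"), "b"),
]
print(best_abstraction(corpus))  # -> (('mul', '$', '2'), 1)
```

On the toy corpus above, the search returns the shared pattern (mul _ 2) as a one-argument abstraction. Robustness to early stopping also falls out of this framing: the best abstraction found so far is always a valid answer, so the search can be cut off at any point.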
Related papers
- AbstractBeam: Enhancing Bottom-Up Program Synthesis using Library Learning [0.0]
AbstractBeam is a novel program synthesis framework designed to enhance LambdaBeam by leveraging Library Learning.
Our experiments demonstrate that AbstractBeam statistically significantly outperforms LambdaBeam in the integer list manipulation domain.
arXiv Detail & Related papers (2024-05-27T08:31:12Z) - Sketch and shift: a robust decoder for compressive clustering [17.627195350266796]
Compressive learning is an emerging approach to drastically reduce the memory footprint of large-scale learning.
We propose an alternative decoder offering substantial improvements over CL-OMPR.
The proposed algorithm can extract clustering information from a sketch of the MNIST dataset that is 10 times smaller than previously possible.
arXiv Detail & Related papers (2023-12-15T16:53:55Z)
- LILO: Learning Interpretable Libraries by Compressing and Documenting Code [71.55208585024198]
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch.
We find that AutoDoc, LILO's auto-documentation procedure, boosts performance by helping LILO's synthesizer to interpret and deploy learned abstractions.
arXiv Detail & Related papers (2023-10-30T17:55:02Z)
- Tram: A Token-level Retrieval-augmented Mechanism for Source Code Summarization [76.57699934689468]
We propose a fine-grained Token-level retrieval-augmented mechanism (Tram) on the decoder side to enhance the performance of neural models.
To overcome the challenge of token-level retrieval in capturing contextual code semantics, we also propose integrating code semantics into individual summary tokens.
arXiv Detail & Related papers (2023-05-18T16:02:04Z)
- ShapeCoder: Discovering Abstractions for Visual Programs from Unstructured Primitives [44.01940125080666]
We present ShapeCoder, the first system capable of taking a dataset of shapes, represented with unstructured primitives, and jointly discovering abstraction functions and programs that use them to explain the input shapes.
We show how ShapeCoder discovers a library of abstractions that capture high-level relationships, remove extraneous degrees of freedom, and achieve better dataset compression.
arXiv Detail & Related papers (2023-05-09T17:55:48Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
- Torch-Struct: Deep Structured Prediction Library [138.5262350501951]
We introduce Torch-Struct, a library for structured prediction.
Torch-Struct includes a broad collection of probabilistic structures accessed through a simple and flexible distribution-based API.
arXiv Detail & Related papers (2020-02-03T16:43:02Z)
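For a concrete taste of Torch-Struct's distribution-based API, here is a minimal usage sketch, assuming the torch-struct package is installed; the tensor shape follows the library's convention of edge log-potentials shaped (batch, length - 1, classes, classes).

```python
import torch
from torch_struct import LinearChainCRF

# Random edge log-potentials for a batch of 2 sequences of length 5
# over 3 tags: shape (batch, length - 1, classes, classes).
log_potentials = torch.randn(2, 4, 3, 3)
dist = LinearChainCRF(log_potentials)

print(dist.partition)        # log partition function, one value per batch element
print(dist.marginals.shape)  # edge marginals, computed differentiably
print(dist.argmax.shape)     # MAP (Viterbi) assignment in edge-indicator form
```

The appeal of the design is that quantities usually requiring bespoke dynamic programs (partition functions, marginals, MAP decoding) are exposed as attributes of a single distribution object.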
This list is automatically generated from the titles and abstracts of the papers on this site.