ShapeCoder: Discovering Abstractions for Visual Programs from
Unstructured Primitives
- URL: http://arxiv.org/abs/2305.05661v1
- Date: Tue, 9 May 2023 17:55:48 GMT
- Title: ShapeCoder: Discovering Abstractions for Visual Programs from
Unstructured Primitives
- Authors: R. Kenny Jones and Paul Guerrero and Niloy J. Mitra and Daniel Ritchie
- Abstract summary: We present ShapeCoder, the first system capable of taking a dataset of shapes, represented with unstructured primitives, and jointly discovering abstraction functions and programs that use them to explain the input shapes.
We show how ShapeCoder discovers a library of abstractions that capture high-level relationships, remove extraneous degrees of freedom, and achieve better dataset compression.
- Score: 44.01940125080666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Programs are an increasingly popular representation for visual data, exposing
compact, interpretable structure that supports manipulation. Visual programs
are usually written in domain-specific languages (DSLs). Finding "good"
programs, that only expose meaningful degrees of freedom, requires access to a
DSL with a "good" library of functions, both of which are typically authored by
domain experts. We present ShapeCoder, the first system capable of taking a
dataset of shapes, represented with unstructured primitives, and jointly
discovering (i) useful abstraction functions and (ii) programs that use these
abstractions to explain the input shapes. The discovered abstractions capture
common patterns (both structural and parametric) across the dataset, so that
programs rewritten with these abstractions are more compact, and expose fewer
degrees of freedom. ShapeCoder improves upon previous abstraction discovery
methods, finding better abstractions, for more complex inputs, under less
stringent input assumptions. This is principally made possible by two
methodological advancements: (a) a shape to program recognition network that
learns to solve sub-problems and (b) the use of e-graphs, augmented with a
conditional rewrite scheme, to determine when abstractions with complex
parametric expressions can be applied, in a tractable manner. We evaluate
ShapeCoder on multiple datasets of 3D shapes, where primitive decompositions
are either parsed from manual annotations or produced by an unsupervised cuboid
abstraction method. In all domains, ShapeCoder discovers a library of
abstractions that capture high-level relationships, remove extraneous degrees
of freedom, and achieve better dataset compression compared with alternative
approaches. Finally, we investigate how programs rewritten to use discovered
abstractions prove useful for downstream tasks.
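The abstract's central claim, that rewriting primitive decompositions with discovered abstractions removes extraneous degrees of freedom, can be illustrated with a toy sketch. This is not ShapeCoder's pipeline: primitives are reduced to hypothetical (x, y, z, size) cuboids, and the mirrored-pair abstraction is hand-written here rather than discovered.

```python
# Toy sketch: rewriting a flat cuboid program with a "mirrored pair" abstraction
# cuts the number of free parameters (degrees of freedom) the program exposes.

def find_mirror_pairs(prims, eps=1e-6):
    """Greedily pair cuboids that are reflections of each other about x = 0."""
    pairs, used = [], set()
    for i, (xi, yi, zi, si) in enumerate(prims):
        if i in used:
            continue
        for j in range(i + 1, len(prims)):
            if j in used:
                continue
            xj, yj, zj, sj = prims[j]
            if abs(xi + xj) < eps and abs(yi - yj) < eps \
                    and abs(zi - zj) < eps and si == sj:
                pairs.append((i, j))
                used.update((i, j))
                break
    return pairs

def rewrite(prims):
    """Replace each mirrored pair with one sym_pair call (4 params, not 8)."""
    pairs = find_mirror_pairs(prims)
    paired = {k for ij in pairs for k in ij}
    prog = [("sym_pair",) + tuple(prims[i]) for i, _ in pairs]
    prog += [("cuboid",) + tuple(p) for k, p in enumerate(prims) if k not in paired]
    return prog

def dof(prog):
    """Degrees of freedom = total parameter count across all calls."""
    return sum(len(call) - 1 for call in prog)

# Four table legs: two pairs mirrored about the x = 0 plane.
legs = [(1.0, 0.0, 1.0, 0.1), (-1.0, 0.0, 1.0, 0.1),
        (1.0, 0.0, -1.0, 0.1), (-1.0, 0.0, -1.0, 0.1)]
flat = [("cuboid",) + tuple(p) for p in legs]
compact = rewrite(legs)
print(dof(flat), "->", dof(compact))  # 16 -> 8
```

The flat program exposes 16 parameters; the rewritten one exposes 8, because each sym_pair call bakes in the reflection relationship that the flat encoding left as independent coordinates. ShapeCoder's e-graph machinery with conditional rewrites generalizes this to discovered abstractions whose parameters are complex expressions.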
Related papers
- How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval? [120.49126407479717]
We propose a sketch-based image retrieval framework capable of handling sketch abstraction at varied levels.
For granularity-level abstraction understanding, we dictate that the retrieval model should not treat all abstraction-levels equally.
Our Acc.@q loss uniquely allows a sketch to narrow/broaden its focus in terms of how stringent the evaluation should be.
arXiv Detail & Related papers (2024-03-11T23:08:29Z)
- ReGAL: Refactoring Programs to Discover Generalizable Abstractions [59.05769810380928]
Generalizable Abstraction Learning (ReGAL) is a method for learning a library of reusable functions via code refactorization.
We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains.
For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains.
arXiv Detail & Related papers (2024-01-29T18:45:30Z)
- AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph [62.685920585838616]
Abstraction ability is essential to human intelligence, yet it remains under-explored in language models.
We present AbsPyramid, a unified entailment graph of 221K textual descriptions of abstraction knowledge.
arXiv Detail & Related papers (2023-11-15T18:11:23Z)
- Top-Down Synthesis for Library Learning [46.285220926554345]
Corpus-guided top-down synthesis is a mechanism for synthesizing library functions that capture common functionality from a corpus of programs.
We present an implementation of the approach in a tool called Stitch and evaluate it against the state-of-the-art deductive library learning algorithm from DreamCoder.
arXiv Detail & Related papers (2022-11-29T21:57:42Z)
- Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus [19.27379168184259]
The Abstraction and Reasoning Corpus (ARC) aims at benchmarking the performance of general artificial intelligence algorithms.
The ARC's focus on broad generalization and few-shot learning has made it very difficult to solve using pure machine learning.
We propose Abstract Reasoning with Graph Abstractions (ARGA), a new object-centric framework that first represents images using graphs and then performs a search for a correct program.
arXiv Detail & Related papers (2022-10-18T14:13:43Z)
- MDP Abstraction with Successor Features [14.433551477386318]
We study abstraction in the context of reinforcement learning, in which agents may perform state or temporal abstractions.
In this work, we propose successor abstraction, a novel abstraction scheme building on successor features.
Our successor abstraction allows us to learn abstract environment models with semantics that are transferable across different environments.
arXiv Detail & Related papers (2021-10-18T11:35:08Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Towards a Mathematical Theory of Abstraction [0.0]
We provide a precise characterisation of what an abstraction is and, perhaps more importantly, suggest how abstractions can be learnt directly from data.
Our results have deep implications for statistical inference and machine learning and could be used to develop explicit methods for learning precise kinds of abstractions directly from data.
arXiv Detail & Related papers (2021-06-03T13:23:49Z)
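Several of the entries above (Stitch's corpus-guided top-down synthesis, DreamCoder, ReGAL) pursue the same underlying objective as ShapeCoder: find repeated structure whose extraction as a library function compresses the corpus. A greatly simplified greedy sketch of that objective follows; it is illustrative only, since the cited systems also abstract over arguments and search far more effectively.

```python
# Toy library learning: extract the single most valuable repeated subtree
# from a corpus of expression trees and rewrite every occurrence as a call
# to a new function "f0". "Value" = nodes saved = (occurrences - 1) * (size - 1).

from collections import Counter

def subtrees(t, acc):
    """Count every subtree (and leaf) occurring in t."""
    acc[t] += 1
    if isinstance(t, tuple):
        for child in t[1:]:
            subtrees(child, acc)

def size(t):
    """Number of nodes in an expression tree."""
    if not isinstance(t, tuple):
        return 1
    return 1 + sum(size(c) for c in t[1:])

def replace(t, target, name):
    """Rewrite every occurrence of `target` inside t as the symbol `name`."""
    if t == target:
        return name
    if isinstance(t, tuple):
        return (t[0],) + tuple(replace(c, target, name) for c in t[1:])
    return t

def compress(corpus):
    counts = Counter()
    for p in corpus:
        subtrees(p, counts)
    cand = max((t for t in counts if isinstance(t, tuple) and counts[t] > 1),
               key=lambda t: (counts[t] - 1) * (size(t) - 1), default=None)
    if cand is None:
        return corpus, None
    return [replace(p, cand, "f0") for p in corpus], cand

corpus = [("seq", ("pair", ("move", 1), ("turn", 90)), ("move", 2)),
          ("seq", ("pair", ("move", 1), ("turn", 90)), ("move", 3))]
rewritten, abstraction = compress(corpus)
```

Here the repeated ("pair", ("move", 1), ("turn", 90)) subtree is extracted, and both programs shrink to ("seq", "f0", ...). Real systems repeat this step, score candidates by a minimum-description-length objective over the whole library plus corpus, and, in ShapeCoder's case, use e-graphs to decide when parametric abstractions apply.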
This list is automatically generated from the titles and abstracts of the papers in this site.