Learning abstract structure for drawing by efficient motor program induction
- URL: http://arxiv.org/abs/2008.03519v1
- Date: Sat, 8 Aug 2020 13:31:14 GMT
- Title: Learning abstract structure for drawing by efficient motor program induction
- Authors: Lucas Y. Tian, Kevin Ellis, Marta Kryven, Joshua B. Tenenbaum
- Abstract summary: We develop a naturalistic drawing task to study how humans rapidly acquire structured prior knowledge.
We show that people spontaneously learn abstract drawing procedures that support generalization.
We propose a model of how learners can discover these reusable drawing programs.
- Score: 52.13961975752941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans flexibly solve new problems that differ qualitatively from those they
were trained on. This ability to generalize is supported by learned concepts
that capture structure common across different problems. Here we develop a
naturalistic drawing task to study how humans rapidly acquire structured prior
knowledge. The task requires drawing visual objects that share underlying
structure, based on a set of composable geometric rules. We show that people
spontaneously learn abstract drawing procedures that support generalization,
and propose a model of how learners can discover these reusable drawing
programs. Trained in the same setting as humans, and constrained to produce
efficient motor actions, this model discovers new drawing routines that
transfer to test objects and resemble learned features of human sequences.
These results suggest that two principles guiding motor program induction in
the model - abstraction (general programs that ignore object-specific details)
and compositionality (recombining previously learned programs) - are key for
explaining how humans learn structured internal representations that guide
flexible reasoning and learning.
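The two principles named above, abstraction (programs that ignore object-specific details such as position) and compositionality (recombining previously learned sub-programs), can be illustrated with a minimal sketch. This is not the authors' model; all names (`line`, `concat`, `translate`, `repeat`) are hypothetical, and the sketch only shows how reusable drawing routines might be built from composable primitives:

```python
# Hypothetical sketch of compositional drawing programs.
# A "program" is a list of pen strokes; a stroke is a (start, end) pair.

def line(start, end):
    """A primitive: a single pen stroke from start to end."""
    return [(start, end)]

def concat(*programs):
    """Compositionality: chain sub-programs into one sequence."""
    return [stroke for p in programs for stroke in p]

def translate(program, dx, dy):
    """Abstraction: reuse a program at a new location, ignoring
    its object-specific position."""
    return [((x1 + dx, y1 + dy), (x2 + dx, y2 + dy))
            for ((x1, y1), (x2, y2)) in program]

def repeat(program, n, dx, dy):
    """A reusable routine: repeat a sub-program n times with an offset,
    the kind of structure that transfers to new objects."""
    return concat(*(translate(program, i * dx, i * dy) for i in range(n)))

# A vertical segment, reused to draw a row of three segments:
segment = line((0, 0), (0, 1))
row = repeat(segment, 3, 1, 0)
```

Once a routine like `repeat` is learned, it can generate test objects never seen during training (e.g., a row of five segments via `repeat(segment, 5, 1, 0)`), which is the sense of generalization the abstract describes.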
Related papers
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- Compositional diversity in visual concept learning [18.907108368038216]
Humans leverage compositionality to efficiently learn new concepts, understanding how familiar parts can combine together to form novel objects.
Here, we study how people classify and generate "alien figures" with rich relational structure.
We develop a Bayesian program induction model which searches for the best programs for generating the candidate visual figures.
arXiv Detail & Related papers (2023-05-30T19:30:50Z)
- Learning to Infer 3D Shape Programs with Differentiable Renderer [0.0]
We propose an analytical yet differentiable executor that is more faithful and controllable in interpreting shape programs.
These facilitate the generator's learning when ground truth programs are not available.
Preliminary experiments on using it for adaptation illustrate these advantages of the proposed module.
arXiv Detail & Related papers (2022-06-25T15:44:05Z)
- Constellation: Learning relational abstractions over objects for compositional imagination [64.99658940906917]
We introduce Constellation, a network that learns relational abstractions of static visual scenes.
This work is a first step in the explicit representation of visual relationships and using them for complex cognitive procedures.
arXiv Detail & Related papers (2021-07-23T11:59:40Z)
- A Self-Supervised Framework for Function Learning and Extrapolation [1.9374999427973014]
We present a framework for how a learner may acquire representations that support generalization.
We show the resulting representations outperform those from other models for unsupervised time series learning.
arXiv Detail & Related papers (2021-06-14T12:41:03Z)
- Flexible Compositional Learning of Structured Visual Concepts [17.665938343060112]
We study how people learn different types of visual compositions, using abstract visual forms with rich relational structure.
We find that people can make meaningful compositional generalizations from just a few examples in a variety of scenarios.
Unlike past work examining special cases of compositionality, our work shows how a single computational approach can account for many distinct types of compositional generalization.
arXiv Detail & Related papers (2021-05-20T15:48:05Z)
- Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [50.22269760171131]
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods.
This text is concerned with exposing pre-defined regularities through unified geometric principles.
It provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers.
arXiv Detail & Related papers (2021-04-27T21:09:51Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving [104.79156980475686]
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue there should be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- Learning intuitive physics and one-shot imitation using state-action-prediction self-organizing maps [0.0]
Humans learn by exploration and imitation, build causal models of the world, and use both to flexibly solve new tasks.
We suggest a simple but effective unsupervised model which develops such characteristics.
We demonstrate its performance on a set of several related, but different one-shot imitation tasks, which the agent flexibly solves in an active inference style.
arXiv Detail & Related papers (2020-07-03T12:29:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.