DreamCoder: Growing generalizable, interpretable knowledge with
wake-sleep Bayesian program learning
- URL: http://arxiv.org/abs/2006.08381v1
- Date: Mon, 15 Jun 2020 13:06:29 GMT
- Title: DreamCoder: Growing generalizable, interpretable knowledge with
wake-sleep Bayesian program learning
- Authors: Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc
Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum
- Abstract summary: We present DreamCoder, a system that learns to solve problems by writing programs.
It builds expertise by creating programming languages for expressing domain concepts, together with neural networks that guide the search for programs within those languages.
A ``wake-sleep'' learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems.
- Score: 47.910312960048174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Expert problem-solving is driven by powerful languages for thinking about
problems and their solutions. Acquiring expertise means learning these
languages -- systems of concepts, alongside the skills to use them. We present
DreamCoder, a system that learns to solve problems by writing programs. It
builds expertise by creating programming languages for expressing domain
concepts, together with neural networks to guide the search for programs within
these languages. A ``wake-sleep'' learning algorithm alternately extends the
language with new symbolic abstractions and trains the neural network on
imagined and replayed problems. DreamCoder solves both classic inductive
programming tasks and creative tasks such as drawing pictures and building
scenes. It rediscovers the basics of modern functional programming, vector
algebra and classical physics, including Newton's and Coulomb's laws. Concepts
are built compositionally from those learned earlier, yielding multi-layered
symbolic representations that are interpretable and transferable to new tasks,
while still growing scalably and flexibly with experience.
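To make the wake-sleep structure above concrete, here is a toy, self-contained Python sketch. It is not DreamCoder's implementation: the arithmetic DSL, the brute-force enumeration (standing in for neurally guided search), and the adjacent-pair compression (standing in for Bayesian library learning) are all invented for illustration, and the recognition network and dream phase are omitted.

from collections import Counter
from itertools import product

# Toy DSL: each primitive maps an int to an int; a program is a tuple
# of primitive names applied left to right, starting from the input 1.
LIBRARY = {"inc": lambda x: x + 1, "double": lambda x: x * 2}

def run(program, x=1):
    for name in program:
        x = LIBRARY[name](x)
    return x

def wake(tasks, max_len=4):
    """Wake: enumerate programs, keep the first that hits each target."""
    solutions = {}
    for target in tasks:
        for n in range(1, max_len + 1):
            hit = next((p for p in product(LIBRARY, repeat=n)
                        if run(p) == target), None)
            if hit:
                solutions[target] = hit
                break
    return solutions

def abstraction_sleep(solutions):
    """Sleep: name the most frequent adjacent primitive pair as a new
    library routine, so later searches can reuse it as one step."""
    pairs = Counter(p for prog in solutions.values()
                    for p in zip(prog, prog[1:]))
    if pairs:
        (a, b), _ = pairs.most_common(1)[0]
        LIBRARY[a + "_" + b] = lambda x, a=a, b=b: LIBRARY[b](LIBRARY[a](x))

tasks = [3, 4, 6, 8]
for _ in range(2):           # alternate wake and sleep phases
    solved = wake(tasks)
    abstraction_sleep(solved)
print(sorted(LIBRARY))       # now includes learned abstractions
print(solved)                # e.g. 4 is solved by the one-step ("inc_double",)

Running this, the second wake phase solves task 4 with a single learned primitive, illustrating how compression turns previously long solutions into short, easily searchable ones.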
Related papers
- Neuromorphic Programming: Emerging Directions for Brain-Inspired Hardware [0.0]
Currently, neuromorphic hardware often relies on machine learning methods adapted from deep learning.
Neuromorphic computers have potential far beyond deep learning if we can only harness their energy efficiency and full computational power.
This paper presents a conceptual analysis of programming within the context of neuromorphic computing.
arXiv Detail & Related papers (2024-10-15T10:08:15Z)
- CodeGRAG: Bridging the Gap between Natural Language and Programming Language via Graphical Retrieval Augmented Generation [58.84212778960507]
We propose CodeGRAG, a Graphical Retrieval Augmented Code Generation framework to enhance the performance of LLMs.
CodeGRAG builds a graphical view of code blocks from their control flow and data flow to bridge the gap between programming languages and natural language.
Various experiments and ablations on four datasets, covering both C++ and Python, validate the hard meta-graph prompt, the soft prompting technique, and the effectiveness of the objectives for the pretrained GNN expert.
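As a rough illustration of what a control-flow-plus-data-flow graphical view of a code block might look like (a toy sketch using Python's ast module; the representation is invented here and is not CodeGRAG's actual graph construction):

import ast

def graph_view(source):
    """Toy graph of a straight-line code block: nodes are statement
    indices, edges are sequential control flow plus def-use data flow."""
    stmts = ast.parse(source).body
    edges, last_def = [], {}
    for i, stmt in enumerate(stmts):
        if i > 0:
            edges.append((i - 1, i, "control"))
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Load) and node.id in last_def:
                    edges.append((last_def[node.id], i, "data:" + node.id))
                elif isinstance(node.ctx, ast.Store):
                    last_def[node.id] = i
    return edges

print(graph_view("x = 1\ny = x + 2\nprint(y)"))
# [(0, 1, 'control'), (0, 1, 'data:x'), (1, 2, 'control'), (1, 2, 'data:y')]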
arXiv Detail & Related papers (2024-05-03T02:48:55Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- Linguacodus: A Synergistic Framework for Transformative Code Generation in Machine Learning Pipelines [0.0]
We introduce a dynamic pipeline that transforms natural language task descriptions into code through high-level data-shaping instructions.
This paper details the fine-tuning process and sheds light on how natural language descriptions can be translated into functional code.
We propose an algorithm capable of transforming a natural description of an ML task into code with minimal human interaction.
arXiv Detail & Related papers (2024-03-18T08:58:47Z)
- PwR: Exploring the Role of Representations in Conversational Programming [17.838776812138626]
We introduce Programming with Representations (PwR), an approach that uses representations to convey the system's understanding back to the user in natural language.
We find that representations significantly improved understandability and instilled a sense of agency among our participants.
arXiv Detail & Related papers (2023-09-18T05:38:23Z)
- Promptly: Using Prompt Problems to Teach Learners How to Effectively Utilize AI Code Generators [5.458849730200646]
This paper introduces a novel pedagogical concept known as a ``Prompt Problem''.
A Prompt Problem challenges a student to create a natural language prompt that leads an LLM to produce the correct code for a specific problem.
We report empirical findings from a field study in which Promptly was deployed in a first-year Python programming course.
arXiv Detail & Related papers (2023-07-31T01:46:42Z)
- Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models [49.39276272693035]
Large-scale pre-trained language models have shown remarkable memorizing ability.
Vanilla neural networks without pre-training have long been observed to suffer from catastrophic forgetting.
We find that 1) vanilla language models are forgetful; 2) pre-training leads to retentive language models; and 3) knowledge relevance and diversification significantly influence memory formation.
arXiv Detail & Related papers (2023-05-16T03:50:38Z)
- Improving Compositionality of Neural Networks by Decoding Representations to Inputs [83.97012077202882]
We bridge the benefits of traditional and deep learning programs by jointly training a generative model to constrain neural network activations to "decode" back to inputs.
We demonstrate applications of decodable representations to out-of-distribution detection, adversarial examples, calibration, and fairness.
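A minimal sketch of the idea (in PyTorch, with invented layer sizes and loss weighting; not the paper's architecture): train a classifier while a decoder must reconstruct the input from the hidden activations, so the representation is constrained to stay decodable.

import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
classifier = nn.Linear(64, 10)
decoder = nn.Linear(64, 784)   # "decodes" activations back to inputs

params = (list(encoder.parameters()) + list(classifier.parameters())
          + list(decoder.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(x, y, lam=0.5):
    h = encoder(x)                                   # shared representation
    loss = (F.cross_entropy(classifier(h), y)        # task loss
            + lam * F.mse_loss(decoder(h), x))       # decodability constraint
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(train_step(x, y))

At test time, a large reconstruction error for decoder(encoder(x)) can flag out-of-distribution or adversarial inputs, one of the applications listed above.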
arXiv Detail & Related papers (2021-06-01T20:07:16Z)
- Neurocoder: Learning General-Purpose Computation Using Stored Neural Programs [64.56890245622822]
Neurocoder is an entirely new class of general-purpose conditional computational machines.
It "codes" itself in a data-responsive way by composing relevant programs from a set of shareable, modular programs.
We show new capacity to learn modular programs, handle severe pattern shifts and remember old programs as new ones are learnt.
arXiv Detail & Related papers (2020-09-24T01:39:16Z)
- Turning 30: New Ideas in Inductive Logic Programming [18.581514902689346]
Inductive logic programming is a form of machine learning that induces logic programs from data; a toy illustration appears after this entry.
We focus on new methods for learning programs that generalise from few examples.
We also discuss directions for future research in inductive logic programming.
arXiv Detail & Related papers (2020-02-25T16:23:11Z)
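As the toy illustration promised above (with invented facts and a deliberately tiny hypothesis space of two-literal chain rules; not a method from the paper): given background facts and positive/negative examples, ILP searches for a rule consistent with the examples.

from itertools import product

background = {
    "parent": {("alice", "bob"), ("bob", "carol"), ("bob", "dave")},
    "married": {("alice", "ed")},
}
positives = {("alice", "carol"), ("alice", "dave")}   # grandparent pairs
negatives = {("alice", "bob"), ("bob", "carol")}
constants = {c for facts in background.values() for fact in facts for c in fact}

def covers(p, q, pair):
    """Does the rule head(X,Y) :- p(X,Z), q(Z,Y) entail the pair?"""
    x, y = pair
    return any((x, z) in background[p] and (z, y) in background[q]
               for z in constants)

# Enumerate candidate rule bodies; keep those covering every positive
# example and no negative example.
for p, q in product(background, repeat=2):
    if (all(covers(p, q, e) for e in positives)
            and not any(covers(p, q, e) for e in negatives)):
        print("head(X,Y) :- %s(X,Z), %s(Z,Y)" % (p, q))
# prints: head(X,Y) :- parent(X,Z), parent(Z,Y)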
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.