Knowledge Refactoring for Inductive Program Synthesis
- URL: http://arxiv.org/abs/2004.09931v3
- Date: Tue, 24 Nov 2020 08:23:31 GMT
- Title: Knowledge Refactoring for Inductive Program Synthesis
- Authors: Sebastijan Dumancic and Tias Guns and Andrew Cropper
- Abstract summary: Humans constantly restructure knowledge to use it more efficiently.
Our goal is to give a machine learning system similar abilities so that it can learn more efficiently.
- Score: 37.54933305877746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans constantly restructure knowledge to use it more efficiently. Our goal
is to give a machine learning system similar abilities so that it can learn
more efficiently. We introduce the \textit{knowledge refactoring} problem,
where the goal is to restructure a learner's knowledge base to reduce its size
and to minimise redundancy in it. We focus on inductive logic programming,
where the knowledge base is a logic program. We introduce Knorf, a system which
solves the refactoring problem using constraint optimisation. We evaluate our
approach on two program induction domains: real-world string transformations
and building Lego structures. Our experiments show that learning from
refactored knowledge can improve predictive accuracies fourfold and reduce
learning times by half.
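The core idea of knowledge refactoring can be illustrated with a minimal sketch. Note that the `refactor_once` helper, the `aux` predicate name, and the greedy pair extraction below are all illustrative assumptions: Knorf itself formulates refactoring as a constraint-optimisation problem over logic programs rather than a greedy search.

```python
from collections import Counter
from itertools import combinations

def refactor_once(rules):
    """rules maps a rule head to a frozenset of body literals.
    Greedily extract the most frequently shared pair of literals
    into a new auxiliary predicate (a toy stand-in for Knorf's
    constraint-optimisation search)."""
    pair_counts = Counter(
        frozenset(pair)
        for body in rules.values()
        for pair in combinations(sorted(body), 2)
    )
    if not pair_counts:
        return rules
    best, count = pair_counts.most_common(1)[0]
    if count < 2:  # no redundancy worth extracting
        return rules
    refactored = {"aux": best}  # new predicate defined by the shared sub-body
    for head, body in rules.items():
        refactored[head] = (body - best) | {"aux"} if best <= body else body
    return refactored

# Three rules share the sub-body {p, q}; extracting it shrinks the program.
kb = {
    "r1": frozenset({"p", "q", "c1"}),
    "r2": frozenset({"p", "q", "c2"}),
    "r3": frozenset({"p", "q", "c3"}),
}
size = lambda prog: sum(len(body) for body in prog.values())
new_kb = refactor_once(kb)
```

Here the original knowledge base has nine body literals; after extraction it has eight (two for `aux` plus two per rewritten rule), so the refactored program is both smaller and less redundant.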
Related papers
- Scalable Knowledge Refactoring using Constrained Optimisation [18.706442683121615]
Our empirical results on multiple domains show that our approach can refactor programs quicker and with more compression than the previous state-of-the-art approach, sometimes by 60%.
arXiv Detail & Related papers (2024-08-21T11:12:42Z) - ReGAL: Refactoring Programs to Discover Generalizable Abstractions [59.05769810380928]
Generalizable Abstraction Learning (ReGAL) is a method for learning a library of reusable functions via code refactorization.
We find that the shared function libraries discovered by ReGAL make programs easier to predict across diverse domains.
For CodeLlama-13B, ReGAL results in absolute accuracy increases of 11.5% on LOGO, 26.1% on date understanding, and 8.1% on TextCraft, outperforming GPT-3.5 in two of three domains.
arXiv Detail & Related papers (2024-01-29T18:45:30Z) - Fixing Your Own Smells: Adding a Mistake-Based Familiarisation Step When Teaching Code Refactoring [2.021502591596062]
Students must first complete a programming exercise to ensure they will produce a code smell.
This simple intervention is based on the idea that learning is easier if students are familiar with the code.
We conducted a study with 35 novice undergraduates in which they completed various exercises, alternately taught using a traditional approach and our 'mistake-based' approach.
arXiv Detail & Related papers (2024-01-02T03:39:19Z) - When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose complexity-impacted reasoning score (CIRS) to measure correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z) - Learning logic programs by discovering higher-order abstractions [20.57989636488575]
We introduce the higher-order optimisation problem.
The goal is to compress a logic program by discovering higher-order abstractions.
We implement our approach in Stevie, which formulates the problem as a constraint problem.
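The compression idea behind higher-order abstraction can be illustrated with a toy sketch. The program encoding and the `map` abstraction below are illustrative assumptions, not Stevie's actual representation: two recursive clauses share a skeleton that differs only in the element-level operation, and a single higher-order definition captures that skeleton once.

```python
# Two first-order recursive definitions with the same structure.
programs = {
    "double_all": ["double_all([],[])",
                   "double_all([H|T],[H2|T2]) :- double(H,H2), double_all(T,T2)"],
    "upcase_all": ["upcase_all([],[])",
                   "upcase_all([H|T],[H2|T2]) :- upcase(H,H2), upcase_all(T,T2)"],
}
# After abstraction: one generic higher-order map plus two one-line instantiations.
abstracted = {
    "map": ["map(_,[],[])",
            "map(F,[H|T],[H2|T2]) :- call(F,H,H2), map(F,T,T2)"],
    "double_all": ["double_all(A,B) :- map(double,A,B)"],
    "upcase_all": ["upcase_all(A,B) :- map(upcase,A,B)"],
}
# Measure program size as total clause length in characters.
size = lambda prog: sum(len(clause) for clauses in prog.values() for clause in clauses)
```

The abstracted program is strictly smaller, and the saving grows with each additional definition that reuses `map`.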
arXiv Detail & Related papers (2023-08-16T12:50:10Z) - Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose a framework to enhance existing models (backbones) in a principled way by explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Constraint-driven multi-task learning [18.27510863075184]
In this project, we extend the Popper ILP system to make use of multi-task learning.
We introduce constraint preservation, a technique that improves overall performance for all approaches.
arXiv Detail & Related papers (2022-08-24T16:53:54Z) - Refining neural network predictions using background knowledge [68.35246878394702]
We show that we can use logical background knowledge in a learning system to compensate for a lack of labeled training data.
We introduce differentiable refinement functions that find a corrected prediction close to the original prediction.
This algorithm finds optimal refinements on complex SAT formulas in significantly fewer iterations and frequently finds solutions where gradient descent cannot.
arXiv Detail & Related papers (2022-06-10T10:17:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.