Learning logic programs by finding minimal unsatisfiable subprograms
- URL: http://arxiv.org/abs/2401.16383v1
- Date: Mon, 29 Jan 2024 18:24:16 GMT
- Title: Learning logic programs by finding minimal unsatisfiable subprograms
- Authors: Andrew Cropper and Céline Hocquette
- Abstract summary: We introduce an ILP approach that identifies minimal unsatisfiable subprograms (MUSPs).
Our experiments on multiple domains, including program synthesis and game playing, show that our approach can reduce learning times by 99%.
- Score: 24.31242130341093
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The goal of inductive logic programming (ILP) is to search for a logic
program that generalises training examples and background knowledge. We
introduce an ILP approach that identifies minimal unsatisfiable subprograms
(MUSPs). We show that finding MUSPs allows us to efficiently and soundly prune
the search space. Our experiments on multiple domains, including program
synthesis and game playing, show that our approach can reduce learning times by
99%.
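To make the idea concrete, the classic deletion-based algorithm for shrinking an unsatisfiable constraint set to a minimal core carries over naturally to subprograms. The sketch below illustrates that generic algorithm only, not the paper's implementation; `is_unsatisfiable` is an assumed oracle (for example, a test that a set of rules cannot be part of any solution).

```python
# Deletion-based shrinking of an unsatisfiable program to a minimal
# unsatisfiable subprogram (MUSP). `is_unsatisfiable` is a hypothetical
# oracle; this is a generic sketch, not the paper's implementation.

def find_musp(program, is_unsatisfiable):
    """program: list of rules, known to be unsatisfiable as a whole."""
    assert is_unsatisfiable(program)
    core = list(program)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]  # try dropping rule i
        if is_unsatisfiable(candidate):
            core = candidate                 # rule i was redundant
        else:
            i += 1                           # rule i is needed; keep it
    return core                              # every remaining rule is necessary
```

Each core then acts as a reusable explanation of failure: candidates that share it fail for the same reason and can be discarded without being tested, which is presumably where the sound search-space pruning comes from.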
Related papers
- Learning logic programs by combining programs [24.31242130341093] (2022-06-01)
We introduce an approach where we learn small non-separable programs and combine them.
We implement our approach in a constraint-driven ILP system.
Our experiments on multiple domains, including game playing and program synthesis, show that our approach can drastically outperform existing approaches.
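As a loose illustration of the combination step (the actual system encodes it as a constraint problem), combining small programs can be viewed as a covering problem over the positive examples. The greedy sketch below assumes a hypothetical `covers(program)` helper returning the positive examples a program entails.

```python
# Greedy stand-in for the combination step: pick programs whose union
# covers all positive examples. `covers` is an assumed helper; the real
# system solves this with constraints rather than greedily.

def combine(candidates, positives, covers):
    uncovered = set(positives)
    combination = []
    while uncovered:
        best = max(candidates, key=lambda p: len(covers(p) & uncovered))
        gained = covers(best) & uncovered
        if not gained:
            return None          # no candidate helps; learn more programs first
        combination.append(best)
        uncovered -= gained
    return combination           # the union of these programs covers all positives
```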
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis [51.37514793318815] (2022-03-20)
We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
- Learning logic programs by discovering where not to search [18.27510863075184] (2022-02-20)
We introduce an approach that, before searching for a hypothesis, first discovers 'where not to search'.
We use the given background knowledge (BK) to discover constraints on hypotheses, such as that a number cannot be both even and odd.
Our experiments on multiple domains show that our approach can substantially reduce learning times.
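A toy rendering of that even/odd example (not the paper's mechanism): once the BK reveals that two predicates are mutually exclusive, any rule whose body requires both can be filtered out before the search runs. `body_predicates` below is a hypothetical accessor.

```python
mutually_exclusive = [{"even", "odd"}]   # constraint discovered from the BK

def violates(rule, body_predicates):
    # body_predicates: assumed helper returning the predicate symbols
    # used in a rule's body
    preds = set(body_predicates(rule))
    return any(pair <= preds for pair in mutually_exclusive)

def prune(hypotheses, body_predicates):
    # rules whose bodies demand both "even" and "odd" can never cover
    # an example, so they are discarded before the search begins
    return [h for h in hypotheses if not violates(h, body_predicates)]
```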
- Learning logic programs through divide, constrain, and conquer [22.387008072671005] (2021-09-16)
We introduce an inductive logic programming approach that combines classical divide-and-conquer search with modern constraint-driven search.
Our experiments on three domains (classification, inductive general game playing, and program synthesis) show that our approach can increase predictive accuracies and reduce learning times.
- Searching for More Efficient Dynamic Programs [61.79535031840558] (2021-09-14)
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
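Schematically (and only schematically), such a search can be as simple as hill-climbing: apply each transformation, keep any result the metric scores better, and repeat to a fixed point. Here `transformations` (semantics-preserving rewrites) and `cost` (the efficiency metric) are assumed interfaces, not the paper's actual search procedure.

```python
# Greedy hill-climbing over program transformations. Each t in
# `transformations` maps a program to an equivalent program; `cost` is
# the efficiency metric being minimised. A stand-in for the paper's search.

def improve(program, transformations, cost):
    current, best = program, cost(program)
    changed = True
    while changed:
        changed = False
        for t in transformations:
            candidate = t(current)
            c = cost(candidate)
            if c < best:
                current, best = candidate, c
                changed = True   # restart the sweep from the improved program
    return current
```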
- Enforcing Consistency in Weakly Supervised Semantic Parsing [68.2211621631765] (2021-07-13)
We explore the use of consistency between the output programs for related inputs to reduce the impact of spurious programs.
We find that a more consistent formalism leads to improved model performance even without consistency-based training.
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645] (2021-06-18)
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094] (2020-07-28)
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model that, conditioned on a set of input-output examples, prioritizes compositions of intermediate values during search.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
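A stripped-down sketch of learning-guided bottom-up search in that spirit (not BUSTLE itself): intermediate values sit in a priority queue ordered by a learned score, and the highest-priority values are composed first. `ops` (binary operations) and `model_score(value, target)` are assumed interfaces, and a real synthesizer would also record the expression that produced each value.

```python
import heapq
import itertools

def bottom_up_search(inputs, target, ops, model_score, budget=10_000):
    # Max-heap via negated scores; a counter breaks ties between values.
    tie = itertools.count()
    heap = [(-model_score(v, target), next(tie), v) for v in inputs]
    heapq.heapify(heap)
    seen = set(inputs)
    explored = []                                   # values expanded so far
    for _ in range(budget):
        if not heap:
            return None
        _, _, v = heapq.heappop(heap)
        explored.append(v)
        for op, w in itertools.product(ops, explored):
            try:
                new = op(v, w)                      # compose v with a known value
            except Exception:
                continue                            # partial operations may fail
            if new == target:
                return new                          # matches the output examples
            if new not in seen:
                seen.add(new)
                heapq.heappush(heap, (-model_score(new, target), next(tie), new))
    return None
```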
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869] (2020-07-07)
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input/output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
- Learning large logic programs by going beyond entailment [18.27510863075184] (2020-04-21)
The idea is to go beyond entailment by scoring hypotheses with an example-dependent loss. We implement this in Brute, a new ILP system that uses best-first search, guided by the loss function, to incrementally build programs.
Our experiments show that Brute can substantially outperform existing ILP systems in terms of predictive accuracies and learning times.
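Schematically, the loop is an ordinary best-first search over a frontier of partial programs ordered by the loss. Here `refine` (the ways a program can be extended) and `loss` are assumed interfaces, and the sketch glosses over everything that makes Brute effective in practice.

```python
import heapq
import itertools

def best_first(initial, examples, loss, refine, budget=100_000):
    tie = itertools.count()                 # breaks ties between programs
    frontier = [(loss(initial, examples), next(tie), initial)]
    for _ in range(budget):
        if not frontier:
            return None
        score, _, program = heapq.heappop(frontier)
        if score == 0:                      # zero loss: all examples handled
            return program
        for child in refine(program):       # incrementally grow the program
            heapq.heappush(frontier, (loss(child, examples), next(tie), child))
    return None
```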
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of any of the information shown and is not responsible for any consequences arising from its use.