Top Program Construction and Reduction for polynomial time
Meta-Interpretive Learning
- URL: http://arxiv.org/abs/2101.05050v1
- Date: Wed, 13 Jan 2021 13:39:21 GMT
- Title: Top Program Construction and Reduction for polynomial time
Meta-Interpretive Learning
- Authors: Stassa Patsantzis, Stephen H. Muggleton
- Abstract summary: We show how an exponentially-growing search can be replaced by the construction of a Top program.
We implement our algorithm in Prolog as the basis of a new MIL system, Louise, that constructs a Top program.
We compare Louise to the state-of-the-art search-based MIL system Metagol in experiments on grid world navigation, graph connectedness and grammar learning datasets.
- Score: 8.680676599607125
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Meta-Interpretive Learners, like most ILP systems, learn by searching for a
correct hypothesis in the hypothesis space, the powerset of all constructible
clauses. We show how this exponentially-growing search can be replaced by the
construction of a Top program: the set of clauses in all correct hypotheses
that is itself a correct hypothesis. We give an algorithm for Top program
construction and show that it constructs a correct Top program in polynomial
time and from a finite number of examples. We implement our algorithm in Prolog
as the basis of a new MIL system, Louise, that constructs a Top program and
then reduces it by removing redundant clauses. We compare Louise to the
state-of-the-art search-based MIL system Metagol in experiments on grid world
navigation, graph connectedness and grammar learning datasets and find that
Louise improves on Metagol's predictive accuracy when the hypothesis space and
the target theory are both large, or when the hypothesis space does not include
a correct hypothesis because of "classification noise" in the form of
mislabelled examples. When the hypothesis space or the target theory are small,
Louise and Metagol perform equally well.
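The abstract's key move can be illustrated with a toy sketch (a hypothetical Python simplification, not Louise's actual Prolog implementation; the predicates, examples, and single chain metarule here are invented for illustration): instantiate the metarule against the background knowledge, then keep every clause that covers at least one positive example and no negative example. The union of surviving clauses is the Top program, and no search over clause subsets is needed.

```python
from itertools import product

# Background knowledge: dyadic predicates over a small family domain.
BK = {
    "parent": {("alice", "bob"), ("bob", "carol"), ("carol", "dan")},
    "married": {("alice", "ed")},
}

# Positive and negative examples of the target predicate grandparent/2.
pos = {("alice", "carol"), ("bob", "dan")}
neg = {("alice", "ed"), ("carol", "alice")}

# All constants mentioned in the background knowledge.
objects = {c for rel in BK.values() for tup in rel for c in tup}

def covers(q, r, example):
    """True if the chain clause grandparent(X,Y) :- q(X,Z), r(Z,Y) proves example."""
    x, y = example
    return any((x, z) in BK[q] and (z, y) in BK[r] for z in objects)

# Instantiate the chain metarule P(X,Y) :- Q(X,Z), R(Z,Y) with background
# predicates; a clause joins the Top program iff it covers some positive
# example and no negative example -- no search over subsets of clauses.
top_program = [
    ("grandparent", q, r)
    for q, r in product(BK, repeat=2)
    if any(covers(q, r, e) for e in pos)
    and not any(covers(q, r, e) for e in neg)
]
print(top_program)  # -> [('grandparent', 'parent', 'parent')]
```

The candidate set grows with the number of predicate symbols and metarules, not with the powerset of clauses, which is the intuition behind the polynomial-time claim.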
Related papers
- Causal Discovery over High-Dimensional Structured Hypothesis Spaces with Causal Graph Partitioning [0.742246975663779]
Causal discovery allows us to infer mechanisms as sets of cause and effect relationships in a generalized way.
We show our algorithm achieves comparable accuracy and a faster time to solution for biologically-tuned synthetic networks and networks of up to 10^4 variables.
arXiv Detail & Related papers (2024-06-10T15:08:14Z)
- ATG: Benchmarking Automated Theorem Generation for Generative Language Models [83.93978859348313]
Humans can develop new theorems to explore broader and more complex mathematical results.
Current generative language models (LMs) have achieved significant improvement in automatically proving theorems.
This paper proposes an Automated Theorem Generation benchmark that evaluates whether an agent can automatically generate valuable (and possibly brand new) theorems.
arXiv Detail & Related papers (2024-05-05T02:06:37Z)
- A Geometric Notion of Causal Probing [91.14470073637236]
In a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that LEACE returns a one-dimensional subspace containing roughly half of total concept information.
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
- TheoremQA: A Theorem-driven Question Answering dataset [100.39878559382694]
TheoremQA is curated by domain experts and contains 800 high-quality questions covering 350 theorems.
GPT-4's capabilities to solve these problems are unparalleled, achieving an accuracy of 51% with Program-of-Thoughts Prompting.
arXiv Detail & Related papers (2023-05-21T17:51:35Z)
- Learning logic programs by discovering where not to search [18.27510863075184]
We introduce an approach that, before searching for a hypothesis, first discovers 'where not to search'.
We use given BK to discover constraints on hypotheses, such as that a number cannot be both even and odd.
Our experiments on multiple domains show that our approach can substantially reduce learning times.
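The even/odd constraint above can be sketched in a few lines (a toy Python illustration with invented literal names, not the paper's actual mechanism, which derives such constraints automatically from the given BK): once we know nothing is both even and odd, any candidate clause body containing both literals can be pruned before the search begins.

```python
from itertools import combinations

# Candidate body literals for some target predicate (hypothetical names).
literals = ["even(X)", "odd(X)", "succ(X,Y)", "zero(X)"]

def violates_constraint(body):
    """Constraint discovered from BK: nothing is both even and odd,
    so a body containing both literals can never succeed."""
    return "even(X)" in body and "odd(X)" in body

# Prune the hypothesis space before searching it.
candidates = list(combinations(literals, 2))
pruned = [b for b in candidates if not violates_constraint(b)]
print(len(candidates), len(pruned))  # -> 6 5
```

Each discovered constraint removes whole regions of the hypothesis space at once, which is why learning times shrink.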
arXiv Detail & Related papers (2022-02-20T12:32:03Z)
- Rationales for Sequential Predictions [117.93025782838123]
Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain.
We consider model explanations through rationales, subsets of context that can explain individual model predictions.
We propose an efficient greedy algorithm to approximate this objective.
arXiv Detail & Related papers (2021-09-14T01:25:15Z)
- Learning logic programs by explaining their failures [26.955785230358963]
We introduce failure explanation techniques for inductive logic programming.
If a hypothesis fails, we explain the failure in terms of failing sub-programs.
We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space.
arXiv Detail & Related papers (2021-02-18T14:32:20Z)
- Empirically Verifying Hypotheses Using Reinforcement Learning [58.09414653169534]
This paper formulates hypothesis verification as an RL problem.
We aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false.
arXiv Detail & Related papers (2020-06-29T01:01:10Z)
- L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task (αNLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel L2R2 approach is proposed under the learning-to-rank framework.
Experiments on the ART dataset reach state-of-the-art performance on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z)
- Learning programs by learning from failures [26.955785230358963]
We describe an inductive logic programming (ILP) approach called learning from failures.
In this approach, an ILP system decomposes the learning problem into three separate stages: generate, test, and constrain.
We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog.
arXiv Detail & Related papers (2020-05-05T14:55:07Z)
- Learning large logic programs by going beyond entailment [18.27510863075184]
We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs.
Our experiments show that Brute can substantially outperform existing ILP systems in terms of predictive accuracies and learning times.
arXiv Detail & Related papers (2020-04-21T09:31:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.