Learning logic programs by explaining their failures
- URL: http://arxiv.org/abs/2102.12551v2
- Date: Wed, 24 May 2023 13:16:53 GMT
- Title: Learning logic programs by explaining their failures
- Authors: Rolf Morel, Andrew Cropper
- Abstract summary: We introduce failure explanation techniques for inductive logic programming.
If a hypothesis fails, we explain the failure in terms of failing sub-programs.
We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space.
- Score: 26.955785230358963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scientists form hypotheses and experimentally test them. If a hypothesis
fails (is refuted), scientists try to explain the failure to eliminate other
hypotheses. The more precise the failure analysis the more hypotheses can be
eliminated. Thus inspired, we introduce failure explanation techniques for
inductive logic programming. Given a hypothesis represented as a logic program,
we test it on examples. If a hypothesis fails, we explain the failure in terms
of failing sub-programs. In case a positive example fails, we identify failing
sub-programs at the granularity of literals. We introduce a failure explanation
algorithm based on analysing branches of SLD-trees. We integrate a
meta-interpreter based implementation of this algorithm with the test-stage of
the Popper ILP system. We show that fine-grained failure analysis allows for
learning fine-grained constraints on the hypothesis space. Our experimental
results show that explaining failures can drastically reduce hypothesis space
exploration and learning times.
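The core idea can be sketched in miniature: if a hypothesis clause fails on a positive example, some subset of its body literals is already responsible, and every hypothesis containing that subset can be pruned. The following is an illustrative propositional toy model (the `entails` check and data representation are invented for the sketch), not the paper's SLD-tree-based algorithm:

```python
from itertools import combinations

def entails(body, example, facts):
    # Toy evaluator: a body "succeeds" on an example if every literal,
    # paired with the example, is among the known facts.
    return all((lit, example) in facts for lit in body)

def failing_subprograms(body, pos_example, facts):
    # When the full body fails on a positive example, find the minimal
    # failing sub-bodies. Any hypothesis whose clause body contains one
    # of these literal sets must also fail, so it can be pruned.
    failing = []
    for r in range(1, len(body) + 1):
        for sub in combinations(body, r):
            if not entails(sub, pos_example, facts) and \
               not any(set(f) <= set(sub) for f in failing):
                failing.append(sub)
    return failing
```

For instance, with facts `{("even", 2), ("positive", 2)}` and the body `("even", "positive", "odd")`, the single minimal failing sub-program is `("odd",)`: the `odd` literal alone explains the failure, so every candidate clause containing it can be discarded.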
Related papers
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z) - On the Paradox of Learning to Reason from Data [86.13662838603761]
We show that BERT can attain near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space.
Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems.
arXiv Detail & Related papers (2022-05-23T17:56:48Z) - Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z) - Learning logic programs by discovering where not to search [18.27510863075184]
We introduce an approach that, before searching for a hypothesis, first discovers where not to search.
We use the given background knowledge (BK) to discover constraints on hypotheses, such as that a number cannot be both even and odd.
Our experiments on multiple domains show that our approach can substantially reduce learning times.
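As a rough illustration of that kind of BK-derived constraint (a toy check invented for this sketch, not the paper's method), one can test whether two predicates are mutually exclusive over a finite sample of the domain, then prune any rule body that uses both:

```python
def mutually_exclusive(p, q, domain, bk):
    # Hypothetical helper: predicates p and q are mutually exclusive
    # if no sampled element satisfies both, judged via background
    # knowledge bk(pred, x) -> bool.
    return all(not (bk(p, x) and bk(q, x)) for x in domain)

# Toy background knowledge for even/1, odd/1 and positive/1.
def bk(pred, x):
    return {"even": x % 2 == 0, "odd": x % 2 == 1, "positive": x > 0}[pred]
```

Here `even` and `odd` exclude each other over `range(10)`, so any body containing both literals can be pruned, whereas `even` and `positive` overlap (e.g. at 2), so no constraint is learned from that pair.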
arXiv Detail & Related papers (2022-02-20T12:32:03Z) - Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z) - Top Program Construction and Reduction for Polynomial Time Meta-Interpretive Learning [8.680676599607125]
We show how an exponentially-growing search can be replaced by the construction of a Top program.
We implement our algorithm in Prolog as the basis of a new MIL system, Louise, that constructs a Top program.
We compare Louise to the state-of-the-art search-based MIL system Metagol in experiments on grid world navigation, graph connectedness and grammar learning datasets.
arXiv Detail & Related papers (2021-01-13T13:39:21Z) - Empirically Verifying Hypotheses Using Reinforcement Learning [58.09414653169534]
This paper formulates hypothesis verification as an RL problem.
We aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false.
arXiv Detail & Related papers (2020-06-29T01:01:10Z) - Logic of Machine Learning [0.0]
I suggest that prediction requires belief in "predictability" of the underlying dependence.
I show on examples of many popular textbook learners that each of them minimizes its own version of incongruity.
arXiv Detail & Related papers (2020-06-16T20:25:41Z) - L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task (αNLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel L2R2 approach is proposed under the learning-to-rank framework.
Experiments on the ART dataset reach the state-of-the-art in the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z) - Learning programs by learning from failures [26.955785230358963]
We describe an inductive logic programming (ILP) approach called learning from failures.
In this approach, an ILP system decomposes the learning problem into three separate stages: generate, test, and constrain.
We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog.
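Schematically, the generate-test-constrain loop can be sketched as below. This is an illustrative simplification with made-up `test` and `explain` callbacks, not Popper's actual ASP/Prolog implementation:

```python
def learn(candidates, test, explain):
    # Generate-test-constrain loop (schematic). "candidates" yields
    # hypotheses; "test" says whether a hypothesis covers all positive
    # and no negative examples; "explain" turns a failure into
    # constraints (predicates over hypotheses) that prune the search.
    constraints = []
    for hyp in candidates:
        if any(c(hyp) for c in constraints):
            continue                      # pruned by an earlier failure
        if test(hyp):
            return hyp                    # consistent hypothesis found
        constraints.extend(explain(hyp))  # constrain remaining search
    return None

# Toy run: hypotheses are sets of body literals; any body mentioning
# "odd" fails, and failure explanation prunes all bodies containing it.
candidates = [frozenset({"odd"}), frozenset({"even", "odd"}), frozenset({"even"})]
best = learn(candidates,
             test=lambda h: h == frozenset({"even"}),
             explain=lambda h: [lambda g: "odd" in g] if "odd" in h else [])
```

In the toy run, the second candidate is never tested: the constraint learned from the first failure rules it out, which is exactly the saving the generate-test-constrain decomposition is after.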
arXiv Detail & Related papers (2020-05-05T14:55:07Z) - Learning large logic programs by going beyond entailment [18.27510863075184]
We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs.
Our experiments show that Brute can substantially outperform existing ILP systems in terms of predictive accuracies and learning times.
arXiv Detail & Related papers (2020-04-21T09:31:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.