Learning programs by learning from failures
- URL: http://arxiv.org/abs/2005.02259v3
- Date: Wed, 25 Nov 2020 08:45:50 GMT
- Title: Learning programs by learning from failures
- Authors: Andrew Cropper and Rolf Morel
- Abstract summary: We describe an inductive logic programming (ILP) approach called learning from failures.
In this approach, an ILP system decomposes the learning problem into three separate stages: generate, test, and constrain.
We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog.
- Score: 26.955785230358963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe an inductive logic programming (ILP) approach called learning
from failures. In this approach, an ILP system (the learner) decomposes the
learning problem into three separate stages: generate, test, and constrain. In
the generate stage, the learner generates a hypothesis (a logic program) that
satisfies a set of hypothesis constraints (constraints on the syntactic form of
hypotheses). In the test stage, the learner tests the hypothesis against
training examples. A hypothesis fails when it does not entail all the positive
examples or entails a negative example. If a hypothesis fails, then, in the
constrain stage, the learner learns constraints from the failed hypothesis to
prune the hypothesis space, i.e. to constrain subsequent hypothesis generation.
For instance, if a hypothesis is too general (entails a negative example), the
constraints prune generalisations of the hypothesis. If a hypothesis is too
specific (does not entail all the positive examples), the constraints prune
specialisations of the hypothesis. This loop repeats until either (i) the
learner finds a hypothesis that entails all the positive and none of the
negative examples, or (ii) there are no more hypotheses to test. We introduce
Popper, an ILP system that implements this approach by combining answer set
programming and Prolog. Popper supports infinite problem domains, reasoning
about lists and numbers, learning textually minimal programs, and learning
recursive programs. Our experimental results on three domains (toy game
problems, robot strategies, and list transformations) show that (i) constraints
drastically improve learning performance, and (ii) Popper can outperform
existing ILP systems, both in terms of predictive accuracies and learning
times.
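To make the generate, test, and constrain loop concrete, here is a minimal Python sketch of the control flow the abstract describes. It is not Popper itself (Popper generates programs with answer set programming and tests them with Prolog); the rule pool, the toy testing oracle, and the subset-based notions of generalisation and specialisation are illustrative assumptions.

```python
# Sketch of the learning-from-failures (LFF) loop: generate, test, constrain.
# A hypothesis is a frozenset of rule identifiers; "generalisation" means
# superset and "specialisation" means subset, purely for illustration.
from itertools import chain, combinations

RULES = ["r1", "r2", "r3"]  # hypothetical rule pool (illustrative assumption)

def all_hypotheses(rules):
    """Enumerate every non-empty rule subset as a candidate hypothesis."""
    subsets = chain.from_iterable(
        combinations(rules, k) for k in range(1, len(rules) + 1))
    return [frozenset(s) for s in subsets]

def test(hypothesis):
    """Toy oracle standing in for Prolog testing against the examples.
    Returns (entails_all_positives, entails_a_negative)."""
    # Illustrative assumption: the positives need both r1 and r2,
    # and r3 wrongly covers a negative example.
    return {"r1", "r2"} <= hypothesis, "r3" in hypothesis

def learn(rules):
    constraints = []                      # learned pruning predicates
    for h in all_hypotheses(rules):       # generate (Popper uses ASP here)
        if any(c(h) for c in constraints):
            continue                      # pruned without testing
        complete, inconsistent = test(h)  # test
        if complete and not inconsistent:
            return h                      # entails all pos, no neg: done
        # constrain: learn constraints from the failed hypothesis
        if inconsistent:                  # too general
            constraints.append(lambda g, bad=h: bad <= g)  # prune generalisations
        if not complete:                  # too specific
            constraints.append(lambda s, bad=h: s <= bad)  # prune specialisations
    return None                           # hypothesis space exhausted

print(learn(RULES))                       # frozenset({'r1', 'r2'})
```

On this toy instance the singleton hypotheses fail as too specific, {'r3'} additionally covers a negative example, and the loop returns {'r1', 'r2'}; had that hypothesis also failed, the learned constraints would have pruned generalisations such as {'r1', 'r3'} without testing them.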
Related papers
- Computable learning of natural hypothesis classes [0.0]
Examples have recently been given of hypothesis classes which are PAC learnable but not computably PAC learnable.
We use the on-a-cone machinery from computability theory to prove that, under mild assumptions such as that the hypothesis class is computably listable, any natural hypothesis class which is learnable must be computably learnable.
arXiv Detail & Related papers (2024-07-23T17:26:38Z) - Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
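For orientation, the worst-case objective this entry refers to is standardly written as below; the notation (per-distribution risk $R_i$, loss $\ell$, capacity measure $d$) is the usual formulation, assumed here rather than quoted from the paper.

```latex
% Multi-distribution learning: minimise the worst-case risk over
% k data distributions D_1, ..., D_k (standard formulation, assumed):
\[
  \min_{h \in \mathcal{H}} \; \max_{1 \le i \le k} R_i(h),
  \qquad
  R_i(h) = \mathbb{E}_{(x,y) \sim \mathcal{D}_i}\!\left[\ell(h(x), y)\right],
\]
% with the quoted sample complexity on the order of (d + k)/\varepsilon^2
% for an \varepsilon-optimal randomized hypothesis, d being a capacity
% measure of the hypothesis class (e.g. its VC dimension).
```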
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy [60.18632773935895]
Spreading probability mass across multiple surface forms with identical meaning is thought to cause an underestimation of a model's true performance.
We propose a mathematical formalism for surface form competition (SFC) which allows us to quantify and bound its impact for the first time.
We identify a simple method for reducing it: increasing probability mass on the given answer choices by (a) including them in the prompt and (b) using in-context learning with even just one example.
arXiv Detail & Related papers (2023-05-24T00:27:00Z) - Learning logic programs by discovering where not to search [18.27510863075184]
We introduce an approach that, before searching for a hypothesis, first discovers 'where not to search'.
We use the given background knowledge (BK) to discover constraints on hypotheses, such as that a number cannot be both even and odd.
Our experiments on multiple domains show that our approach can substantially reduce learning times.
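A hedged sketch of that idea, under the assumption that the BK-derived constraints can be represented as pairs of mutually unsatisfiable body literals; the facts and hypothesis encoding below are illustrative, not the paper's actual representation.

```python
# Before the search, use background knowledge (BK) to find pairs of body
# literals that can never hold together (e.g. even(X) and odd(X)), then
# discard any candidate rule whose body contains such a pair.

# Literal pairs proved unsatisfiable from BK ahead of the search (assumed):
UNSAT_PAIRS = {frozenset({"even(X)", "odd(X)"}),
               frozenset({"empty(X)", "head(X,Y)"})}

def violates_bk_constraints(body):
    """True if a rule body contains two literals BK proves incompatible."""
    literals = set(body)
    return any(pair <= literals for pair in UNSAT_PAIRS)

candidates = [
    ("f(X)", ["even(X)", "odd(X)"]),       # pruned: unsatisfiable body
    ("f(X)", ["even(X)", "positive(X)"]),  # kept: worth testing
]

for head, body in candidates:
    status = "prune" if violates_bk_constraints(body) else "keep"
    print(status, head, ":-", ", ".join(body))
```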
arXiv Detail & Related papers (2022-02-20T12:32:03Z) - Learning Logic Programs From Noisy Failures [0.0]
We introduce the relaxed learning from failures approach to ILP, a noise-handling modification of the previously introduced learning from failures (LFF) approach.
We additionally introduce the novel Noisy Popper ILP system which implements this relaxed approach and is a modification of the existing Popper system.
arXiv Detail & Related papers (2021-12-28T16:48:00Z) - Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z) - Understanding the Under-Coverage Bias in Uncertainty Estimation [58.03725169462616]
Quantile regression tends to under-cover relative to the desired coverage level in reality.
We prove that quantile regression suffers from an inherent under-coverage bias.
Our theory reveals that this under-coverage bias stems from a certain high-dimensional parameter estimation error.
arXiv Detail & Related papers (2021-06-10T06:11:55Z) - Learning logic programs by explaining their failures [26.955785230358963]
We introduce failure explanation techniques for inductive logic programming.
If a hypothesis fails, we explain the failure in terms of failing sub-programs.
We show that fine-grained failure analysis allows for learning fine-grained constraints on the hypothesis space.
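A minimal sketch of that failure-explanation idea, assuming a hypothesis is a set of clauses and using a toy tester in place of Prolog; the clause names are hypothetical.

```python
# When a hypothesis fails, test its sub-programs (subsets of clauses) and
# derive constraints from the smallest failing one, which prunes more of the
# hypothesis space than a constraint on the full program would.
from itertools import combinations

def sub_programs(clauses):
    """All non-empty subsets of clauses, smallest first."""
    return [frozenset(s)
            for k in range(1, len(clauses) + 1)
            for s in combinations(clauses, k)]

def entails_negative(program):
    """Toy stand-in for Prolog testing: assume clause c2 alone already
    covers a negative example (an illustrative assumption)."""
    return "c2" in program

def explain_failure(hypothesis):
    """Return a minimal failing sub-program; constraining generalisations
    of this sub-program prunes more than constraining the whole program."""
    for sub in sub_programs(hypothesis):   # smallest subsets first
        if entails_negative(sub):
            return sub
    return None

print(explain_failure(frozenset({"c1", "c2", "c3"})))  # frozenset({'c2'})
```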
arXiv Detail & Related papers (2021-02-18T14:32:20Z) - Empirically Verifying Hypotheses Using Reinforcement Learning [58.09414653169534]
This paper formulates hypothesis verification as an RL problem.
We aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false.
arXiv Detail & Related papers (2020-06-29T01:01:10Z) - L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task ($\alpha$NLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel L2R2 approach is proposed under the learning-to-rank framework.
Experiments on the ART dataset reach state-of-the-art performance on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z) - Learning large logic programs by going beyond entailment [18.27510863075184]
We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs.
Our experiments show that Brute can substantially outperform existing ILP systems in terms of predictive accuracies and learning times.
arXiv Detail & Related papers (2020-04-21T09:31:06Z)