Preprocessing in Inductive Logic Programming
- URL: http://arxiv.org/abs/2112.12551v1
- Date: Tue, 21 Dec 2021 16:01:28 GMT
- Title: Preprocessing in Inductive Logic Programming
- Authors: Brad Hunter
- Abstract summary: This dissertation introduces bottom preprocessing, a method for generating initial constraints on the programs an ILP system must consider.
Bottom preprocessing applies ideas from inverse entailment to modern ILP systems.
It is shown experimentally that bottom preprocessing can reduce learning times of ILP systems on hard problems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inductive logic programming is a type of machine learning in which logic
programs are learned from examples. This learning typically occurs relative to
some background knowledge provided as a logic program. This dissertation
introduces bottom preprocessing, a method for generating initial constraints on
the programs an ILP system must consider. Bottom preprocessing applies ideas
from inverse entailment to modern ILP systems. Inverse entailment is an
influential early ILP approach introduced with Progol. This dissertation also
presents $\bot$-Popper, an implementation of bottom preprocessing for the
modern ILP system Popper. It is shown experimentally that bottom preprocessing
can reduce learning times of ILP systems on hard problems. This reduction can
be especially significant when the amount of background knowledge in the
problem is large.
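The inverse-entailment idea underlying bottom preprocessing can be sketched in a few lines. The sketch below is a minimal illustration only, not the dissertation's $\bot$-Popper implementation: it builds a Progol-style "bottom clause" by collecting every ground background fact that shares a constant with a positive example (saturating as new constants appear). Any hypothesis clause entailing the example relative to the background must subsume this most specific clause, which is the kind of initial constraint bottom preprocessing derives. The function name and the tuple encoding of literals are assumptions for illustration.

```python
# Illustrative sketch of a Progol-style bottom clause (assumed encoding:
# a literal is a (predicate, args) tuple of strings; background is a set
# of ground facts). Not the actual Popper / bottom-preprocessing code.

def bottom_clause(example, background):
    """Return (head, body): the example as head, plus every background
    fact connected to the example's constants, saturated to a fixpoint."""
    head_pred, head_args = example
    relevant = set(head_args)          # constants seen so far
    body = []
    changed = True
    while changed:                     # saturate: adding a fact may
        changed = False                # introduce new constants
        for pred, args in sorted(background):
            if (pred, args) not in body and relevant & set(args):
                body.append((pred, args))
                new = set(args) - relevant
                if new:
                    relevant |= new
                    changed = True
    return example, body

# Toy usage: learning grandparent/2 from parent/2 background facts.
background = {("parent", ("ann", "bob")), ("parent", ("bob", "carl"))}
head, body = bottom_clause(("grandparent", ("ann", "carl")), background)
```

Here both parent facts end up in the body, so the bottom clause corresponds to grandparent(ann,carl) :- parent(ann,bob), parent(bob,carl); an ILP system can prune any candidate clause that does not subsume it.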
Related papers
- Soft Reasoning on Uncertain Knowledge Graphs [85.1968214421899]
We study the setting of soft queries on uncertain knowledge graphs, motivated by soft constraint programming.
We propose an ML-based approach with both forward inference and backward calibration to answer soft queries on large-scale, incomplete, and uncertain knowledge graphs.
arXiv Detail & Related papers (2024-03-03T13:13:53Z) - System Predictor: Grounding Size Estimator for Logic Programs under Answer Set Semantics [0.5801044612920815]
We present the system Predictor for estimating the grounding size of programs.
We evaluate the impact of Predictor when used as a guide for rewritings produced by the answer set programming rewriting tools Projector and Lpopt.
arXiv Detail & Related papers (2023-03-29T20:49:40Z) - Enforcing Consistency in Weakly Supervised Semantic Parsing [68.2211621631765]
We explore the use of consistency between the output programs for related inputs to reduce the impact of spurious programs.
We find that a more consistent formalism leads to improved model performance even without consistency-based training.
arXiv Detail & Related papers (2021-07-13T03:48:04Z) - How could Neural Networks understand Programs? [67.4217527949013]
It is difficult to build a model that better understands programs, either by directly applying off-the-shelf NLP pre-training techniques to the source code or by adding hand-crafted features to the model.
We propose a novel program semantics learning paradigm in which the model learns from information composed of (1) representations that align well with the fundamental operations in operational semantics, and (2) the environment transitions.
arXiv Detail & Related papers (2021-05-10T12:21:42Z) - Data-driven Weight Initialization with Sylvester Solvers [72.11163104763071]
We propose a data-driven scheme to initialize the parameters of a deep neural network.
We show that our proposed method is especially effective in few-shot and fine-tuning settings.
arXiv Detail & Related papers (2021-05-02T07:33:16Z) - Inductive logic programming at 30: a new introduction [18.27510863075184]
Inductive logic programming (ILP) is a form of machine learning.
This paper introduces the necessary logical notation and the main learning settings.
We also describe the building blocks of an ILP system and compare several systems.
arXiv Detail & Related papers (2020-08-18T13:09:25Z) - Process Discovery for Structured Program Synthesis [70.29027202357385]
A core task in process mining is process discovery which aims to learn an accurate process model from event log data.
In this paper, we propose to use (block-) structured programs directly as target process models.
We develop a novel bottom-up agglomerative approach to the discovery of such structured program process models.
arXiv Detail & Related papers (2020-08-13T10:33:10Z) - Incremental maintenance of overgrounded logic programs with tailored simplifications [0.966840768820136]
We introduce a new strategy for generating series of monotonically growing propositional programs.
With respect to earlier approaches, our tailored simplification technique reduces the size of instantiated programs.
arXiv Detail & Related papers (2020-08-06T21:50:11Z) - The ILASP system for Inductive Learning of Answer Set Programs [79.41112438865386]
Our system learns Answer Set Programs, including normal rules, choice rules, and hard and weak constraints.
We first give a general overview of ILASP's learning framework and its capabilities.
This is followed by a comprehensive summary of the evolution of the ILASP system.
arXiv Detail & Related papers (2020-05-02T19:04:12Z) - Learning large logic programs by going beyond entailment [18.27510863075184]
We implement our idea in Brute, a new ILP system which uses best-first search, guided by an example-dependent loss function, to incrementally build programs.
Our experiments show that Brute can substantially outperform existing ILP systems in terms of predictive accuracies and learning times.
arXiv Detail & Related papers (2020-04-21T09:31:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.