Learning Logic Programs From Noisy Failures
- URL: http://arxiv.org/abs/2201.03702v1
- Date: Tue, 28 Dec 2021 16:48:00 GMT
- Title: Learning Logic Programs From Noisy Failures
- Authors: John Wahlig
- Abstract summary: We introduce the relaxed learning from failures approach to ILP, a noise-handling modification of the previously introduced learning from failures (LFF) approach.
We additionally introduce the novel Noisy Popper ILP system which implements this relaxed approach and is a modification of the existing Popper system.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inductive Logic Programming (ILP) is a form of machine learning (ML) which, in contrast to many other state-of-the-art ML methods, typically produces highly interpretable and reusable models. However, many ILP systems lack the ability to naturally learn from noisy or partially misclassified training data. We introduce the relaxed learning from failures approach to ILP, a noise-handling modification of the previously introduced learning from failures (LFF) approach, which is incapable of handling noise. We additionally introduce the novel Noisy Popper ILP system, which implements this relaxed approach and is a modification of the existing Popper system. Like Popper, Noisy Popper uses a generate-test-constrain loop to search its hypothesis space, wherein failed hypotheses are used to construct hypothesis constraints. These constraints are used to prune the hypothesis space, making the hypothesis search more efficient. In the relaxed setting, however, constraints are generated in a more lax fashion, so as to prevent noisy training data from leading to hypothesis constraints that prune optimal hypotheses. Constraints unique to the relaxed setting are generated via hypothesis comparison. Additional constraints are generated by weighing the accuracy of hypotheses against their sizes, through an application of the minimum description length principle, to avoid overfitting. We support this new setting with theoretical proofs as well as experimental results which suggest that Noisy Popper improves the noise-handling capabilities of Popper, but at the cost of overall runtime efficiency.
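To make the generate-test-constrain loop and the accuracy-versus-size trade-off described above concrete, here is a minimal, self-contained Python sketch. It is not the Popper or Noisy Popper implementation: the toy rule library, the coverage and scoring functions, and the `noise_budget` parameter are illustrative assumptions, chosen only to show how a relaxed pruning rule can keep a good hypothesis alive when a few training labels are wrong.

```python
"""
A minimal, self-contained sketch of a relaxed generate-test-constrain loop.
This is NOT the Popper / Noisy Popper implementation: the toy rule library,
the coverage function, the scoring rule, and the `noise_budget` parameter
are all illustrative assumptions.
"""
from itertools import combinations

# Assumed toy "rule library": each rule name maps to the examples it covers.
RULES = {
    "r1": {"e1", "e2"},
    "r2": {"e3", "e4", "e5", "n2"},  # also covers n2, a presumably mislabelled negative
    "r3": {"e1", "e2", "e3", "n1"},  # over-general rule covering a genuine negative
}
POS = {"e1", "e2", "e3", "e4", "e5"}  # positive training examples (possibly noisy)
NEG = {"n1", "n2"}                    # negative training examples (possibly noisy)


def coverage(hypothesis):
    """A hypothesis is a set of rules; it covers the union of its rules' coverage."""
    return set().union(*(RULES[r] for r in hypothesis))


def mdl_score(hypothesis):
    """Weigh training accuracy against hypothesis size (minimum description length
    flavour): every extra rule must pay for itself with correctly covered examples.
    The exact scoring rule here is an assumption made for this toy example."""
    cov = coverage(hypothesis)
    return len(cov & POS) - len(cov & NEG) - len(hypothesis)


def should_prune(hypothesis, noise_budget):
    """Plain LFF prunes all generalisations of a hypothesis that entails any
    negative example. The relaxed rule used here (an illustrative simplification,
    not Noisy Popper's actual criterion) only prunes when the number of covered
    negatives exceeds an assumed noise budget, so a few mislabelled examples
    cannot eliminate an optimal hypothesis."""
    return len(coverage(hypothesis) & NEG) > noise_budget


def search(max_size=3, noise_budget=1):
    best, best_score = None, float("-inf")
    pruned = []  # hypotheses whose generalisations (rule supersets) are banned
    # Generate hypotheses in order of increasing size (smaller programs first).
    for size in range(1, max_size + 1):
        for hypothesis in map(frozenset, combinations(RULES, size)):
            # Constrain: skip generalisations of previously pruned hypotheses.
            if any(p <= hypothesis for p in pruned):
                continue
            # Test: score the hypothesis on the (noisy) training data.
            score = mdl_score(hypothesis)
            if score > best_score:
                best, best_score = hypothesis, score
            if should_prune(hypothesis, noise_budget):
                pruned.append(hypothesis)
    return best, best_score


if __name__ == "__main__":
    for budget in (0, 1):
        hyp, score = search(noise_budget=budget)
        print(f"noise_budget={budget}: best hypothesis {sorted(hyp)} (score {score})")
```

Run as-is, the strict setting (noise_budget=0) prunes every hypothesis containing r2 because of the single mislabelled negative n2 and settles for r1 alone, while the relaxed setting (noise_budget=1) keeps r1 and r2 together and reaches a higher score, mirroring in miniature why lax constraint generation matters under noise.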
Related papers
- Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel convex model for semisupervised/library-based unmixing.
We demonstrate the efficacy of alternating methods for sparse unsupervised unmixing.
arXiv Detail & Related papers (2024-01-23T10:07:41Z) - Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Limitations of probabilistic error cancellation for open dynamics beyond
sampling overhead [1.1864834557465163]
Methods such as probabilistic error cancellation rely on discretizing the evolution into finite time steps and applying the mitigation layer after each time step.
This may lead to Trotter-like errors in the simulation results even if the error mitigation is implemented ideally.
We show that these errors are determined by the commutation relations between the superoperators of the unitary part, the device noise part, and the noise part of the open dynamics to be simulated.
arXiv Detail & Related papers (2023-08-02T21:45:06Z) - Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise poses a risk of overfitting them to the noisy labels.
We propose an approach to tackling this overfitting by imposing a lower bound on the training loss.
arXiv Detail & Related papers (2023-07-24T19:41:19Z) - Learnability, Sample Complexity, and Hypothesis Class Complexity for
Regression Models [10.66048003460524]
This work is inspired by the foundations of PAC learning and is motivated by existing issues in regression learning.
The proposed approach, denoted $\epsilon$-Confidence Approximately Correct ($\epsilon$-CoAC), utilizes the Kullback-Leibler divergence (relative entropy).
It enables the learner to compare hypothesis classes of different complexity orders and to choose among them the optimal one with the minimum $\epsilon$.
arXiv Detail & Related papers (2023-03-28T15:59:12Z) - Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, which avoids the arbitrary tuning from a mini-batch of samples required by previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion
Trajectories [144.03939123870416]
We propose a novel conditional diffusion model by introducing conditions into the forward process.
We use extra latent space to allocate an exclusive diffusion trajectory for each condition based on some shifting rules.
We formulate our method, which we call ShiftDDPMs, and provide a unified point of view on existing related methods.
arXiv Detail & Related papers (2023-02-05T12:48:21Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from the common assumption that the noise distribution should match the data distribution can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data distribution and even belongs to a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix
Recovery [8.581512812219737]
We propose a new framework for low-rank matrix recovery from observations corrupted with noise and outliers.
Inspired by the square root Lasso, this new formulation does not require prior knowledge of the noise level.
We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level.
arXiv Detail & Related papers (2021-06-17T02:28:11Z)