Learning MDL logic programs from noisy data
- URL: http://arxiv.org/abs/2308.09393v1
- Date: Fri, 18 Aug 2023 08:49:30 GMT
- Title: Learning MDL logic programs from noisy data
- Authors: Céline Hocquette, Andreas Niskanen, Matti Järvisalo, Andrew Cropper
- Abstract summary: We introduce an approach that learns minimal description length programs from noisy data.
Our experiments on several domains, including drug design, game playing, and program synthesis, show that our approach can outperform existing approaches.
- Score: 19.749004264961492
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many inductive logic programming approaches struggle to learn programs from
noisy data. To overcome this limitation, we introduce an approach that learns
minimal description length programs from noisy data, including recursive
programs. Our experiments on several domains, including drug design, game
playing, and program synthesis, show that our approach can outperform existing
approaches in terms of predictive accuracies and scale to moderate amounts of
noise.
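The idea of a minimal description length objective can be illustrated with a toy sketch. This is not the authors' system (which is a constraint-based ILP learner); it only shows the trade-off the abstract describes: a hypothesis is scored by its encoding size plus the cost of the examples it misclassifies, so a small program that tolerates a few noisy examples can beat a large program that fits them all. The function names and cost weights below are hypothetical.

```python
# Toy MDL-style hypothesis scoring (illustrative only, not the paper's system).
# Cost = program size + misclassified examples, so noisy examples can be
# "paid for" instead of forcing an overly specific program.

def mdl_cost(program_size, false_positives, false_negatives):
    """Description length: literals in the program plus misclassified examples."""
    return program_size + false_positives + false_negatives

def best_hypothesis(candidates):
    """Pick the candidate minimizing MDL cost.

    candidates: iterable of (name, program_size, false_pos, false_neg) tuples.
    """
    return min(candidates, key=lambda c: mdl_cost(c[1], c[2], c[3]))

# Hypothetical candidates: a large program fitting every example exactly
# vs. a small program that misclassifies two noisy examples.
candidates = [
    ("exact_but_large", 20, 0, 0),   # cost 20
    ("small_with_noise", 5, 1, 1),   # cost 7: preferred under MDL
]
print(best_hypothesis(candidates)[0])  # small_with_noise
```

Under this scoring, noise tolerance falls out of the objective itself rather than from a separately tuned noise threshold.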
Related papers
- Learning logic programs by finding minimal unsatisfiable subprograms [24.31242130341093]
We introduce an ILP approach that identifies minimal unsatisfiable subprograms (MUSPs).
Our experiments on multiple domains, including program synthesis and game playing, show that our approach can reduce learning times by 99%.
arXiv Detail & Related papers (2024-01-29T18:24:16Z)
- Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance [61.809732058101304]
We introduce an innovative approach for fine-tuning PLMs using noisy labels.
This approach incorporates the guidance of Large Language Models (LLMs) like ChatGPT.
This guidance assists in accurately distinguishing between clean and noisy samples.
arXiv Detail & Related papers (2023-11-02T09:20:38Z)
- Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs [14.940174578659603]
We present a methodology for sampling datasets to train neural-network-based surrogates of programs.
We first characterize the proportion of data to sample from each region of a program's input space based on the complexity of learning a surrogate of the corresponding execution path.
We evaluate these results on a range of real-world programs, demonstrating that complexity-guided sampling results in empirical improvements in accuracy.
arXiv Detail & Related papers (2023-09-21T01:59:20Z)
- Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs [58.94569213396991]
We propose a hierarchical programmatic reinforcement learning framework to produce program policies.
By learning to compose programs, our proposed framework can produce program policies that describe complex, out-of-distribution behaviors.
The experimental results in the Karel domain show that our proposed framework outperforms baselines.
arXiv Detail & Related papers (2023-01-30T14:50:46Z)
- From Perception to Programs: Regularize, Overparameterize, and Amortize [21.221244694737134]
We develop techniques for neurosymbolic program synthesis where perceptual input is first parsed by neural nets into a low-dimensional interpretable representation, which is then processed by a synthesized program.
We explore several techniques for relaxing the problem and jointly learning all modules end-to-end with gradient descent.
Collectively, this toolbox improves the stability of gradient-guided program search and suggests ways of learning both how to perceive inputs as discrete abstractions and how to symbolically process those abstractions as programs.
arXiv Detail & Related papers (2022-06-13T06:27:11Z)
- Learning logic programs by combining programs [24.31242130341093]
We introduce an approach where we learn small non-separable programs and combine them.
We implement our approach in a constraint-driven ILP system.
Our experiments on multiple domains, including game playing and program synthesis, show that our approach can drastically outperform existing approaches.
arXiv Detail & Related papers (2022-06-01T10:07:37Z)
- A Survey on Programmatic Weak Supervision [74.13976343129966]
We give a brief introduction to the PWS learning paradigm and review representative approaches for each component of the PWS learning workflow.
We identify several critical challenges that remain underexplored in the area, in the hope of inspiring future directions in the field.
arXiv Detail & Related papers (2022-02-11T04:05:38Z)
- Learning to Synthesize Programs as Interpretable and Generalizable Policies [25.258598215642067]
We present a framework that learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner.
Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines.
arXiv Detail & Related papers (2021-08-31T07:03:06Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
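The bottom-up search this blurb refers to can be sketched in a few lines. This toy version is not BUSTLE itself: it enumerates expressions breadth-first by combining previously built values, deduplicating by their outputs on the examples, whereas BUSTLE additionally trains a model to rank which intermediate values to expand first. The grammar (just `+` and `*` over `x` and `1`) and all names here are hypothetical.

```python
# Toy bottom-up enumerative synthesis (illustrative; BUSTLE adds a learned
# model to prioritize intermediate values, which this sketch omits).

def synthesize(examples, max_rounds=4):
    """examples: list of (input, output) pairs for a unary integer function."""
    inputs = [x for x, _ in examples]
    target = tuple(y for _, y in examples)
    # Bank maps value-vector -> expression string; seeded with terminals.
    # Keying on value vectors dedupes observationally equivalent expressions.
    bank = {tuple(inputs): "x", tuple([1] * len(inputs)): "1"}
    for _ in range(max_rounds):
        new = {}
        for va, ea in bank.items():
            for vb, eb in bank.items():
                for op, f in (("+", lambda a, b: a + b), ("*", lambda a, b: a * b)):
                    vals = tuple(f(a, b) for a, b in zip(va, vb))
                    if vals not in bank and vals not in new:
                        new[vals] = f"({ea} {op} {eb})"
        bank.update(new)  # grow the bank only after a full round
        if target in bank:
            return bank[target]
    return None

# Find an expression consistent with x -> 2*x + 1 on the given examples.
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```

Even without learned guidance, the value-based deduplication keeps the search tractable on small grammars; the learned ranking is what lets the approach scale beyond that.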
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
- Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z)
- Creating Synthetic Datasets via Evolution for Neural Program Synthesis [77.34726150561087]
We show that some program synthesis approaches generalize poorly to data distributions different from that of the randomly generated examples.
We propose a new, adversarial approach to control the bias of synthetic data distributions and show that it outperforms current approaches.
arXiv Detail & Related papers (2020-03-23T18:34:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.