Learning Logical Rules using Minimum Message Length
- URL: http://arxiv.org/abs/2508.06230v1
- Date: Fri, 08 Aug 2025 11:23:58 GMT
- Title: Learning Logical Rules using Minimum Message Length
- Authors: Ruben Sharma, Sebastijan Dumančić, Ross D. King, Andrew Cropper
- Abstract summary: We introduce a Bayesian inductive logic programming approach that learns minimum message length programs from noisy data. Our experiments on several domains, including game playing and drug design, show that our method significantly outperforms previous methods.
- Score: 16.761879277169367
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unifying probabilistic and logical learning is a key challenge in AI. We introduce a Bayesian inductive logic programming approach that learns minimum message length programs from noisy data. Our approach balances hypothesis complexity and data fit through priors, which explicitly favour more general programs, and a likelihood that favours accurate programs. Our experiments on several domains, including game playing and drug design, show that our method significantly outperforms previous methods, notably those that learn minimum description length programs. Our results also show that our approach is data-efficient and insensitive to example balance, including the ability to learn from exclusively positive examples.
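To make the complexity/accuracy trade-off concrete, the two-part message length described above can be sketched in code. The encoding choices below (a fixed cost per literal, a binomial code for the misclassified examples) are illustrative assumptions, not the paper's exact scheme:

```python
import math

def mml_score(num_literals, false_pos, false_neg, num_examples,
              bits_per_literal=8.0):
    """Two-part MML score: message length of the hypothesis plus message
    length of the data given the hypothesis. The encodings here are
    illustrative stand-ins, not the paper's exact scheme."""
    # Part 1: cost of transmitting the program. Shorter programs are
    # cheaper; the paper's priors explicitly favour more general programs,
    # for which program size is only a crude proxy here.
    hypothesis_len = num_literals * bits_per_literal
    # Part 2: cost of transmitting the exceptions, i.e. the examples the
    # program gets wrong. Accurate programs make this part cheaper.
    errors = false_pos + false_neg
    data_len = 0.0 if errors == 0 else math.log2(math.comb(num_examples, errors))
    return hypothesis_len + data_len

# The preferred hypothesis minimises the total message length.
candidates = {"h1": (3, 0, 2), "h2": (7, 0, 0)}  # (literals, fp, fn)
best = min(candidates, key=lambda h: mml_score(*candidates[h], num_examples=50))
```

The first term acts like the prior, penalising complex hypotheses; the second acts like the likelihood, penalising inaccurate ones; the hypothesis with the shortest total message wins.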
Related papers
- Inference-Time Computations for LLM Reasoning and Planning: A Benchmark and Insights [49.42133807824413]
We examine the reasoning and planning capabilities of large language models (LLMs) in solving complex tasks. Recent advances in inference-time techniques demonstrate the potential to enhance LLM reasoning without additional training. OpenAI's o1 model shows promising performance through its novel use of multi-step reasoning and verification.
arXiv Detail & Related papers (2025-02-18T04:11:29Z)
- Learning logic programs by finding minimal unsatisfiable subprograms [24.31242130341093]
We introduce an ILP approach that identifies minimal unsatisfiable subprograms (MUSPs).
Our experiments on multiple domains, including program synthesis and game playing, show that our approach can reduce learning times by 99%.
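For intuition, the classic deletion-based way to shrink a program to a minimal unsatisfiable core looks roughly like the sketch below; the system's actual constraint-driven algorithm may differ, and `is_satisfiable` is an assumed oracle (e.g. a coverage check against the examples):

```python
def minimal_unsatisfiable_subprogram(program, is_satisfiable):
    """Deletion-based shrinking to a minimal unsatisfiable core. Assumes
    the full program is unsatisfiable and that `is_satisfiable` is a
    user-supplied oracle."""
    core = list(program)
    for rule in list(program):
        trial = [r for r in core if r is not rule]
        # If the program is still unsatisfiable without this rule, the
        # rule is not needed in the core and can be dropped.
        if not is_satisfiable(trial):
            core = trial
    return core  # every remaining rule is necessary for unsatisfiability
```

A core pins down a set of rules that can never appear together in a solution, so it can be reused as a pruning constraint during the rest of the search.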
arXiv Detail & Related papers (2024-01-29T18:24:16Z)
- Learning MDL logic programs from noisy data [19.749004264961492]
We introduce an approach that learns minimal description length programs from noisy data.
Our experiments on several domains, including drug design, game playing, and program synthesis, show that our approach can outperform existing approaches.
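Written out, the MDL objective being minimised is the standard two-part code length (standard notation, assumed rather than quoted from the paper):

```latex
H^{*} = \operatorname*{arg\,min}_{H} \left[ L(H) + L(D \mid H) \right]
```

where L(H) is the encoding length of the program and L(D | H) the length needed to encode the examples it misclassifies; the MML approach above differs mainly in deriving both terms from explicit Bayesian priors and likelihoods.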
arXiv Detail & Related papers (2023-08-18T08:49:30Z)
- Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs [58.94569213396991]
We propose a hierarchical programmatic reinforcement learning framework to produce program policies.
By learning to compose programs, our proposed framework can produce program policies that describe behaviors more complex than those seen during training.
The experimental results in the Karel domain show that our proposed framework outperforms baselines.
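As a rough illustration (names and interfaces are assumptions, not the paper's API), a hierarchical program policy can be viewed as a learned selector over synthesized sub-programs:

```python
from typing import Callable, List

State = dict
Action = str

def compose_policy(subprograms: List[Callable[[State], Action]],
                   select: Callable[[State], int]) -> Callable[[State], Action]:
    """A hierarchical program policy: a learned high-level selector picks
    which synthesized sub-program to execute in each state. Names and the
    selection interface are illustrative assumptions."""
    def policy(state: State) -> Action:
        return subprograms[select(state)](state)
    return policy
```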
arXiv Detail & Related papers (2023-01-30T14:50:46Z)
- Generalisation Through Negation and Predicate Invention [25.944127431156627]
We introduce an inductive logic programming (ILP) approach that combines negation and predicate invention.
We implement our idea in NOPI, which can learn normal logic programs with predicate invention.
Our experimental results on multiple domains show that our approach can improve predictive accuracies and learning times.
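To make the terminology concrete, a toy normal logic program combining both ingredients might look as follows, where inv_1 is the invented predicate and the negated literal is what makes the program "normal" (a hypothetical example, not taken from the paper):

```latex
\begin{aligned}
\textit{safe}(X) &\leftarrow \textit{inv}_{1}(X),\ \neg\, \textit{flagged}(X).\\
\textit{inv}_{1}(X) &\leftarrow \textit{verified}(X),\ \textit{local}(X).
\end{aligned}
```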
arXiv Detail & Related papers (2023-01-18T16:12:27Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning from small data while approximating the generalization ability attainable with big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
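The kind of result such a PAC analysis builds on is the standard uniform-convergence bound for a finite hypothesis class (shown for illustration; the survey covers sharper and more general bounds): with m labeled examples, with probability at least 1 - delta,

```latex
\sup_{h \in \mathcal{H}} \left| \operatorname{err}(h) - \widehat{\operatorname{err}}(h) \right|
  \le \sqrt{\frac{\ln\lvert\mathcal{H}\rvert + \ln(2/\delta)}{2m}}
```

which makes explicit why small m forces either a small hypothesis class or additional structure such as active sampling.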
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Learning logic programs by combining programs [24.31242130341093]
We introduce an approach where we learn small non-separable programs and combine them.
We implement our approach in a constraint-driven ILP system.
Our experiments on multiple domains, including game playing and program synthesis, show that our approach can drastically outperform existing approaches.
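A naive rendering of the combine step, shown purely for intuition (the actual system encodes this search as constraints rather than enumerating subsets), looks for a union of small programs that jointly fits the examples:

```python
from itertools import combinations

def combine_programs(small_programs, covers_all_pos, covers_no_neg):
    """Search for a union of small programs that together cover all
    positive and none of the negative examples. The coverage predicates
    are assumed oracles."""
    for k in range(1, len(small_programs) + 1):
        for subset in combinations(small_programs, k):
            merged = [rule for prog in subset for rule in prog]
            if covers_all_pos(merged) and covers_no_neg(merged):
                return merged  # smallest combination is found first
    return None
```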
arXiv Detail & Related papers (2022-06-01T10:07:37Z)
- Learning from Self-Sampled Correct and Partially-Correct Programs [96.66452896657991]
We propose to let the model perform sampling during training and learn from both self-sampled fully-correct programs and partially-correct programs.
We show that our use of self-sampled correct and partially-correct programs can benefit learning and help guide the sampling process.
Our proposed method improves the pass@k performance by 3.1% to 12.3% compared to learning from a single reference program with MLE.
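A minimal sketch of the sampling-and-filtering idea, with all callables as assumed interfaces rather than the paper's API:

```python
def collect_self_samples(sample_program, count_passed, tests, n_samples=32):
    """Sample programs from the model during training and keep both
    fully-correct and partially-correct ones. `sample_program` and
    `count_passed` are assumed interfaces, not the paper's API."""
    kept = []
    for _ in range(n_samples):
        program = sample_program()
        passed = count_passed(program, tests)  # number of tests passed
        if passed == len(tests):
            kept.append((program, "fully_correct"))
        elif passed > 0:
            kept.append((program, "partially_correct"))
    return kept  # mixed into training alongside the single reference program
```

Keeping partially-correct samples provides learning signal even on tasks where no fully correct program is ever sampled.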
arXiv Detail & Related papers (2022-05-28T03:31:07Z)
- Learning to Synthesize Programs as Interpretable and Generalizable Policies [25.258598215642067]
We present a framework that learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner.
Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines.
arXiv Detail & Related papers (2021-08-31T07:03:06Z)
- Enforcing Consistency in Weakly Supervised Semantic Parsing [68.2211621631765]
We explore the use of consistency between the output programs for related inputs to reduce the impact of spurious programs.
We find that a more consistent formalism leads to improved model performance even without consistency-based training.
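One simple way such a consistency signal could be operationalized, shown as a hypothetical simplification:

```python
def consistency_reward(program_x, program_y, shared_phrase, fragment_for):
    """Reward a pair of predicted programs for two related utterances when
    they translate a phrase they share to the same logical fragment.
    `fragment_for(program, phrase)` is a hypothetical alignment helper."""
    same = fragment_for(program_x, shared_phrase) == fragment_for(program_y, shared_phrase)
    return 1.0 if same else 0.0
```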
arXiv Detail & Related papers (2021-07-13T03:48:04Z)
- Learning Implicitly with Noisy Data in Linear Arithmetic [94.66549436482306]
We extend implicit learning in PAC-Semantics to handle intervals and threshold uncertainty in the language of linear arithmetic.
We show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice.
arXiv Detail & Related papers (2020-10-23T19:08:46Z)
- BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
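In outline, learning-guided bottom-up search looks like the sketch below, where `combine`, `evaluate`, and `score` stand in for the paper's operations and trained model:

```python
import heapq

def guided_bottom_up_search(initial_exprs, combine, evaluate, score,
                            examples, max_steps=10000):
    """Learning-guided bottom-up search: grow a pool of expressions from
    small values upward, letting a trained model `score(expr, examples)`
    decide which intermediate values to combine first. `combine`,
    `evaluate`, and `score` are assumed interfaces."""
    counter = 0  # tie-breaker so the heap never compares expression objects
    frontier = []
    for expr in initial_exprs:
        heapq.heappush(frontier, (score(expr, examples), counter, expr))
        counter += 1
    explored = []
    while frontier and counter < max_steps:
        _, _, expr = heapq.heappop(frontier)
        if all(evaluate(expr, inp) == out for inp, out in examples):
            return expr  # consistent with all input-output examples
        explored.append(expr)
        for other in explored:
            for new in combine(expr, other):  # all ways to compose the pair
                heapq.heappush(frontier, (score(new, examples), counter, new))
                counter += 1
    return None
```

The essential difference from plain bottom-up enumeration is the priority queue: the trained model reorders which intermediate values get combined first, conditioned on the input-output examples.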
arXiv Detail & Related papers (2020-07-28T17:46:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.