Efficient rule induction by ignoring pointless rules
- URL: http://arxiv.org/abs/2502.01232v1
- Date: Mon, 03 Feb 2025 10:46:18 GMT
- Title: Efficient rule induction by ignoring pointless rules
- Authors: Andrew Cropper, David M. Cerna
- Abstract summary: We introduce an ILP approach that identifies pointless rules. A rule is pointless if it contains a redundant literal or cannot discriminate against negative examples. We show that ignoring pointless rules allows an ILP system to soundly prune the hypothesis space.
- Score: 21.961097463200232
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of inductive logic programming (ILP) is to find a set of logical rules that generalises training examples and background knowledge. We introduce an ILP approach that identifies pointless rules. A rule is pointless if it contains a redundant literal or cannot discriminate against negative examples. We show that ignoring pointless rules allows an ILP system to soundly prune the hypothesis space. Our experiments on multiple domains, including visual reasoning and game playing, show that our approach can reduce learning times by 99% whilst maintaining predictive accuracies.
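The pointlessness test in the abstract can be illustrated with a toy propositional sketch. This is an illustrative approximation only, not the paper's method: the paper works with first-order rules inside a full ILP system, whereas here rules are simply sets of string literals, examples are sets of facts, and redundancy is modelled by a hypothetical hand-supplied implication table.

```python
# Toy sketch of "pointless rule" pruning (propositional approximation).
# Assumptions (not from the paper): a rule is a frozenset of literals,
# an example is a set of facts, and `implications` maps a literal to the
# set of literals it entails.

def covers(rule, example):
    """A rule covers an example if every literal in the rule holds in it."""
    return rule <= example

def has_redundant_literal(rule, implications):
    """A literal is redundant here if some other literal in the rule
    already implies it."""
    for lit in rule:
        others = rule - {lit}
        if any(lit in implications.get(o, set()) for o in others):
            return True
    return False

def is_pointless(rule, negatives, implications):
    """Pointless if the rule has a redundant literal, or covers every
    negative example (so it can never discriminate against negatives)."""
    if has_redundant_literal(rule, implications):
        return True
    if negatives and all(covers(rule, n) for n in negatives):
        return True
    return False

def prune(candidates, negatives, implications):
    """Soundly drop pointless rules from the candidate hypothesis space."""
    return [r for r in candidates if not is_pointless(r, negatives, implications)]
```

For instance, with a negative example `{"red", "round"}` and the implication `square -> shape`, the rule `{red}` is pointless (it covers the negative) and `{square, shape}` is pointless (`shape` is redundant), while `{blue}` survives pruning.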
Related papers
- Honey, I shrunk the hypothesis space (through logical preprocessing) [19.54008511592332]
We introduce an approach that 'shrinks' the hypothesis space before an ILP system searches it. Our approach uses background knowledge to find rules that cannot be in an optimal hypothesis regardless of the training examples. Our experiments show that our approach can substantially reduce learning times whilst maintaining predictive accuracies.
arXiv Detail & Related papers (2025-06-07T09:53:02Z) - Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z) - Learning big logical rules by joining small rules [21.45295555809529]
We implement our approach in a constraint-driven system and use constraint solvers to efficiently join rules.
Our experiments on many domains, including game playing and drug design, show that our approach can learn rules with more than 100 literals.
arXiv Detail & Related papers (2024-01-29T15:09:40Z) - Large Language Models can Learn Rules [106.40747309894236]
We present Hypotheses-to-Theories (HtT), a framework that learns a rule library for reasoning with large language models (LLMs). Experiments on relational reasoning, numerical reasoning and concept learning problems show that HtT improves existing prompting methods. The learned rules are also transferable to different models and to different forms of the same problem.
arXiv Detail & Related papers (2023-10-10T23:07:01Z) - Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z) - Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z) - Truly Unordered Probabilistic Rule Sets for Multi-class Classification [0.0]
We propose TURS, for Truly Unordered Rule Sets.
We first formalise the problem of learning truly unordered rule sets.
We then develop a two-phase algorithm that learns rule sets by carefully growing rules.
arXiv Detail & Related papers (2022-06-17T14:34:35Z) - Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z) - Open Rule Induction [2.1248439796866228]
Language model (LM)-based rule generation has been proposed to enhance the expressive power of rules.
We argue that, while KB-based methods induce rules by discovering commonalities in data, current LM-based methods are "learning rules from rules".
In this paper, we propose the open rule induction problem, which aims to induce open rules utilizing the knowledge in LMs.
arXiv Detail & Related papers (2021-10-26T11:20:24Z) - An Exploration And Validation of Visual Factors in Understanding Classification Rule Sets [21.659381756612866]
Rule sets are often used in Machine Learning (ML) as a way to communicate the model logic in settings where transparency and intelligibility are necessary.
Surprisingly, to date there has been limited work on exploring visual alternatives for presenting rules.
This work can help practitioners employ more effective solutions when using rules as a communication strategy to understand ML models.
arXiv Detail & Related papers (2021-09-19T16:33:16Z) - RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.