Learning big logical rules by joining small rules
- URL: http://arxiv.org/abs/2401.16215v1
- Date: Mon, 29 Jan 2024 15:09:40 GMT
- Title: Learning big logical rules by joining small rules
- Authors: Céline Hocquette, Andreas Niskanen, Rolf Morel, Matti Järvisalo, and Andrew Cropper
- Abstract summary: We implement our approach in a constraint-driven system and use constraint solvers to efficiently join rules.
Our experiments on many domains, including game playing and drug design, show that our approach can learn rules with more than 100 literals.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A major challenge in inductive logic programming is learning big rules. To
address this challenge, we introduce an approach where we join small rules to
learn big rules. We implement our approach in a constraint-driven system and
use constraint solvers to efficiently join rules. Our experiments on many
domains, including game playing and drug design, show that our approach can (i)
learn rules with more than 100 literals, and (ii) drastically outperform
existing approaches in terms of predictive accuracies.
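The core idea of joining small rules into one big rule can be sketched with a minimal model (this is an illustration of the idea only, not the paper's constraint-driven implementation; the predicate names and rule representation are invented for the example). A rule is a head together with a set of body literals, and joining two rules with the same head conjoins their bodies:

```python
# Minimal sketch of rule joining, assuming a rule is modelled as
# (head, frozenset of body literals). The predicates below are
# hypothetical examples, not from the paper.
from typing import FrozenSet, Tuple

Rule = Tuple[str, FrozenSet[str]]  # (head, body literals)

def join_rules(r1: Rule, r2: Rule) -> Rule:
    """Join two rules with the same head into a bigger rule whose
    body is the conjunction (set union) of both bodies."""
    head1, body1 = r1
    head2, body2 = r2
    assert head1 == head2, "only rules with the same head are joined"
    return (head1, body1 | body2)

# Two small rules for the same target predicate f(A,B):
small1 = ("f(A,B)", frozenset({"head(A,H)", "odd(H)"}))
small2 = ("f(A,B)", frozenset({"tail(A,T)", "length(T,B)"}))

big = join_rules(small1, small2)
print(len(big[1]))  # the joined body has 4 literals
```

In the paper's setting the search over which small rules to join is delegated to a constraint solver; repeatedly joining learned small rules is how bodies with over 100 literals become reachable.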
Related papers
- LogicGame: Benchmarking Rule-Based Reasoning Abilities of Large Language Models [87.49676980090555]
Large Language Models (LLMs) have demonstrated notable capabilities across various tasks, showcasing complex problem-solving abilities.
We introduce LogicGame, a novel benchmark designed to evaluate the comprehensive rule understanding, execution, and planning capabilities of LLMs.
arXiv Detail & Related papers (2024-08-28T13:16:41Z)
- Symbolic Working Memory Enhances Language Models for Complex Rule Application [87.34281749422756]
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning.
We propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application.
Our framework iteratively performs symbolic rule grounding and LLM-based rule implementation.
arXiv Detail & Related papers (2024-08-24T19:11:54Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- Probabilistic Truly Unordered Rule Sets [4.169915659794567]
We propose TURS, for Truly Unordered Rule Sets.
We exploit the probabilistic properties of our rule sets, with the intuition of only allowing rules to overlap if they have similar probabilistic outputs.
We benchmark against a wide range of rule-based methods and demonstrate that our method learns rule sets that have lower model complexity and highly competitive predictive performance.
arXiv Detail & Related papers (2024-01-18T12:03:19Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- On the Aggregation of Rules for Knowledge Graph Completion [9.628032156001069]
Rule learning approaches for knowledge graph completion are efficient, interpretable, and competitive with purely neural models.
We show that existing aggregation approaches can be expressed as marginal inference operations over the predicting rules.
We propose an efficient, previously overlooked baseline that combines the previous strategies and is competitive with computationally more expensive approaches.
arXiv Detail & Related papers (2023-09-01T07:32:11Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic-based machine learning method that implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- Truly Unordered Probabilistic Rule Sets for Multi-class Classification [0.0]
We propose TURS, for Truly Unordered Rule Sets.
We first formalise the problem of learning truly unordered rule sets.
We then develop a two-phase algorithm that learns rule sets by carefully growing rules.
arXiv Detail & Related papers (2022-06-17T14:34:35Z)
- Learning logic programs by combining programs [24.31242130341093]
We introduce an approach where we learn small non-separable programs and combine them.
We implement our approach in a constraint-driven ILP system.
Our experiments on multiple domains, including game playing and program synthesis, show that our approach can drastically outperform existing approaches.
arXiv Detail & Related papers (2022-06-01T10:07:37Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of HPMs is effective in removing unpromising rules.
arXiv Detail & Related papers (2020-06-29T16:33:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.