Truly Unordered Probabilistic Rule Sets for Multi-class Classification
- URL: http://arxiv.org/abs/2206.08804v1
- Date: Fri, 17 Jun 2022 14:34:35 GMT
- Title: Truly Unordered Probabilistic Rule Sets for Multi-class Classification
- Authors: Lincen Yang, Matthijs van Leeuwen
- Abstract summary: We propose TURS, for Truly Unordered Rule Sets.
We first formalise the problem of learning truly unordered rule sets.
We then develop a two-phase algorithm that learns rule sets by carefully growing rules.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rule set learning has long been studied and has recently been frequently
revisited due to the need for interpretable models. Still, existing methods
have several shortcomings: 1) most recent methods require a binary feature
matrix as input, while learning rules directly from numeric variables is
understudied; 2) existing methods impose orders among rules, either explicitly
or implicitly, which harms interpretability; and 3) currently no method exists
for learning probabilistic rule sets for multi-class target variables (there is
only a method for probabilistic rule lists).
We propose TURS, for Truly Unordered Rule Sets, which addresses these
shortcomings. We first formalise the problem of learning truly unordered rule
sets. To resolve conflicts caused by overlapping rules, i.e., instances covered
by multiple rules, we propose a novel approach that exploits the probabilistic
properties of our rule sets. We next develop a two-phase heuristic algorithm
that learns rule sets by carefully growing rules. An important innovation is
that we use a surrogate score to take the global potential of the rule set into
account when learning a local rule.
Finally, we empirically demonstrate that, compared to non-probabilistic and
(explicitly or implicitly) ordered state-of-the-art methods, our method learns
rule sets that not only have better interpretability (i.e., they are smaller
and truly unordered), but also better predictive performance.
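The central idea — an unordered set of rules, each carrying its own class distribution, with overlap conflicts resolved through the rules' probabilistic outputs rather than through an imposed order — can be illustrated with a minimal sketch. This is not the authors' algorithm: here overlaps are resolved by simply averaging the class distributions of all covering rules (one straightforward order-independent choice), and all names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    condition: Callable[[List[float]], bool]  # predicate over a numeric feature vector
    class_probs: List[float]                  # estimated P(class | rule covers x)

class UnorderedRuleSet:
    """Truly unordered rule set: predictions never depend on rule order."""

    def __init__(self, rules: List[Rule], default_probs: List[float]):
        self.rules = rules
        self.default_probs = default_probs    # fallback for uncovered instances

    def predict_proba(self, x: List[float]) -> List[float]:
        covering = [r.class_probs for r in self.rules if r.condition(x)]
        if not covering:
            return self.default_probs
        # Order-independent conflict resolution for overlapping rules:
        # average the class distributions of all rules covering x.
        return [sum(ps) / len(covering) for ps in zip(*covering)]

    def predict(self, x: List[float]) -> int:
        probs = self.predict_proba(x)
        return probs.index(max(probs))

# Two rules with numeric conditions on a binary target; they overlap,
# but their class distributions are similar, so averaging is well-behaved.
r1 = Rule(condition=lambda x: x[0] > 2.0, class_probs=[0.9, 0.1])
r2 = Rule(condition=lambda x: x[1] > 1.0, class_probs=[0.8, 0.2])
rule_set = UnorderedRuleSet([r1, r2], default_probs=[0.5, 0.5])
```

Swapping `r1` and `r2` leaves every prediction unchanged, which is exactly what a rule list, with its implicit if-else ordering, cannot guarantee.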
Related papers
- Neuro-Symbolic Rule Lists [31.085257698392354]
NeuRules is an end-to-end trainable model that unifies discretization, rule learning, and rule order into a single framework.
We show that NeuRules consistently outperforms neuro-symbolic methods, effectively learning simple and complex rules, as well as their order, across a wide range of datasets.
arXiv Detail & Related papers (2024-11-10T11:10:36Z)
- Probabilistic Truly Unordered Rule Sets [4.169915659794567]
We propose TURS, for Truly Unordered Rule Sets.
We exploit the probabilistic properties of our rule sets, with the intuition of only allowing rules to overlap if they have similar probabilistic outputs.
We benchmark against a wide range of rule-based methods and demonstrate that our method learns rule sets that have lower model complexity and highly competitive predictive performance.
arXiv Detail & Related papers (2024-01-18T12:03:19Z)
- Large Language Models can Learn Rules [106.40747309894236]
We present Hypotheses-to-Theories (HtT), a framework that learns a rule library for reasoning with large language models (LLMs).
Experiments on relational reasoning, numerical reasoning and concept learning problems show that HtT improves existing prompting methods.
The learned rules are also transferable to different models and to different forms of the same problem.
arXiv Detail & Related papers (2023-10-10T23:07:01Z)
- Learning Locally Interpretable Rule Ensemble [2.512827436728378]
A rule ensemble is an interpretable model based on the linear combination of weighted rules.
This paper proposes a new framework for learning a rule ensemble model that is both accurate and interpretable.
arXiv Detail & Related papers (2023-06-20T12:06:56Z)
- Efficient learning of large sets of locally optimal classification rules [0.0]
Conventional rule learning algorithms aim at finding a set of simple rules, where each rule covers as many examples as possible.
In this paper, we argue that the rules found in this way may not be the optimal explanations for each of the examples they cover.
We propose an efficient algorithm that aims at finding the best rule covering each training example in a greedy optimization consisting of one specialization and one generalization loop.
arXiv Detail & Related papers (2023-01-24T11:40:28Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic-based Machine Learning method that implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- Differentiable Rule Induction with Learned Relational Features [9.193818627108572]
The Relational Rule Network (RRN) is a neural architecture that learns predicates representing a linear relationship among attributes, along with the rules that use them.
On benchmark tasks we show that these predicates are simple enough to retain interpretability, yet improve prediction accuracy and provide rule sets that are more concise than those of state-of-the-art rule induction algorithms.
arXiv Detail & Related papers (2022-01-17T16:46:50Z)
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Discovering Useful Compact Sets of Sequential Rules in a Long Sequence [57.684967309375274]
COSSU is an algorithm to mine small and meaningful sets of sequential rules.
We show that COSSU can successfully retrieve relevant sets of closed sequential rules from a long sequence.
arXiv Detail & Related papers (2021-09-15T18:25:18Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is the squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.