Probabilistic Truly Unordered Rule Sets
- URL: http://arxiv.org/abs/2401.09918v1
- Date: Thu, 18 Jan 2024 12:03:19 GMT
- Title: Probabilistic Truly Unordered Rule Sets
- Authors: Lincen Yang, Matthijs van Leeuwen
- Abstract summary: We propose TURS, for Truly Unordered Rule Sets.
We exploit the probabilistic properties of our rule sets, with the intuition of only allowing rules to overlap if they have similar probabilistic outputs.
We benchmark against a wide range of rule-based methods and demonstrate that our method learns rule sets that have lower model complexity and highly competitive predictive performance.
- Score: 4.169915659794567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rule set learning has recently been frequently revisited because of its
interpretability. Existing methods have several shortcomings though. First,
most existing methods impose orders among rules, either explicitly or
implicitly, which makes the models less comprehensible. Second, due to the
difficulty of handling conflicts caused by overlaps (i.e., instances covered by
multiple rules), existing methods often do not consider probabilistic rules.
Third, learning classification rules for multi-class targets is understudied, as
most existing methods focus on binary classification or multi-class
classification via the "one-versus-rest" approach.
To address these shortcomings, we propose TURS, for Truly Unordered Rule
Sets. To resolve conflicts caused by overlapping rules, we propose a novel
model that exploits the probabilistic properties of our rule sets, with the
intuition of only allowing rules to overlap if they have similar probabilistic
outputs. We next formalize the problem of learning a TURS model based on the
MDL principle and develop a carefully designed heuristic algorithm. We
benchmark against a wide range of rule-based methods and demonstrate that our
method learns rule sets that have lower model complexity and highly competitive
predictive performance. In addition, we show that rules in our model are
empirically "independent" and hence truly unordered.
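The overlap criterion described in the abstract can be illustrated with a toy check: two rules are allowed to overlap only if their class-probability outputs are similar. A minimal sketch follows; the divergence measure (Jensen-Shannon) and the threshold are illustrative assumptions, not the paper's exact criterion.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def may_overlap(rule_a_probs, rule_b_probs, threshold=0.05):
    """Allow two rules to overlap only if their class-probability
    outputs are close (small divergence) -- illustrative threshold."""
    return js_divergence(rule_a_probs, rule_b_probs) <= threshold

# Near-identical class distributions: overlap is permitted.
print(may_overlap([0.7, 0.2, 0.1], [0.68, 0.22, 0.10]))  # True
# Conflicting class distributions: overlap is rejected.
print(may_overlap([0.7, 0.2, 0.1], [0.1, 0.2, 0.7]))     # False
```

This captures only the stated intuition; the actual TURS model formalizes the criterion probabilistically and learns the rule set under the MDL principle.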
Related papers
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- On Regularization and Inference with Label Constraints [62.60903248392479]
We compare two strategies for encoding label constraints in a machine learning pipeline, regularization with constraints and constrained inference.
For regularization, we show that it narrows the generalization gap by precluding models that are inconsistent with the constraints.
For constrained inference, we show that it reduces the population risk by correcting a model's violation, and hence turns the violation into an advantage.
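The constrained-inference idea summarized above (correcting a model's violation at prediction time) can be sketched as a brute-force search over label assignments; the constraint, scores, and helper names here are hypothetical, not the paper's method.

```python
from itertools import product

def constrained_inference(scores, constraint):
    """Return the highest-scoring binary label assignment that
    satisfies the constraint, correcting any violation."""
    best, best_score = None, float("-inf")
    for labels in product([0, 1], repeat=len(scores)):
        if not constraint(labels):
            continue  # skip assignments that violate the constraint
        s = sum(sc[l] for sc, l in zip(scores, labels))
        if s > best_score:
            best, best_score = labels, s
    return best

# Hypothetical constraint: label 0 implies label 1.
implies = lambda y: not (y[0] == 1 and y[1] == 0)
scores = [[0.1, 0.9], [0.8, 0.2]]  # unconstrained argmax (1, 0) violates
print(constrained_inference(scores, implies))  # (1, 1)
```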
arXiv Detail & Related papers (2023-07-08T03:39:22Z)
- Learning Locally Interpretable Rule Ensemble [2.512827436728378]
A rule ensemble is an interpretable model based on the linear combination of weighted rules.
This paper proposes a new framework for learning a rule ensemble model that is both accurate and interpretable.
arXiv Detail & Related papers (2023-06-20T12:06:56Z)
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios.
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- Concise and interpretable multi-label rule sets [13.416159628299779]
We develop a multi-label classifier that can be represented as a concise set of simple "if-then" rules.
Our method is able to find a small set of relevant patterns that lead to accurate multi-label classification.
arXiv Detail & Related papers (2022-10-04T11:23:50Z)
- Truly Unordered Probabilistic Rule Sets for Multi-class Classification [0.0]
We propose TURS, for Truly Unordered Rule Sets.
We first formalise the problem of learning truly unordered rule sets.
We then develop a two-phase algorithm that learns rule sets by carefully growing rules.
arXiv Detail & Related papers (2022-06-17T14:34:35Z)
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie on the same subspace with small dimension.
Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
- Fair Decision Rules for Binary Classification [0.0]
We consider the problem of building Boolean rule sets in disjunctive normal form (DNF).
We formulate the problem as an integer program that maximizes classification accuracy with explicit constraints on two different measures of classification parity.
Compared to other fair and interpretable classifiers, our method is able to find rule sets that meet stricter notions of fairness with a modest trade-off in accuracy.
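The integer-programming formulation summarized above can be mimicked by a brute-force sketch: among candidate prediction vectors, pick the most accurate one whose statistical-parity gap stays under a bound. The parity measure and the `eps` bound are illustrative assumptions, not the paper's exact constraints.

```python
def statistical_parity_gap(preds, groups):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)| for binary groups."""
    rate = lambda g: (sum(p for p, gr in zip(preds, groups) if gr == g)
                      / max(1, sum(1 for gr in groups if gr == g)))
    return abs(rate(0) - rate(1))

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def pick_fair_classifier(candidates, labels, groups, eps=0.1):
    """Maximize accuracy subject to a parity constraint -- a toy
    stand-in for the paper's integer program."""
    feasible = [c for c in candidates
                if statistical_parity_gap(c, groups) <= eps]
    return max(feasible, key=lambda c: accuracy(c, labels)) if feasible else None

# The perfectly accurate candidate also has zero parity gap here.
print(pick_fair_classifier([[1, 1, 0, 0], [1, 0, 1, 0]],
                           labels=[1, 0, 1, 0], groups=[0, 0, 1, 1]))
```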
arXiv Detail & Related papers (2021-07-03T02:32:17Z)
- Diverse Adversaries for Mitigating Bias in Training [58.201275105195485]
We propose a novel approach to adversarial learning based on the use of multiple diverse discriminators.
Experimental results show that our method substantially improves over standard adversarial removal methods.
arXiv Detail & Related papers (2021-01-25T10:35:13Z)
- Diverse Rule Sets [20.170305081348328]
Rule-based systems are experiencing a renaissance owing to their intuitive if-then representation.
We propose a novel approach of inferring diverse rule sets, by optimizing small overlap among decision rules.
We then devise an efficient randomized algorithm, which samples rules that are highly discriminative and have small overlap.
arXiv Detail & Related papers (2020-06-17T14:15:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.