FRRI: a novel algorithm for fuzzy-rough rule induction
- URL: http://arxiv.org/abs/2403.04447v2
- Date: Thu, 29 Aug 2024 11:28:10 GMT
- Title: FRRI: a novel algorithm for fuzzy-rough rule induction
- Authors: Henri Bollaert, Marko Palangetić, Chris Cornelis, Salvatore Greco, Roman Słowiński
- Abstract summary: We introduce a novel rule induction algorithm called Fuzzy Rough Rule Induction (FRRI).
We provide background and explain the workings of our algorithm.
We find that our algorithm is more accurate while creating small rulesets consisting of relatively short rules.
- Score: 0.8575004906002217
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Interpretability is the next frontier in machine learning research. In the search for white box models - as opposed to black box models, like random forests or neural networks - rule induction algorithms are a logical and promising option, since the rules can easily be understood by humans. Fuzzy and rough set theory have been successfully applied to this archetype, almost always separately. As both approaches to rule induction involve granular computing based on the concept of equivalence classes, it is natural to combine them. The QuickRules [Jensen and Cornelis, 2009] algorithm was a first attempt at using fuzzy rough set theory for rule induction. It is based on QuickReduct, a greedy algorithm for building decision reducts. QuickRules already showed an improvement over other rule induction methods. However, to evaluate the full potential of a fuzzy rough rule induction algorithm, one needs to start from the foundations. In this paper, we introduce a novel rule induction algorithm called Fuzzy Rough Rule Induction (FRRI). We provide background and explain the workings of our algorithm. Furthermore, we perform a computational experiment to evaluate the performance of our algorithm and compare it to other state-of-the-art rule induction approaches. We find that our algorithm is more accurate while creating small rulesets consisting of relatively short rules. We end the paper by outlining some directions for future work.
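To make the fuzzy-rough machinery concrete, here is a minimal Python sketch of the standard fuzzy-rough lower approximation that underlies approaches such as QuickRules: a per-attribute similarity relation is combined with the min t-norm, and class membership is propagated with the Łukasiewicz implicator. This is illustrative background under those standard choices, not the FRRI algorithm itself; all function names are ours.

```python
import numpy as np

def attribute_similarity(X):
    """Per-attribute fuzzy similarity R_a(x, y) = 1 - |a(x) - a(y)| / range(a).

    X: (n_samples, n_features) array of numeric attributes.
    Returns an (n_features, n_samples, n_samples) array of similarity relations.
    """
    rng = X.max(axis=0) - X.min(axis=0)
    rng[rng == 0] = 1.0  # avoid division by zero for constant attributes
    diffs = np.abs(X[:, None, :] - X[None, :, :]) / rng  # (n, n, m)
    return np.clip(1.0 - diffs, 0.0, 1.0).transpose(2, 0, 1)

def fuzzy_rough_lower_approximation(X, y):
    """Membership of each sample in the lower approximation of its own class,
    combining attributes with the min t-norm and using the Lukasiewicz
    implicator I(a, b) = min(1, 1 - a + b)."""
    R = attribute_similarity(X).min(axis=0)        # combined relation R(x, y)
    A = (y[None, :] == y[:, None]).astype(float)   # crisp class membership A(y)
    implications = np.minimum(1.0, 1.0 - R + A)    # I(R(x, y), A(y))
    return implications.min(axis=1)                # infimum over all y

# Toy usage: samples with membership close to 1 sit firmly inside their class region.
X = np.array([[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]])
y = np.array([0, 0, 1, 1])
print(fuzzy_rough_lower_approximation(X, y))
```

Samples with high lower-approximation membership are unambiguous representatives of their class, which is what makes this quantity a natural starting point for rule induction.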
Related papers
- Bregman-divergence-based Arimoto-Blahut algorithm [53.64687146666141]
We generalize the Arimoto-Blahut algorithm to a general function defined over a Bregman-divergence system.
We propose a convex-optimization-free algorithm that can be applied to classical and quantum rate-distortion theory.
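For orientation, the classical Arimoto-Blahut iteration that this work generalizes alternates between the induced output distribution and a multiplicative update of the input distribution. Below is a textbook sketch for discrete memoryless channel capacity; the function name is ours, and this is the classical algorithm, not the Bregman-divergence generalization.

```python
import numpy as np

def blahut_arimoto_capacity(W, tol=1e-9, max_iter=1000):
    """Classical Blahut-Arimoto iteration for the capacity of a discrete
    memoryless channel W, where W[x, y] = P(Y = y | X = x)."""
    n_inputs = W.shape[0]
    p = np.full(n_inputs, 1.0 / n_inputs)  # start from the uniform input distribution
    for _ in range(max_iter):
        q = p @ W                                    # induced output distribution q(y)
        # D[x] = KL(W(.|x) || q), with the 0 log 0 = 0 convention
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(W > 0, W / q, 1.0)
        D = np.sum(W * np.log(ratio), axis=1)
        lower, upper = p @ D, D.max()                # sandwich bounds on capacity
        if upper - lower < tol:
            break
        p = p * np.exp(D)
        p /= p.sum()                                 # multiplicative update, renormalized
    return lower / np.log(2)  # capacity in bits

# Binary symmetric channel with crossover 0.1: capacity = 1 - H(0.1), about 0.531 bits
W = np.array([[0.9, 0.1], [0.1, 0.9]])
print(blahut_arimoto_capacity(W))
```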
arXiv Detail & Related papers (2024-08-10T06:16:24Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
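For reference, a sketch of the classical CountSketch construction that the learned variant builds on: each column of the sketch matrix holds a single random ±1 entry, and applying the sketch is one matrix multiplication. The cited work learns the row positions instead of drawing them uniformly; names below are illustrative.

```python
import numpy as np

def countsketch_matrix(m, n, rng):
    """Classical CountSketch: each column of the m x n sketch matrix S has a
    single nonzero +/-1 entry in a uniformly random row. The cited paper
    learns these row positions instead of drawing them at random."""
    rows = rng.integers(0, m, size=n)        # random row position per column
    signs = rng.choice([-1.0, 1.0], size=n)  # random sign per column
    S = np.zeros((m, n))
    S[rows, np.arange(n)] = signs
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))   # tall data matrix
S = countsketch_matrix(100, 1000, rng)
SA = S @ A                            # compressed sketch, 100 x 50
print(SA.shape)
```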
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
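A toy PyTorch sketch of the block-local training pattern described above, assuming a simple fully connected stack: each block has its own label head and local loss, and inputs are detached at block boundaries so no gradient flows between blocks. This illustrates the general pattern only, not the authors' exact CaFo architecture.

```python
import torch
from torch import nn

# Each block pairs a feature transform with its own classifier head and optimizer.
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
])
heads = nn.ModuleList([nn.Linear(256, 10), nn.Linear(128, 10)])
optimizers = [torch.optim.Adam(list(b.parameters()) + list(h.parameters()), lr=1e-3)
              for b, h in zip(blocks, heads)]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 784)            # dummy batch
y = torch.randint(0, 10, (32,))     # dummy labels

for block, head, opt in zip(blocks, heads, optimizers):
    x = block(x.detach())           # detach: no backprop across block boundaries
    loss = loss_fn(head(x), y)      # each block outputs its own label distribution
    opt.zero_grad()
    loss.backward()
    opt.step()
```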
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Efficient learning of large sets of locally optimal classification rules [0.0]
Conventional rule learning algorithms aim at finding a set of simple rules, where each rule covers as many examples as possible.
In this paper, we argue that the rules found in this way may not be the optimal explanations for each of the examples they cover.
We propose an efficient algorithm that aims at finding the best rule covering each training example in a greedy optimization consisting of one specialization and one generalization loop.
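A toy illustration of the specialize-then-generalize idea on categorical data: conditions are added greedily from the target example (so it always remains covered) until precision is maximal, then redundant conditions are dropped. This is a simplified sketch of the pattern, not the paper's algorithm; all names are ours.

```python
def rule_precision(rule, X, y, target):
    """Precision of a rule (a dict: feature index -> required value)."""
    covered = [i for i, x in enumerate(X) if all(x[f] == v for f, v in rule.items())]
    if not covered:
        return 0.0
    return sum(1 for i in covered if y[i] == target) / len(covered)

def best_rule_for_example(x, target, X, y):
    rule = {}  # start maximally general: the empty rule covers everything
    # Specialization loop: greedily add the condition (taken from the example
    # itself, so the example stays covered) that most improves precision.
    while True:
        if rule_precision(rule, X, y, target) == 1.0:
            break
        candidates = [{**rule, f: x[f]} for f in range(len(x)) if f not in rule]
        if not candidates:
            break
        rule = max(candidates, key=lambda r: rule_precision(r, X, y, target))
    # Generalization loop: drop any condition whose removal keeps precision.
    for f in list(rule):
        relaxed = {g: v for g, v in rule.items() if g != f}
        if rule_precision(relaxed, X, y, target) >= rule_precision(rule, X, y, target):
            rule = relaxed
    return rule

X = [("a", 0), ("a", 1), ("b", 0), ("b", 1)]
y = ["pos", "pos", "pos", "neg"]
print(best_rule_for_example(X[0], "pos", X, y))  # e.g. {0: 'a'}
```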
arXiv Detail & Related papers (2023-01-24T11:40:28Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- The no-free-lunch theorems of supervised learning [0.0]
The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification.
We argue that many standard learning algorithms should rather be understood as model-dependent.
arXiv Detail & Related papers (2022-02-09T15:24:30Z)
- A Clustering and Demotion Based Algorithm for Inductive Learning of Default Theories [4.640835690336653]
We present a clustering- and demotion-based algorithm called Kmeans-FOLD to induce nonmonotonic logic programs from positive and negative examples.
Our algorithm generates a more concise logic program compared to the FOLD algorithm.
Experiments on the UCI dataset show that a combination of K-Means clustering and our demotion strategy produces significant improvement for datasets with more than one cluster of positive examples.
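A rough sketch of the clustering step, assuming scikit-learn: partition the positive examples with K-means, then learn one model per cluster against all negatives. A decision tree stands in for the FOLD learner here, so this shows only the overall pattern, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def cluster_then_learn(X_pos, X_neg, n_clusters=2, random_state=0):
    """Split the positive examples into clusters, then learn one model per
    cluster against all negatives (a decision tree stands in for FOLD)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state).fit(X_pos)
    models = []
    for c in range(n_clusters):
        pos_c = X_pos[km.labels_ == c]
        X = np.vstack([pos_c, X_neg])
        y = np.array([1] * len(pos_c) + [0] * len(X_neg))
        models.append(DecisionTreeClassifier(max_depth=3).fit(X, y))
    return models  # an example is positive if any per-cluster model fires

rng = np.random.default_rng(0)
X_pos = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
X_neg = rng.normal(1.5, 0.3, (40, 2))
print(len(cluster_then_learn(X_pos, X_neg)))
```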
arXiv Detail & Related papers (2021-09-26T14:50:18Z)
- The Bayesian Learning Rule [14.141964578853262]
We show that many machine-learning algorithms are specific instances of a single algorithm called the Bayesian learning rule.
The rule, derived from Bayesian principles, yields a wide range of algorithms from fields such as optimization, deep learning, and graphical models.
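A commonly cited form of the rule, assuming an exponential-family candidate distribution with natural parameters λ, natural gradient, step size ρ_t, loss ℓ, and entropy H (notation ours):

```latex
\lambda_{t+1} \;=\; \lambda_t \;-\; \rho_t \,\widetilde{\nabla}_{\lambda}
  \left( \mathbb{E}_{q_{\lambda_t}}\!\left[ \ell(\theta) \right] - \mathcal{H}\!\left(q_{\lambda_t}\right) \right)
```

Different choices of the family and of how the expectation is approximated are reported to recover classical methods ranging from SGD and Adam-style optimizers to Kalman filtering and variational inference.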
arXiv Detail & Related papers (2021-07-09T17:28:55Z)
- The Implications of the No-Free-Lunch Theorems for Meta-induction [1.4431534196506413]
The no-free-lunch theorems (NFL) have major implications for the problem of (meta) induction.
There is a rich set of free lunches which instead concern the statistical correlations among the generalization errors of induction algorithms.
The prior that Schurz advocates, which is uniform over bit frequencies rather than bit patterns, is contradicted by thousands of experiments in statistical physics.
arXiv Detail & Related papers (2021-03-22T15:53:00Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods either suffer from searching in a large search space or from ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
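Schematically, Lagrangian decomposition duplicates the activations at layer boundaries and dualizes the coupling constraints, so the inner minimization splits into independent per-layer subproblems and, by weak duality, any choice of multipliers yields a valid bound. A generic form (our notation, not necessarily the paper's exact formulation):

```latex
% Duplicate each layer output z_k into a copy \hat{z}_k fed to the next layer,
% then dualize the coupling constraints z_k = \hat{z}_k with multipliers \mu_k.
\min_{z,\hat z} \; c^{\top} z_L
\;\; \text{s.t.} \;\; z_{k+1} = \sigma\!\left(W_k \hat z_k + b_k\right), \;\; \hat z_k = z_k
\qquad \Longrightarrow \qquad
\max_{\mu} \, \min_{z,\hat z} \; c^{\top} z_L + \sum_{k} \mu_k^{\top}\!\left(z_k - \hat z_k\right)
```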
arXiv Detail & Related papers (2020-02-24T17:55:10Z)