Optimising the attribute order in Fuzzy Rough Rule Induction
- URL: http://arxiv.org/abs/2506.02805v1
- Date: Tue, 03 Jun 2025 12:34:40 GMT
- Title: Optimising the attribute order in Fuzzy Rough Rule Induction
- Authors: Henri Bollaert, Chris Cornelis, Marko Palangetić, Salvatore Greco, Roman Słowiński
- Abstract summary: In our previous work, we introduced FRRI, a novel rule induction algorithm based on fuzzy rough set theory. We demonstrated experimentally that FRRI outperformed other rule induction methods with regard to accuracy and number of rules. In this paper, we show that optimising only the order of attributes using known methods does not improve the performance of FRRI on multiple metrics.
- Score: 0.8575004906002217
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Interpretability is the next pivotal frontier in machine learning research. In the pursuit of glass box models - as opposed to black box models, like random forests or neural networks - rule induction algorithms are a logical and promising avenue, as the rules can easily be understood by humans. In our previous work, we introduced FRRI, a novel rule induction algorithm based on fuzzy rough set theory. We demonstrated experimentally that FRRI outperformed other rule induction methods with regard to accuracy and number of rules. FRRI leverages a fuzzy indiscernibility relation to partition the data space into fuzzy granules, which are then combined into a minimal covering set of rules. This indiscernibility relation is constructed by removing attributes from rules in a greedy way. This raises the question: does the order of the attributes matter? In this paper, we show that optimising only the order of attributes using known methods from fuzzy rough set theory and classical machine learning does not improve the performance of FRRI on multiple metrics. However, removing a small number of attributes using fuzzy rough feature selection during this step positively affects balanced accuracy and the average rule length.
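The greedy attribute-removal step described in the abstract can be sketched roughly as follows. This is a simplified illustration, not the paper's actual implementation: the rule representation and the `is_consistent` check are hypothetical stand-ins for FRRI's fuzzy-rough machinery.

```python
def greedy_shorten(rule, attribute_order, is_consistent):
    """Drop attributes from a rule one at a time, following the given
    order, keeping each removal only if the shortened rule is still
    consistent (e.g. covers no training objects of a different class).
    The final rule therefore depends on the order of attributes."""
    current = dict(rule)
    for attr in attribute_order:
        if attr not in current or len(current) == 1:
            continue
        candidate = {a: v for a, v in current.items() if a != attr}
        if is_consistent(candidate):
            current = candidate
    return current

# Toy usage: a rule over three attributes, where only 'colour' is needed.
rule = {"colour": "red", "size": "big", "weight": "heavy"}
needed = {"colour"}  # hypothetical: consistency holds iff 'colour' is kept
shortened = greedy_shorten(rule, ["size", "weight", "colour"],
                           lambda r: needed <= set(r))
```

Trying different values of `attribute_order` here makes the paper's research question concrete: different orders can yield different final rules of different lengths.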
Related papers
- Crisp complexity of fuzzy classifiers [0.7874708385247353]
We study different possible crisp descriptions and implement an algorithm to obtain them. Our results can help both fuzzy and non-fuzzy practitioners better understand the way in which fuzzy rule bases partition the feature space.
arXiv Detail & Related papers (2025-04-22T11:06:25Z) - Neuro-Symbolic Rule Lists [31.085257698392354]
NeuRules is an end-to-end trainable model that unifies discretization, rule learning, and rule order into a single framework.
We show that NeuRules consistently outperforms neuro-symbolic methods, effectively learning simple and complex rules, as well as their order, across a wide range of datasets.
arXiv Detail & Related papers (2024-11-10T11:10:36Z) - Faithful Differentiable Reasoning with Reshuffled Region-based Embeddings [62.93577376960498]
Knowledge graph embedding methods learn geometric representations of entities and relations to predict plausible missing knowledge. We propose RESHUFFLE, a model based on ordering constraints that can faithfully capture a much larger class of rule bases. The entity embeddings in our framework can be learned by a Graph Neural Network (GNN), which effectively acts as a differentiable rule base.
arXiv Detail & Related papers (2024-06-13T18:37:24Z) - Neuro-Symbolic Temporal Point Processes [13.72758658973969]
We introduce a neural-symbolic rule induction framework within the temporal point process model.
The negative log-likelihood is the loss that guides the learning, where the explanatory logic rules and their weights are learned end-to-end.
Our approach showcases notable efficiency and accuracy across synthetic and real datasets.
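The negative log-likelihood mentioned above has a simple closed form in the homogeneous-Poisson special case: with constant intensity λ and n events on [0, T], the NLL is λT − n log λ. A minimal sketch of that base case (the paper's learned, rule-guided intensity is of course far richer):

```python
import math

def poisson_nll(event_times, lam, horizon):
    """NLL of a homogeneous Poisson process with intensity lam on [0, horizon]:
    -sum_i log(lam) + integral_0^horizon lam dt = lam * horizon - n * log(lam)."""
    n = len(event_times)
    return lam * horizon - n * math.log(lam)

events = [0.3, 0.8, 1.1, 1.9]  # four events observed on [0, 2]
# The NLL is minimised at lam = n / horizon = 2.0
```

End-to-end learning in the neuro-symbolic setting amounts to minimising this kind of loss over both the rule weights and the intensity parameters.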
arXiv Detail & Related papers (2024-06-06T09:52:56Z) - FRRI: a novel algorithm for fuzzy-rough rule induction [0.8575004906002217]
We introduce a novel rule induction algorithm called Fuzzy Rough Rule Induction (FRRI).
We provide background and explain the workings of our algorithm.
We find that our algorithm is more accurate while creating small rulesets.
arXiv Detail & Related papers (2024-03-07T12:34:03Z) - Implicitly normalized forecaster with clipping for linear and non-linear heavy-tailed multi-armed bandits [85.27420062094086]
Implicitly Normalized Forecaster (INF) is considered an optimal solution for adversarial multi-armed bandit (MAB) problems.
We propose a new version of INF called the Implicitly Normalized Forecaster with clipping (INFclip) for MAB problems with heavy-tailed settings.
We demonstrate that INFclip is optimal for linear heavy-tailed MAB problems and works well for non-linear ones.
arXiv Detail & Related papers (2023-05-11T12:00:43Z) - Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z) - ELM: Embedding and Logit Margins for Long-Tail Learning [70.19006872113862]
Long-tail learning is the problem of learning under skewed label distributions.
We present Embedding and Logit Margins (ELM), a unified approach to enforce margins in logit space.
The ELM method is shown to perform well empirically, and results in tighter tail class embeddings.
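A generic additive logit-margin loss, in the spirit of ELM though not the paper's exact formulation, subtracts a per-class margin from the true-class logit before the softmax cross-entropy, so that tail classes must be separated by a wider gap:

```python
import math

def logit_margin_ce(logits, label, margins):
    """Cross-entropy computed after subtracting a per-class margin from
    the true-class logit (LDAM-style). Larger margins on tail classes
    force the model to win on them by a larger gap."""
    z = list(logits)
    z[label] -= margins[label]
    m = max(z)  # stabilised log-sum-exp
    lse = m + math.log(sum(math.exp(v - m) for v in z))
    return lse - z[label]
```

With all margins zero this reduces to the ordinary cross-entropy; increasing the margin of the true class strictly increases the loss, which is what pushes the decision boundary away from tail classes.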
arXiv Detail & Related papers (2022-04-27T21:53:50Z) - Choquet-Based Fuzzy Rough Sets [2.4063592468412276]
Fuzzy rough set theory can be used as a tool for dealing with inconsistent data when there is a gradual notion of indiscernibility between objects.
To mitigate the sensitivity of classical fuzzy rough sets to noisy data, ordered weighted average (OWA) based fuzzy rough sets were introduced.
We show how the OWA-based approach can be interpreted intuitively in terms of vague quantification, and then generalize it to Choquet-based fuzzy rough sets.
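The OWA operator underlying this line of work reorders its arguments before weighting: given weights w, OWA_w(x) = Σᵢ wᵢ · x₍ᵢ₎ where x₍₁₎ ≥ … ≥ x₍ₙ₎. A minimal sketch:

```python
def owa(values, weights):
    """Ordered weighted average: weights attach to ranks, not positions,
    so the same weight vector can interpolate between max, mean and min."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Weight on the first rank only recovers max; on the last rank, min;
# uniform weights give the plain mean. Soft weight vectors in between
# are what make OWA-based fuzzy rough approximations robust to noise.
```

This rank-based weighting is also what invites the vague-quantification reading ("most", "almost all") that the paper generalises to Choquet integrals.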
arXiv Detail & Related papers (2022-02-22T13:10:16Z) - LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking [62.634516517844496]
We propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules with the performance of neural learning.
Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches.
arXiv Detail & Related papers (2021-06-17T20:22:45Z) - RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.