Crisp complexity of fuzzy classifiers
- URL: http://arxiv.org/abs/2504.15791v1
- Date: Tue, 22 Apr 2025 11:06:25 GMT
- Title: Crisp complexity of fuzzy classifiers
- Authors: Raquel Fernandez-Peralta, Javier Fumanal-Idocin, Javier Andreu-Perez
- Abstract summary: We study different possible crisp descriptions and implement an algorithm to obtain them. Our results can help both fuzzy and non-fuzzy practitioners better understand the way in which fuzzy rule bases partition the feature space.
- Score: 0.7874708385247353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rule-based systems are a very popular form of explainable AI, particularly in the fuzzy community, where fuzzy rules are widely used for control and classification problems. However, fuzzy rule-based classifiers struggle to gain traction outside of fuzzy venues, because users are often unfamiliar with fuzzy logic and because fuzzy partitions are not easy to interpret in some situations. In this work, we propose a methodology to reduce fuzzy rule-based classifiers to crisp rule-based classifiers. We study different possible crisp descriptions and implement an algorithm to obtain them. We also analyze the complexity of the resulting crisp classifiers. We believe that our results can help both fuzzy and non-fuzzy practitioners better understand how fuzzy rule bases partition the feature space and how easily one system can be translated into the other and vice versa. Our complexity metric can also help to choose between different fuzzy classifiers based on what the equivalent crisp partitions look like.
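To make the idea concrete, here is a minimal sketch (not the authors' algorithm) of one natural crisp description for a single feature covered by triangular fuzzy sets: each point is assigned to the fuzzy set with maximum membership, and the number of resulting crisp intervals serves as a simple complexity proxy. The function names and the grid-based approximation below are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership with feet a, c and peak b (assumes a < b < c)."""
        return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

    def max_membership_partition(fuzzy_sets, lo, hi, n_grid=10_000):
        """Approximate the maximum-membership crisp partition of [lo, hi].

        Each grid point is assigned to the fuzzy set with the highest
        membership; contiguous runs of the same winner become crisp
        intervals, and their count is a simple complexity proxy.
        """
        xs = np.linspace(lo, hi, n_grid)
        winners = np.stack([tri(xs, *p) for p in fuzzy_sets]).argmax(axis=0)
        intervals, start = [], 0
        for i in range(1, n_grid):
            if winners[i] != winners[start]:
                intervals.append(((xs[start], xs[i]), int(winners[start])))
                start = i
        intervals.append(((xs[start], xs[-1]), int(winners[start])))
        return intervals

    # Three overlapping linguistic labels ("low", "medium", "high") on a [0, 10] feature.
    sets = [(-5, 0, 5), (0, 5, 10), (5, 10, 15)]
    for (a, b), label in max_membership_partition(sets, 0, 10):
        print(f"[{a:.2f}, {b:.2f}] -> fuzzy set {label}")

For these three sets the partition collapses to three intervals with crossovers at 2.5 and 7.5; a fuzzy rule base whose equivalent crisp description needs many more intervals would score as more complex under this kind of proxy.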
Related papers
- Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective [100.54185280153753]
We find that both classifier guidance and classifier-free guidance achieve conditional generation by pushing the denoising diffusion trajectories away from decision boundaries. We propose a generic postprocessing step built upon flow-matching to shrink the gap between the learned distribution for a pretrained denoising diffusion model and the real data distribution.
arXiv Detail & Related papers (2025-03-13T17:59:59Z)
- Compact Rule-Based Classifier Learning via Gradient Descent [0.7874708385247353]
Rule-based models play a crucial role in scenarios that require transparency and accountable decision-making. We introduce a new rule-based classifier trained using gradient descent, in which the user can control the maximum number and length of the rules. For numerical partitions, the user can also control the partitions used with fuzzy sets, which also helps keep the number of partitions small. (A minimal sketch of one possible differentiable rule parameterization appears after this list.)
arXiv Detail & Related papers (2025-02-03T14:13:39Z)
- Bidirectional Logits Tree: Pursuing Granularity Reconcilement in Fine-Grained Classification [89.20477310885731]
This paper addresses the challenge of Granularity Competition in fine-grained classification tasks. Existing approaches typically develop independent hierarchy-aware models based on shared features extracted from a common base encoder. We propose a novel framework called the Bidirectional Logits Tree (BiLT) for Granularity Reconcilement.
arXiv Detail & Related papers (2024-12-17T10:42:19Z)
- RuleExplorer: A Scalable Matrix Visualization for Understanding Tree Ensemble Classifiers [20.416696003269674]
This paper introduces a scalable visual analysis method to explain tree ensemble classifiers that contain tens of thousands of rules. We develop an anomaly-biased model reduction method to prioritize these rules at each hierarchical level. Our method fosters a deeper understanding of both common and anomalous rules, thereby enhancing interpretability without sacrificing comprehensiveness.
arXiv Detail & Related papers (2024-09-05T01:48:11Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Efficient learning of large sets of locally optimal classification rules [0.0]
Conventional rule learning algorithms aim at finding a set of simple rules, where each rule covers as many examples as possible.
In this paper, we argue that the rules found in this way may not be the optimal explanations for each of the examples they cover.
We propose an efficient algorithm that aims at finding the best rule covering each training example in a greedy optimization consisting of one specialization and one generalization loop.
arXiv Detail & Related papers (2023-01-24T11:40:28Z)
- On the Complexity of Representation Learning in Contextual Linear Bandits [110.84649234726442]
We show that representation learning is fundamentally more complex than linear bandits.
In particular, learning with a given set of representations is never simpler than learning with the worst realizable representation in the set.
arXiv Detail & Related papers (2022-12-19T13:08:58Z)
- Concise and interpretable multi-label rule sets [13.416159628299779]
We develop a multi-label classifier that can be represented as a concise set of simple "if-then" rules.
Our method is able to find a small set of relevant patterns that lead to accurate multi-label classification.
arXiv Detail & Related papers (2022-10-04T11:23:50Z)
- Bayes Point Rule Set Learning [5.065947993017157]
Interpretability plays an increasingly important role in the design of machine learning algorithms.
Disjunctive Normal Forms are arguably the most interpretable way to express a set of rules.
We propose an effective bottom-up extension of the popular FIND-S algorithm to learn DNF-type rulesets.
arXiv Detail & Related papers (2022-04-11T16:50:41Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Optimal Clustering from Noisy Binary Feedback [75.17453757892152]
We study the problem of clustering a set of items from binary user feedback.
We devise an algorithm with a minimal cluster recovery error rate.
For adaptive selection, we develop an algorithm inspired by the derivation of the information-theoretical error lower bounds.
arXiv Detail & Related papers (2019-10-14T09:18:26Z)
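As referenced in the Compact Rule-Based Classifier Learning entry above, here is a minimal sketch of one way a rule-based classifier can be parameterized so that gradient descent applies: rules fire as products of smooth (here Gaussian) memberships, so every operation in the forward pass is differentiable. The Gaussian choice, the names, and the array shapes are illustrative assumptions, not the paper's exact design; an autodiff framework such as JAX or PyTorch would supply the gradients for training.

    import numpy as np

    def gaussian_membership(x, centers, widths):
        # Membership of each feature value in each rule's antecedent.
        # x: (n_features,); centers, widths: (n_rules, n_features).
        return np.exp(-(((x - centers) / widths) ** 2))

    def rule_classifier(x, centers, widths, class_weights):
        # Product t-norm turns per-feature memberships into rule firings,
        # then rule weights aggregate the firings into class scores.
        firing = gaussian_membership(x, centers, widths).prod(axis=1)  # (n_rules,)
        scores = class_weights.T @ firing                              # (n_classes,)
        return int(scores.argmax()), firing

    rng = np.random.default_rng(0)
    centers = rng.uniform(0, 1, size=(4, 2))        # 4 rules over 2 features
    widths = np.full((4, 2), 0.3)
    class_weights = rng.uniform(0, 1, size=(4, 3))  # 3 output classes
    label, firing = rule_classifier(np.array([0.2, 0.8]),
                                    centers, widths, class_weights)
    print("predicted class:", label, "| rule firings:", firing.round(3))

Because the forward pass is smooth, penalizing weak firings or extra rules in the loss is one plausible way the rule count could be kept compact during training.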
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.