Fuzzy Rough Sets Based on Fuzzy Quantification
- URL: http://arxiv.org/abs/2212.04327v1
- Date: Tue, 6 Dec 2022 19:59:57 GMT
- Title: Fuzzy Rough Sets Based on Fuzzy Quantification
- Authors: Adnan Theerens and Chris Cornelis
- Abstract summary: We introduce fuzzy quantifier-based fuzzy rough sets (FQFRS), an intuitive generalization of fuzzy rough sets.
We show how several existing models fit into this generalization and how it inspires novel ones.
- Score: 1.4213973379473654
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: One of the weaknesses of classical (fuzzy) rough sets is their sensitivity to
noise, which is particularly undesirable for machine learning applications. One
approach to solving this issue is to make use of fuzzy quantifiers, as done by
the vaguely quantified fuzzy rough set (VQFRS) model. While this idea is
intuitive, the VQFRS model suffers from both theoretical flaws and
suboptimal performance in applications. In this paper, we improve on VQFRS by
introducing fuzzy quantifier-based fuzzy rough sets (FQFRS), an intuitive
generalization of fuzzy rough sets that makes use of general unary and binary
quantification models. We show how several existing models fit into this
generalization and how it inspires novel ones. We propose several binary
quantification models for use with FQFRS. We conduct a
theoretical study of their properties, and investigate their potential by
applying them to classification problems. In particular, we highlight Yager's
Weighted Implication-based (YWI) binary quantification model, which induces a
fuzzy rough set model that is both a significant improvement on VQFRS and a
worthy competitor to the popular ordered weighted averaging based fuzzy
rough set (OWAFRS) model.
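As a rough illustration of the vague-quantification idea the paper builds on, here is a minimal VQFRS-style sketch in Python: the lower approximation of a fuzzy set A at an object y applies a fuzzy quantifier such as "most" (modelled by Zadeh's S-function) to the sigma-count proportion of y's neighbours that belong to A. All function names (`zadeh_s`, `vq_lower`) and the toy data are illustrative assumptions, not the paper's code; FQFRS replaces this fixed scheme with general unary and binary quantification models.

```python
def zadeh_s(p, alpha, beta):
    """Zadeh's S-function: a smooth ramp from 0 (p <= alpha) to 1 (p >= beta),
    commonly used to model relative quantifiers such as 'most'."""
    if p <= alpha:
        return 0.0
    if p >= beta:
        return 1.0
    mid = (alpha + beta) / 2
    if p <= mid:
        return 2 * ((p - alpha) / (beta - alpha)) ** 2
    return 1 - 2 * ((p - beta) / (beta - alpha)) ** 2

def vq_lower(y, R, A, X, quantifier):
    """Quantifier-based lower approximation of fuzzy set A at object y:
    the degree to which 'most' of y's neighbours belong to A,
    computed from sigma-counts over the universe X."""
    neigh = sum(R(x, y) for x in X)              # sigma-count |R_y|
    inter = sum(min(R(x, y), A(x)) for x in X)   # sigma-count |R_y ∩ A|
    return quantifier(inter / neigh) if neigh > 0 else 0.0

# Toy universe: three objects, a reflexive similarity relation, a fuzzy set A.
X = [0, 1, 2]
A = {0: 1.0, 1: 0.8, 2: 0.2}
most = lambda p: zadeh_s(p, 0.2, 1.0)
lower_at_0 = vq_lower(0, lambda x, y: 1.0 if x == y else 0.5,
                      A.__getitem__, X, most)
# -> 0.9296875: object 0 belongs strongly to the lower approximation of A
```

Because the quantifier looks at the proportion of neighbours in A rather than at the single worst neighbour, one noisy object degrades the membership degree only gradually.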
Related papers
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- On the Granular Representation of Fuzzy Quantifier-Based Fuzzy Rough Sets [0.7614628596146602]
This paper focuses on fuzzy quantifier-based fuzzy rough sets (FQFRS).
It shows that Choquet-based fuzzy rough sets can be represented granularly under the same conditions as OWA-based fuzzy rough sets.
This observation highlights the potential of these models for resolving data inconsistencies and managing noise.
arXiv Detail & Related papers (2023-12-27T20:02:40Z)
- Random Models for Fuzzy Clustering Similarity Measures [0.0]
The Adjusted Rand Index (ARI) is a widely used method for comparing hard clusterings.
We propose a single framework for computing the ARI with three random models that are intuitive and explainable for both hard and fuzzy clusterings.
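For reference, the classical ARI for hard clusterings under the usual permutation random model can be sketched in a few lines of Python; the paper's contribution (alternative random models and fuzzy clusterings) is not covered by this sketch, and the function name is our own.

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """ARI of two hard clusterings under the permutation random model:
    (index - expected index) / (max index - expected index),
    counted over pairs of objects."""
    n = len(labels_a)
    sum_ij = sum(comb(c, 2) for c in Counter(zip(labels_a, labels_b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance agreement
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:               # degenerate partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

# Identical partitions (up to label names) score exactly 1.0:
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```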
arXiv Detail & Related papers (2023-12-16T00:07:04Z)
- Choquet-Based Fuzzy Rough Sets [2.4063592468412276]
Fuzzy rough set theory can be used as a tool for dealing with inconsistent data when there is a gradual notion of indiscernibility between objects, but it shares the noise sensitivity of classical rough sets.
To mitigate this problem, ordered weighted average (OWA) based fuzzy rough sets were introduced.
We show how the OWA-based approach can be interpreted intuitively in terms of vague quantification, and then generalize it to Choquet-based fuzzy rough sets.
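A minimal sketch of the OWA mechanism behind such models: the hard minimum in the lower approximation is replaced by an ordered weighted average whose weights lean toward the smallest value. The Łukasiewicz implication and the exponential weight vector below are illustrative choices of ours, not necessarily the paper's.

```python
def owa(values, weights):
    """Ordered weighted average: weights are applied to the values
    sorted in descending order (weights must sum to 1)."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

def soft_min_weights(n):
    """An illustrative 'soft min' weight vector (w_i proportional to 2**i):
    most weight goes to the smallest value, so the OWA behaves like a min
    while still letting every value contribute."""
    raw = [2.0 ** i for i in range(n)]
    total = sum(raw)
    return [r / total for r in raw]

def owa_lower(y, R, A, X):
    """OWA-based lower approximation: a soft minimum of the implications
    I(R(x, y), A(x)), here with the Lukasiewicz implication
    I(a, b) = min(1, 1 - a + b)."""
    vals = [min(1.0, 1.0 - R(x, y) + A(x)) for x in X]
    return owa(vals, soft_min_weights(len(vals)))

A = {0: 1.0, 1: 0.8, 2: 0.2}
lower = owa_lower(0, lambda x, y: 1.0 if x == y else 0.5,
                  A.__getitem__, [0, 1, 2])
# A strict min would give 0.7; the soft min yields ~0.829, damping the
# influence of the single worst (possibly noisy) neighbour.
```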
arXiv Detail & Related papers (2022-02-22T13:10:16Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions using sensitive attributes from a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Bias-Variance Tradeoffs in Single-Sample Binary Gradient Estimators [100.58924375509659]
The straight-through (ST) estimator gained popularity due to its simplicity and efficiency.
Several techniques were proposed to improve over ST while keeping the same low computational complexity.
We conduct a theoretical analysis of the bias and variance of these methods in order to understand tradeoffs and verify originally claimed properties.
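To make the object of that analysis concrete, here is a sketch of one common ST variant for a single stochastic binary unit: sample the hard value in the forward pass, and use the sigmoid derivative in the backward pass as if no sampling had occurred. The function names are our own, and this toy setting (one unit, linear objective) is chosen because the estimator happens to be exact there; the bias studied in such papers arises in deeper, nonlinear compositions.

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def st_forward_backward(a, grad_out):
    """One common straight-through variant for z ~ Bernoulli(sigmoid(a)):
    forward: sample the hard binary value z;
    backward: ignore the sampling step and propagate grad_out through
    sigmoid'(a), as if z were the smooth probability itself."""
    z = 1.0 if random.random() < sigmoid(a) else 0.0
    surrogate_grad = grad_out * sigmoid(a) * (1.0 - sigmoid(a))
    return z, surrogate_grad

# For a linear objective f(z) = c*z, the true gradient of E[f(z)] w.r.t. a
# is c * sigmoid'(a), which this variant reproduces exactly (zero bias and
# zero variance in this toy case).
a, c = 0.3, 2.0
z, g = st_forward_backward(a, c)
true_grad = c * sigmoid(a) * (1.0 - sigmoid(a))
```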
arXiv Detail & Related papers (2021-10-07T15:16:07Z)
- Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and a significant reduction in memory consumption, but they can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
arXiv Detail & Related papers (2021-06-06T18:05:02Z)
- A Bayesian Approach with Type-2 Student-t Membership Function for T-S Model Identification [47.25472624305589]
Fuzzy c-regression clustering based on type-2 fuzzy sets has shown remarkable results on non-sparse data.
An innovative architecture for the fuzzy c-regression model is presented, and a novel Student's t-distribution based membership function is designed for sparse data modelling.
arXiv Detail & Related papers (2020-09-02T05:10:13Z)
- Interpretation and Simplification of Deep Forest [4.576379639081977]
We consider quantifying the feature contributions and frequencies of a fully trained deep random forest in the form of a decision rule set.
Model simplification is achieved by eliminating unnecessary rules based on the measured feature contributions.
Experimental results show that feature contribution analysis allows a black-box model to be decomposed into a quantitatively interpretable rule set.
arXiv Detail & Related papers (2020-01-14T11:30:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.