Choquet-Based Fuzzy Rough Sets
- URL: http://arxiv.org/abs/2202.10872v1
- Date: Tue, 22 Feb 2022 13:10:16 GMT
- Title: Choquet-Based Fuzzy Rough Sets
- Authors: Adnan Theerens, Oliver Urs Lenz, Chris Cornelis
- Abstract summary: Fuzzy rough set theory can be used as a tool for dealing with inconsistent data when there is a gradual notion of indiscernibility between objects. In classical fuzzy rough sets, the lower and upper approximations are computed with the minimum and maximum operators, which makes them sensitive to outlying samples. To mitigate this problem, ordered weighted average (OWA) based fuzzy rough sets were introduced.
We show how the OWA-based approach can be interpreted intuitively in terms of vague quantification, and then generalize it to Choquet-based fuzzy rough sets.
- Score: 2.4063592468412276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fuzzy rough set theory can be used as a tool for dealing with inconsistent
data when there is a gradual notion of indiscernibility between objects. It
does this by providing lower and upper approximations of concepts. In classical
fuzzy rough sets, the lower and upper approximations are determined using the
minimum and maximum operators, respectively. This is undesirable for machine
learning applications, since it makes these approximations sensitive to
outlying samples. To mitigate this problem, ordered weighted average (OWA)
based fuzzy rough sets were introduced. In this paper, we show how the
OWA-based approach can be interpreted intuitively in terms of vague
quantification, and then generalize it to Choquet-based fuzzy rough sets
(CFRS). This generalization maintains desirable theoretical properties, such as
duality and monotonicity. Furthermore, it provides more flexibility for machine
learning applications. In particular, we show that it enables the seamless
integration of outlier detection algorithms, to enhance the robustness of
machine learning algorithms based on fuzzy rough sets.
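For concreteness, here is a minimal numerical sketch of the discrete Choquet integral and of how OWA-based lower and upper approximations arise as a special case. The relation, fuzzy set, weights, implicator and t-norm below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def choquet(values, mu):
    """Discrete Choquet integral of `values` w.r.t. a monotone measure `mu`.
    `mu` maps a frozenset of indices to [0, 1], with mu({}) = 0, mu(all) = 1."""
    order = np.argsort(values)                    # ascending order of arguments
    v = np.concatenate(([0.0], np.asarray(values, float)[order]))
    return sum((v[i] - v[i - 1]) * mu(frozenset(order[i - 1:]))
               for i in range(1, len(v)))

def owa_measure(w):
    """Symmetric measure whose Choquet integral equals the OWA operator with
    weights `w` (weights given for arguments sorted in descending order)."""
    return lambda s: float(np.sum(w[:len(s)]))

R = np.array([[1.0, 0.8, 0.2, 0.0],              # fuzzy indiscernibility relation
              [0.8, 1.0, 0.3, 0.1],
              [0.2, 0.3, 1.0, 0.6],
              [0.0, 0.1, 0.6, 1.0]])
A = np.array([0.9, 1.0, 0.2, 0.1])               # fuzzy set to approximate

# Classical fuzzy rough sets are the special case of the "min" measure:
assert np.isclose(choquet(A, lambda s: 1.0 if len(s) == 4 else 0.0), A.min())

I = lambda a, b: np.minimum(1.0, 1.0 - a + b)    # Lukasiewicz implicator
T = lambda a, b: np.maximum(0.0, a + b - 1.0)    # Lukasiewicz t-norm

w_up = np.array([0.4, 0.3, 0.2, 0.1])            # soft maximum (upper approximation)
w_low = w_up[::-1]                               # soft minimum (lower approximation)
lower = [choquet(I(R[:, y], A), owa_measure(w_low)) for y in range(4)]
upper = [choquet(T(R[:, y], A), owa_measure(w_up)) for y in range(4)]
print(np.round(lower, 3), np.round(upper, 3))
```

Because the OWA weights soften the hard min/max, a single outlying sample can no longer dominate either approximation, which is the robustness property the abstract refers to.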
Related papers
- SeWA: Selective Weight Average via Probabilistic Masking [51.015724517293236]
We show that only a few points are needed to achieve better and faster convergence.
We transform the discrete selection problem into a continuous subset optimization framework.
We derive SeWA's stability-based generalization bounds, which are sharper than those of SGD under both convex and non-convex assumptions.
arXiv Detail & Related papers (2025-02-14T12:35:21Z)
- Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
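The sampling-bias concern mentioned here is easiest to see in plain sampled softmax. Below is a generic, hedged sketch of the standard `logit - log q` importance correction that keeps the sampled estimator consistent; it is not the MIDX-Sampler itself, whose inverted multi-index structure is beyond this snippet.

```python
import numpy as np
rng = np.random.default_rng(0)

def sampled_softmax_loss(logits, q, true_idx, n_neg=50):
    """Sampled softmax with importance correction: subtracting log q(c) from
    each candidate logit removes the bias introduced by the proposal q.
    Simplification: negatives may collide with the true class."""
    neg = rng.choice(len(logits), size=n_neg, p=q, replace=True)
    cand = np.concatenate(([true_idx], neg))
    adj = logits[cand] - np.log(q[cand])      # bias-corrected logits
    adj -= adj.max()                          # numerical stability
    p = np.exp(adj) / np.exp(adj).sum()
    return -np.log(p[0])                      # true class sits at position 0
```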
arXiv Detail & Related papers (2025-01-15T04:09:21Z)
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
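As background, here is a minimal sketch of error feedback with top-k compression and a normalized update. The normalization mirrors the paper's theme, but the exact algorithm, compressor and step sizes are illustrative assumptions.

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_step(w, grad, e, lr=0.1, k=2, normalize=True):
    """One error-feedback step: compress gradient plus carried error,
    remember what the compressor dropped, apply the (normalized) update."""
    p = grad + e                      # add back previously dropped mass
    c = top_k(p, k)                   # compressed message
    e_new = p - c                     # error memory for the next step
    if normalize:
        c = c / (np.linalg.norm(c) + 1e-12)
    return w - lr * c, e_new
```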
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- An Adaptive Cost-Sensitive Learning and Recursive Denoising Framework for Imbalanced SVM Classification [12.986535715303331]
Class imbalance is one of the most common and important issues in classification.
We propose a robust learning algorithm based on adaptive cost-sensitivity and recursion.
Experimental results show that the proposed general framework is superior to traditional methods in Accuracy, G-mean, Recall and F1-score.
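A quick way to see the effect of cost sensitivity on an imbalanced problem is scikit-learn's class-weighted SVM. This is a generic baseline for illustration, not the paper's adaptive, recursive framework.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import recall_score, f1_score

# Imbalanced binary problem: ~95% majority class.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

for cw in (None, "balanced"):          # unweighted vs. cost-sensitive SVM
    clf = SVC(class_weight=cw).fit(Xtr, ytr)
    pred = clf.predict(Xte)
    print(cw, recall_score(yte, pred), f1_score(yte, pred))
```

The class-weighted variant typically trades a little accuracy for a large gain in minority-class recall, which is why imbalanced-learning papers report G-mean, Recall and F1 rather than accuracy alone.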
arXiv Detail & Related papers (2024-03-13T09:43:14Z)
- Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems [2.375943263571389]
In inverse problems, the incorporation of a sparsity prior yields a regularization effect on the solution.
We propose a probabilistic sparsity prior formulated as a mixture of Gaussians, capable of modeling sparsity with respect to a generic basis.
The prior is realized by a neural network, and we put forth both a supervised and an unsupervised training strategy to estimate its parameters.
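To make "a mixture of Gaussians modeling sparsity" concrete, here is a toy two-component, spike-and-slab-style draw. The paper's prior is more general (a learned mixture with respect to a generic basis), so the parameters below are illustrative assumptions only.

```python
import numpy as np
rng = np.random.default_rng(0)

n, pi = 1000, 0.9                      # pi = probability of the near-zero "spike"
sigma_spike, sigma_slab = 0.01, 1.0
is_spike = rng.random(n) < pi
x = np.where(is_spike,
             rng.normal(0.0, sigma_spike, n),   # most coefficients near zero
             rng.normal(0.0, sigma_slab, n))    # a few large ones
print(np.mean(np.abs(x) < 0.05))                # ~90% of entries are tiny
```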
arXiv Detail & Related papers (2024-01-29T22:52:57Z)
- On the Granular Representation of Fuzzy Quantifier-Based Fuzzy Rough Sets [0.7614628596146602]
This paper focuses on fuzzy quantifier-based fuzzy rough sets (FQFRS).
It shows that Choquet-based fuzzy rough sets can be represented granularly under the same conditions as OWA-based fuzzy rough sets.
This observation highlights the potential of these models for resolving data inconsistencies and managing noise.
arXiv Detail & Related papers (2023-12-27T20:02:40Z)
- Fuzzy Rough Sets Based on Fuzzy Quantification [1.4213973379473654]
We introduce fuzzy quantifier-based fuzzy rough sets (FQFRS), an intuitive generalization of fuzzy rough sets.
We show how several existing models fit in this generalization as well as how it inspires novel ones.
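One simple way to instantiate fuzzy quantification is a Zadeh-style Σ-count model with a piecewise-linear quantifier for "most". FQFRS covers several such evaluation models; the quantifier thresholds below are arbitrary illustrative choices.

```python
import numpy as np

def q_most(p):
    """Piecewise-linear fuzzy quantifier for "most" (illustrative thresholds)."""
    return np.clip((p - 0.3) / 0.4, 0.0, 1.0)

def lower_approx_zadeh(R, A):
    """Zadeh-style quantifier model: the degree to which "most elements
    indiscernible from y belong to A" holds, evaluated via sigma-counts."""
    num = np.minimum(R, A[:, None]).sum(axis=0)   # sigma-count of R_y intersect A
    den = R.sum(axis=0)                           # sigma-count of R_y
    return q_most(num / den)

R = np.array([[1.0, 0.7], [0.7, 1.0]])            # fuzzy indiscernibility relation
A = np.array([0.9, 0.4])                          # fuzzy set to approximate
print(lower_approx_zadeh(R, A))
```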
arXiv Detail & Related papers (2022-12-06T19:59:57Z)
- Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z)
- CFARnet: deep learning for target detection with constant false alarm rate [2.2940141855172036]
We introduce a framework of CFAR constrained detectors.
Practically, we develop a deep learning framework for fitting neural networks that approximate the optimal detector under the CFAR constraint.
Target-detection experiments in different settings demonstrate that the proposed CFARnet allows a flexible trade-off between the CFAR property and accuracy.
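For readers unfamiliar with the CFAR property, a classical cell-averaging CFAR detector shows the constraint the paper's learned detectors must respect: the threshold adapts to a local noise estimate so the false-alarm rate stays constant. This is textbook CA-CFAR, not CFARnet itself.

```python
import numpy as np

def ca_cfar(x, n_train=16, n_guard=2, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D power (square-law detected) signal.
    The alpha scaling holds for exponentially distributed noise power."""
    n = len(x)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2 + n_guard
    for i in range(half, n - half):
        left = x[i - half : i - n_guard]              # training cells (left)
        right = x[i + n_guard + 1 : i + half + 1]     # training cells (right)
        noise = np.concatenate((left, right)).mean()  # local noise estimate
        detections[i] = x[i] > alpha * noise          # adaptive threshold
    return detections
```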
arXiv Detail & Related papers (2022-08-04T05:54:36Z)
- VAE Approximation Error: ELBO and Conditional Independence [78.72292013299868]
This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family.
We show that the ELBO subset cannot be enlarged, and the respective error cannot be decreased, by only considering deeper encoder networks.
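As a reference point for the ELBO objective analyzed here, a single-sample numpy estimate for a diagonal-Gaussian encoder and a Bernoulli decoder; the `decode` function is a placeholder assumption.

```python
import numpy as np
rng = np.random.default_rng(0)

def elbo_estimate(x, mu, log_var, decode):
    """Single-sample ELBO for q(z|x) = N(mu, diag(exp(log_var))) and a
    Bernoulli decoder `decode` that returns pixel means in (0, 1)."""
    z = mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)  # reparam.
    x_hat = decode(z)
    rec = np.sum(x * np.log(x_hat + 1e-9)
                 + (1 - x) * np.log(1 - x_hat + 1e-9))              # E_q[log p(x|z)]
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)      # KL(q || N(0, I))
    return rec - kl
```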
arXiv Detail & Related papers (2021-02-18T12:54:42Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
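A minimal Monte-Carlo sketch of the smoothed classifier underlying randomized smoothing; the base classifier `f`, noise level and vote count are illustrative assumptions, and the paper's contribution (training-time consistency regularization) is noted in the comments.

```python
import numpy as np
rng = np.random.default_rng(0)

def smoothed_predict(f, x, sigma=0.25, n=1000, n_classes=10):
    """g(x) = argmax_c P(f(x + N(0, sigma^2 I)) = c), estimated by sampling.
    The paper trains f so predictions over such noisy copies agree (a
    consistency regularizer), which tightens the certified radius."""
    noisy = x + sigma * rng.standard_normal((n,) + x.shape)
    votes = np.bincount([f(xi) for xi in noisy], minlength=n_classes)
    return int(votes.argmax())
```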
arXiv Detail & Related papers (2020-06-07T06:57:43Z)