Choquet-Based Fuzzy Rough Sets
- URL: http://arxiv.org/abs/2202.10872v1
- Date: Tue, 22 Feb 2022 13:10:16 GMT
- Title: Choquet-Based Fuzzy Rough Sets
- Authors: Adnan Theerens, Oliver Urs Lenz, Chris Cornelis
- Abstract summary: Fuzzy rough set theory can be used as a tool for dealing with inconsistent data when there is a gradual notion of indiscernibility between objects.
In classical fuzzy rough sets, the lower and upper approximations are computed with the minimum and maximum operators, which makes them sensitive to outlying samples.
To mitigate this problem, ordered weighted average (OWA) based fuzzy rough sets were introduced.
We show how the OWA-based approach can be interpreted intuitively in terms of vague quantification, and then generalize it to Choquet-based fuzzy rough sets.
- Score: 2.4063592468412276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fuzzy rough set theory can be used as a tool for dealing with inconsistent
data when there is a gradual notion of indiscernibility between objects. It
does this by providing lower and upper approximations of concepts. In classical
fuzzy rough sets, the lower and upper approximations are determined using the
minimum and maximum operators, respectively. This is undesirable for machine
learning applications, since it makes these approximations sensitive to
outlying samples. To mitigate this problem, ordered weighted average (OWA)
based fuzzy rough sets were introduced. In this paper, we show how the
OWA-based approach can be interpreted intuitively in terms of vague
quantification, and then generalize it to Choquet-based fuzzy rough sets
(CFRS). This generalization maintains desirable theoretical properties, such as
duality and monotonicity. Furthermore, it provides more flexibility for machine
learning applications. In particular, we show that it enables the seamless
integration of outlier detection algorithms, to enhance the robustness of
machine learning algorithms based on fuzzy rough sets.
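To make the construction concrete, here is a minimal Python sketch (not the authors' implementation) of a Choquet-based lower approximation. It uses the discrete Choquet integral $C_\mu(f) = \sum_{i=1}^n \left(f(x_{(i)}) - f(x_{(i-1)})\right)\mu(\{x_{(i)}, \dots, x_{(n)}\})$ with values sorted increasingly and $f(x_{(0)}) = 0$; an OWA operator is recovered as the special case of a symmetric measure $\mu(A) = \sum_{j \le |A|} w_j$. The choice of the Łukasiewicz implicator, the helper names (`luk_implicator`, `owa_measure`, `lower_approximation`), and all example data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def luk_implicator(a, b):
    # Lukasiewicz implicator: I(a, b) = min(1, 1 - a + b)
    return np.minimum(1.0, 1.0 - a + b)

def choquet_integral(values, mu):
    # Discrete Choquet integral of `values` w.r.t. a monotone measure `mu`,
    # where `mu` maps a frozenset of indices to [0, 1], with mu(empty) = 0
    # and mu(all indices) = 1.
    values = np.asarray(values, dtype=float)
    remaining = set(range(len(values)))
    total, prev = 0.0, 0.0
    for i in np.argsort(values):          # process values in ascending order
        total += (values[i] - prev) * mu(frozenset(remaining))
        prev = values[i]
        remaining.remove(i)
    return total

def owa_measure(weights):
    # Symmetric measure mu(A) = sum of the first |A| OWA weights; with this
    # mu, the Choquet integral reduces exactly to the OWA operator.
    cum = np.concatenate(([0.0], np.cumsum(weights)))
    return lambda A: float(cum[len(A)])

def lower_approximation(R, A, mu):
    # Choquet-based lower approximation of fuzzy set A under fuzzy relation R:
    # (R down A)(y) = C_mu of I(R(x, y), A(x)) over all objects x.
    # (The upper approximation would use a t-norm and the dual measure.)
    return np.array([
        choquet_integral(luk_implicator(R[:, y], A), mu)
        for y in range(len(A))
    ])

# Illustrative data: 4 objects, a fuzzy similarity relation R and a concept A.
R = np.array([[1.0, 0.8, 0.3, 0.0],
              [0.8, 1.0, 0.4, 0.1],
              [0.3, 0.4, 1.0, 0.7],
              [0.0, 0.1, 0.7, 1.0]])
A = np.array([0.9, 1.0, 0.2, 0.0])

# Soft-minimum OWA weights (mass concentrated on the smallest values);
# the strict weights (0, 0, 0, 1) recover the classical min-based lower
# approximation, which is what makes the classical model outlier-sensitive.
w_softmin = np.array([0.0, 0.1, 0.3, 0.6])
print(lower_approximation(R, A, owa_measure(w_softmin)))
```

In spirit, the extra flexibility CFRS offers is that the measure need not be symmetric: weights no longer have to depend only on the rank of a value, but can depend on which objects produce it, which is what allows a measure derived from an outlier-detection score to be plugged in.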
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- An Adaptive Cost-Sensitive Learning and Recursive Denoising Framework for Imbalanced SVM Classification [12.986535715303331]
Class imbalance is one of the most common and important issues in classification.
Emotion classification models trained on imbalanced datasets easily produce unreliable predictions.
arXiv Detail & Related papers (2024-03-13T09:43:14Z)
- Learning a Gaussian Mixture for Sparsity Regularization in Inverse Problems [2.375943263571389]
In inverse problems, the incorporation of a sparsity prior yields a regularization effect on the solution.
We propose a probabilistic sparsity prior formulated as a mixture of Gaussians, capable of modeling sparsity with respect to a generic basis.
We put forth both a supervised and an unsupervised training strategy to estimate the parameters of this prior.
arXiv Detail & Related papers (2024-01-29T22:52:57Z)
- On the Granular Representation of Fuzzy Quantifier-Based Fuzzy Rough Sets [0.7614628596146602]
This paper focuses on fuzzy quantifier-based fuzzy rough sets (FQFRS).
It shows that Choquet-based fuzzy rough sets can be represented granularly under the same conditions as OWA-based fuzzy rough sets.
This observation highlights the potential of these models for resolving data inconsistencies and managing noise.
arXiv Detail & Related papers (2023-12-27T20:02:40Z)
- Fuzzy Rough Sets Based on Fuzzy Quantification [1.4213973379473654]
We introduce fuzzy quantifier-based fuzzy rough sets (FQFRS), an intuitive generalization of fuzzy rough sets.
We show how several existing models fit into this generalization and how it inspires novel ones.
arXiv Detail & Related papers (2022-12-06T19:59:57Z)
- Optimal Algorithms for Stochastic Complementary Composite Minimization [55.26935605535377]
Inspired by regularization techniques in statistics and machine learning, we study complementary composite minimization.
We provide novel excess risk bounds, both in expectation and with high probability.
Our algorithms are nearly optimal, which we prove via novel lower complexity bounds for this class of problems.
arXiv Detail & Related papers (2022-11-03T12:40:24Z)
- CFARnet: deep learning for target detection with constant false alarm rate [2.2940141855172036]
We introduce a framework of CFAR-constrained detectors.
Practically, we develop a deep learning framework for fitting neural networks that approximate such detectors.
Experiments on target detection in different settings demonstrate that the proposed CFARnet allows a flexible trade-off between the CFAR property and accuracy.
arXiv Detail & Related papers (2022-08-04T05:54:36Z)
- VAE Approximation Error: ELBO and Conditional Independence [78.72292013299868]
This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family.
We show that the ELBO subset cannot be enlarged, and the respective error cannot be decreased, by considering deeper encoder networks alone.
arXiv Detail & Related papers (2021-02-18T12:54:42Z)
- Refined bounds for algorithm configuration: The knife-edge of dual class approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average metric performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)
- Consistency Regularization for Certified Robustness of Smoothed Classifiers [89.72878906950208]
A recent technique of randomized smoothing has shown that the worst-case $\ell_2$-robustness can be transformed into the average-case robustness.
We found that the trade-off between accuracy and certified robustness of smoothed classifiers can be greatly controlled by simply regularizing the prediction consistency over noise.
arXiv Detail & Related papers (2020-06-07T06:57:43Z)
- An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction [84.49035467829819]
We show that it is possible to better manage the trade-off between rationale conciseness and end-task performance by optimizing a bound on the Information Bottleneck (IB) objective.
Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale.
arXiv Detail & Related papers (2020-05-01T23:26:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.