$(O,G)$-granular variable precision fuzzy rough sets based on overlap
and grouping functions
- URL: http://arxiv.org/abs/2205.08719v1
- Date: Wed, 18 May 2022 04:37:15 GMT
- Title: $(O,G)$-granular variable precision fuzzy rough sets based on overlap
and grouping functions
- Authors: Wei Li, Bin Yang, Junsheng Qiao
- Abstract summary: In this paper, the depiction of $(O,G)$-granular variable precision fuzzy rough sets ($(O,G)$-GVPFRSs for short) is first given based on overlap and grouping functions.
To work out the approximation operators efficiently, we give another expression of upper and lower approximation operators by means of fuzzy implications.
Some conclusions on the granular variable precision fuzzy rough sets (GVPFRSs for short) are extended to $(O,G)$-GVPFRSs under some additional conditions.
- Score: 16.843434476423305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since Bustince et al. introduced the concepts of overlap and grouping
functions, these two types of aggregation functions have attracted a lot of
interest in both theory and applications. In this paper, the depiction of
$(O,G)$-granular variable precision fuzzy rough sets ($(O,G)$-GVPFRSs for
short) is first given based on overlap and grouping functions. Meanwhile, to
work out the approximation operators efficiently, we give another expression of
upper and lower approximation operators by means of fuzzy implications and
co-implications. Furthermore, starting from the perspective of construction
methods, $(O,G)$-GVPFRSs are represented under diverse fuzzy relations.
Finally, some conclusions on the granular variable precision fuzzy rough sets
(GVPFRSs for short) are extended to $(O,G)$-GVPFRSs under some additional
conditions.
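As a rough numeric illustration of the granular approximation idea (not the paper's exact operators), the sketch below computes upper and lower fuzzy rough approximations on a finite universe. The concrete choices are assumptions for the example: the product overlap function $O(x,y)=xy$, the probabilistic-sum grouping function $G(x,y)=x+y-xy$, and the standard negation $N(x)=1-x$.

```python
# Sketch of granular-style upper/lower fuzzy rough approximations on a finite
# universe. The concrete choices below are assumptions for illustration:
# product overlap O(x, y) = x*y, probabilistic-sum grouping
# G(x, y) = x + y - x*y, and the standard negation N(x) = 1 - x.

def upper_approx(R, A, O):
    """Upper approximation: (R^O A)(x) = max_y O(R(x, y), A(y))."""
    return [max(O(R[x][y], A[y]) for y in range(len(A))) for x in range(len(R))]

def lower_approx(R, A, G, N=lambda t: 1.0 - t):
    """Lower approximation: (R_G A)(x) = min_y G(N(R(x, y)), A(y))."""
    return [min(G(N(R[x][y]), A[y]) for y in range(len(A))) for x in range(len(R))]

O = lambda x, y: x * y            # product overlap function
G = lambda x, y: x + y - x * y    # probabilistic-sum grouping function

# A reflexive fuzzy relation R and a fuzzy set A on a 3-element universe.
R = [[1.0, 0.4, 0.2],
     [0.4, 1.0, 0.5],
     [0.2, 0.5, 1.0]]
A = [0.9, 0.3, 0.6]

lo = lower_approx(R, A, G)
up = upper_approx(R, A, O)
# For a reflexive R, the set A is squeezed between its two approximations.
assert all(l <= a <= u for l, a, u in zip(lo, A, up))
```

With this reflexive relation the lower approximation stays pointwise below $A$ and the upper approximation above it, the basic sandwich property the general theory refines.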
Related papers
- A General Framework for Robust G-Invariance in G-Equivariant Networks [5.227502964814928]
We introduce a general method for achieving robust group-invariance in group-equivariant convolutional neural networks ($G$-CNNs).
The completeness of the triple correlation endows the $G$-TC layer with strong robustness.
We demonstrate the benefits of this method on both commutative and non-commutative groups.
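A minimal numeric sketch of the underlying invariance property (the group, signal, and plain-Python implementation are assumptions for illustration, not the paper's architecture): the triple correlation of a signal on the cyclic group $C_n$ is unchanged by every group shift.

```python
# Illustrative sketch (assumed setup, not the paper's implementation): the
# triple correlation of a signal on the cyclic group C_n is invariant under
# every group shift, the robustness property the G-TC layer builds on.

def triple_correlation(f):
    """T(g1, g2) = sum_g f(g) * f(g + g1) * f(g + g2) over C_n."""
    n = len(f)
    return [[sum(f[g] * f[(g + g1) % n] * f[(g + g2) % n] for g in range(n))
             for g2 in range(n)] for g1 in range(n)]

def shift(f, s):
    """Group action of C_n on signals: (s . f)(g) = f(g + s)."""
    n = len(f)
    return [f[(g + s) % n] for g in range(n)]

f = [0.2, 1.0, -0.5, 0.7]
T = triple_correlation(f)
# Invariance: every cyclic shift of f yields the same triple correlation.
for s in range(len(f)):
    Ts = triple_correlation(shift(f, s))
    assert all(abs(Ts[i][j] - T[i][j]) < 1e-9
               for i in range(len(f)) for j in range(len(f)))
```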
arXiv Detail & Related papers (2023-10-28T02:27:34Z)
- On Convergence of Incremental Gradient for Non-Convex Smooth Functions [63.51187646914962]
In machine learning and network optimization, algorithms like shuffle SGD are popular because they minimize the number of cache misses and have good cache locality.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
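For concreteness, here is a minimal random-reshuffling ("shuffle") SGD loop on a one-parameter least-squares problem; the model, data, and step size are assumptions for the sketch, and only the per-epoch reshuffled access pattern matches the setting the analysis concerns.

```python
import random

# Minimal random-reshuffling ("shuffle") SGD sketch: each epoch visits every
# sample exactly once in a freshly shuffled order. The objective here is a
# made-up scalar least-squares fit, chosen only to show the access pattern.

def shuffle_sgd(data, w, lr=0.1, epochs=200, seed=0):
    rng = random.Random(seed)
    order = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(order)                 # fresh permutation each epoch
        for i in order:                    # one pass over all samples
            x, y = data[i]
            w -= lr * (w * x - y) * x      # gradient of 0.5 * (w*x - y)**2
    return w

# Samples consistent with y = 2x, so the iterates should approach w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
w = shuffle_sgd(data, w=0.0)
assert abs(w - 2.0) < 1e-6
```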
arXiv Detail & Related papers (2023-05-30T17:47:27Z)
- $(\alpha_D,\alpha_G)$-GANs: Addressing GAN Training Instabilities via Dual Objectives [7.493779672689531]
We introduce a class of dual-objective GANs with different value functions (objectives) for the generator (G) and discriminator (D).
We show that the resulting non-zero-sum game simplifies to minimizing an $f$-divergence under appropriate conditions on $(\alpha_D,\alpha_G)$.
We highlight the value of tuning $(\alpha_D,\alpha_G)$ in alleviating training instabilities for the synthetic 2D Gaussian mixture ring and the Stacked MNIST datasets.
arXiv Detail & Related papers (2023-02-28T05:22:54Z)
- Sharper Rates and Flexible Framework for Nonconvex SGD with Client and Data Sampling [64.31011847952006]
We revisit the problem of finding an approximately stationary point of the average of $n$ smooth and possibly nonconvex functions.
We generalize the proposed algorithm so that it can provably work with virtually any sampling mechanism.
We provide the most general and most accurate analysis of optimal bounds in the smooth nonconvex regime.
arXiv Detail & Related papers (2022-06-05T21:32:33Z)
- On three types of $L$-fuzzy $\beta$-covering-based rough sets [16.843434476423305]
We study the axiom sets, matrix representations and interdependency of three pairs of $L$-fuzzy $\beta$-covering-based rough approximation operators.
We present the necessary and sufficient conditions under which two $L$-fuzzy $\beta$-coverings can generate the same lower and upper rough approximation operations.
arXiv Detail & Related papers (2022-05-13T05:30:51Z)
- Some neighborhood-related fuzzy covering-based rough set models and their applications for decision making [8.270779659551431]
We propose four types of fuzzy neighborhood operators based on fuzzy covering by overlap functions and their implicators.
A novel fuzzy TOPSIS methodology is put forward to solve a biosynthetic nanomaterials selection problem.
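For orientation, here is the classical (crisp) TOPSIS ranking procedure that fuzzy variants build upon; the decision matrix and weights below are made-up benefit-type data, and none of the fuzzy neighborhood machinery from the paper appears here.

```python
import math

# Classical (crisp) TOPSIS for reference. The decision matrix and weights are
# made-up benefit-type data; the paper's fuzzy TOPSIS and its neighborhood
# operators are not implemented here.

def topsis(matrix, weights):
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) for j in range(n)]  # best per criterion
    anti = [min(V[i][j] for i in range(m)) for j in range(n)]   # worst per criterion
    d_pos = [math.dist(V[i], ideal) for i in range(m)]
    d_neg = [math.dist(V[i], anti) for i in range(m)]
    # Closeness coefficient: larger means nearer to the ideal alternative.
    return [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]

scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8], [6, 7, 8]],
                weights=[0.5, 0.3, 0.2])
best = max(range(len(scores)), key=scores.__getitem__)
```

The alternative with the largest closeness coefficient is ranked first; a fuzzy variant replaces the crisp entries and distances with fuzzy-set counterparts.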
arXiv Detail & Related papers (2022-05-13T05:02:53Z)
- Submodular + Concave [53.208470310734825]
It has been well established that first-order optimization methods can converge to the maximal objective value of concave functions.
In this work, we initiate the study of maximizing smooth functions of the form $F(x) = G(x) + C(x)$ over a convex body.
This class of functions is an extension of both concave and continuous DR-submodular functions, for which no guarantee is known.
arXiv Detail & Related papers (2021-06-09T01:59:55Z)
- Learning Aggregation Functions [78.47770735205134]
We introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality.
We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures.
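As a toy illustration of the idea of a parametric aggregator (this plain power mean is an assumption for the sketch, not LAF's actual parameterization): a single exponent interpolates between familiar fixed aggregators such as the mean and the max, and could in principle be learned from data.

```python
# Toy parametric aggregator (an assumption for illustration; this plain power
# mean is NOT LAF's actual parameterization): one exponent p interpolates
# between familiar fixed aggregators and could, in principle, be learned.

def power_mean(xs, p):
    """M_p(xs) = (mean of x**p) ** (1/p) for positive inputs xs."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1.0 / p)

xs = [1.0, 2.0, 4.0]
mean_ = power_mean(xs, 1.0)        # p = 1 recovers the arithmetic mean
approx_max = power_mean(xs, 64.0)  # large p approaches max(xs)
assert abs(mean_ - sum(xs) / len(xs)) < 1e-9
assert abs(approx_max - max(xs)) < 0.2
```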
arXiv Detail & Related papers (2020-12-15T18:28:53Z)
- On the finite representation of group equivariant operators via permutant measures [0.0]
We show that each linear $G$-equivariant operator can be produced by a suitable permutant measure.
This result makes available a new method to build linear $G$-equivariant operators in the finite setting.
arXiv Detail & Related papers (2020-08-07T14:25:04Z)
- A deep network construction that adapts to intrinsic dimensionality beyond the domain [79.23797234241471]
We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation.
We focus on two intuitive and practically relevant choices for $\phi$: the projection onto a low-dimensional embedded submanifold and a distance to a collection of low-dimensional sets.
arXiv Detail & Related papers (2020-08-06T09:50:29Z)
- Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$, where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.