On learning capacities of Sugeno integrals with systems of fuzzy relational equations
- URL: http://arxiv.org/abs/2408.07768v1
- Date: Wed, 14 Aug 2024 18:40:01 GMT
- Title: On learning capacities of Sugeno integrals with systems of fuzzy relational equations
- Authors: Ismaïl Baaj
- Abstract summary: We introduce a method for learning a capacity underlying a Sugeno integral according to training data based on systems of fuzzy relational equations.
We show how to obtain the greatest approximate $q$-maxitive capacity and the lowest approximate $q$-minitive capacity, using recent results to handle the inconsistency of systems of fuzzy relational equations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this article, we introduce a method for learning a capacity underlying a Sugeno integral according to training data based on systems of fuzzy relational equations. To the training data, we associate two systems of equations: a $\max-\min$ system and a $\min-\max$ system. By solving these two systems (in the case that they are consistent) using Sanchez's results, we show that we can directly obtain the extremal capacities representing the training data. By reducing the $\max-\min$ (resp. $\min-\max$) system of equations to subsets of criteria of cardinality less than or equal to $q$ (resp. of cardinality greater than or equal to $n-q$), where $n$ is the number of criteria, we give a sufficient condition for deducing, from its potential greatest solution (resp. its potential lowest solution), a $q$-maxitive (resp. $q$-minitive) capacity. Finally, if these two reduced systems of equations are inconsistent, we show how to obtain the greatest approximate $q$-maxitive capacity and the lowest approximate $q$-minitive capacity, using recent results to handle the inconsistency of systems of fuzzy relational equations.
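The core step described above, solving a $\max-\min$ system using Sanchez's results, can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: Sanchez's greatest potential solution is built with the Gödel implication, and the system is consistent exactly when this candidate satisfies it (the toy matrix and vector are invented values).

```python
def godel_impl(a, b):
    """Goedel implication: a -> b equals 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def max_min(A, x):
    """Max-min composition: (A box x)_i = max_j min(a_ij, x_j)."""
    return [max(min(row[j], x[j]) for j in range(len(x))) for row in A]

def greatest_potential_solution(A, b):
    """Sanchez's candidate x_j = min_i godel_impl(a_ij, b_i).
    It solves A box x = b iff the system is consistent."""
    m, n = len(A), len(A[0])
    return [min(godel_impl(A[i][j], b[i]) for i in range(m)) for j in range(n)]

# Toy max-min system (illustrative values, not taken from the paper)
A = [[0.3, 0.8],
     [0.6, 0.4]]
b = [0.5, 0.4]

x_hat = greatest_potential_solution(A, b)   # [0.4, 0.5]
consistent = max_min(A, x_hat) == b         # True: this system is solvable
```

When `consistent` is `True`, `x_hat` is the greatest solution of the system; in the paper's setting its components yield the extremal capacity values representing the training data.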
Related papers
- Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization [65.8915778873691]
Learning conditional distributions is a central problem in machine learning.
We propose a new learning paradigm that integrates both paired and unpaired data.
Our approach also connects intriguingly with inverse entropic optimal transport (OT).
arXiv Detail & Related papers (2024-10-03T16:12:59Z) - Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations [40.77319247558742]
We study the computational complexity of learning a target function $f_* : \mathbb{R}^d \to \mathbb{R}$ with additive structure.
We prove that a large subset of $f_*$ can be efficiently learned by gradient training of a two-layer neural network.
arXiv Detail & Related papers (2024-06-17T17:59:17Z) - Maximal Consistent Subsystems of Max-T Fuzzy Relational Equations [0.0]
We study the inconsistency of a system of $\max-T$ fuzzy relational equations of the form $A \Box_{T}^{\max} x = b$, where $T$ is a t-norm among $\min$, the product, or Łukasiewicz's t-norm.
For an inconsistent $max-T$ system, we construct a canonical maximal consistent subsystem.
We show how to iteratively get all its maximal consistent subsystems.
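The canonical construction in that paper is specific, but the underlying idea can be conveyed with a simple greedy sketch (an illustration under stated assumptions, not the paper's algorithm): since every subsystem of a consistent $\max-\min$ system is itself consistent, greedily keeping each equation that preserves consistency, checked with Sanchez's criterion, yields one maximal (inclusion-wise) consistent subsystem. The toy system below is invented.

```python
def godel_impl(a, b):
    # Goedel implication: 1 if a <= b, else b
    return 1.0 if a <= b else b

def is_consistent(A, b):
    """Sanchez's criterion for a max-min system A box x = b:
    consistent iff the greatest potential solution satisfies it."""
    m, n = len(A), len(A[0])
    x = [min(godel_impl(A[i][j], b[i]) for i in range(m)) for j in range(n)]
    return all(max(min(A[i][j], x[j]) for j in range(n)) == b[i]
               for i in range(m))

def greedy_consistent_subsystem(A, b):
    """Return the row indices of one maximal (inclusion-wise)
    consistent subsystem; correct because consistency is preserved
    under taking subsystems."""
    kept = []
    for i in range(len(b)):
        trial = kept + [i]
        if is_consistent([A[k] for k in trial], [b[k] for k in trial]):
            kept = trial
    return kept

# Inconsistent toy system: both equations share the same left-hand side
A = [[1.0, 0.0],
     [1.0, 0.0]]
b = [0.3, 0.7]
subsystem = greedy_consistent_subsystem(A, b)  # [0]
```

This greedy pass finds one maximal consistent subsystem; enumerating all of them, as the paper does, requires the iterative construction it introduces.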
arXiv Detail & Related papers (2023-11-06T12:41:21Z) - Learning to Relax: Setting Solver Parameters Across a Sequence of Linear System Instances [42.16343861129583]
We show that a bandit online learning algorithm can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases.
Our work provides the first learning-theoretic treatment of high-precision linear system solvers and the first end-to-end guarantees for data-driven scientific computing.
arXiv Detail & Related papers (2023-10-03T17:51:42Z) - Chebyshev distances associated to the second members of systems of Max-product/Łukasiewicz Fuzzy relational equations [0.0]
We study the inconsistency of a system of $\max$-product fuzzy relational equations and of a system of $\max$-Łukasiewicz fuzzy relational equations.
We compute the Chebyshev distance associated to the second member of a system of $\max$-product fuzzy relational equations and that associated to the second member of a system of $\max$-Łukasiewicz fuzzy relational equations.
arXiv Detail & Related papers (2023-01-30T09:18:20Z) - Underdetermined Dyson-Schwinger equations [0.0]
The paper examines the effectiveness of the Dyson-Schwinger equations as a calculational tool in quantum field theory.
The truncated DS equations give a sequence of approximants that converge slowly to a limiting value.
More sophisticated truncation schemes based on mean-field-like approximations do not fix this formidable calculational problem.
arXiv Detail & Related papers (2022-11-23T15:28:34Z) - Quantum Resources Required to Block-Encode a Matrix of Classical Data [56.508135743727934]
We provide circuit-level implementations and resource estimates for several methods of block-encoding a dense $N \times N$ matrix of classical data to precision $\epsilon$.
We examine resource tradeoffs between the different approaches and explore implementations of two separate models of quantum random access memory (QRAM).
Our results go beyond simple query complexity and provide a clear picture into the resource costs when large amounts of classical data are assumed to be accessible to quantum algorithms.
arXiv Detail & Related papers (2022-06-07T18:00:01Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Robustly Learning any Clusterable Mixture of Gaussians [55.41573600814391]
We study the efficient learnability of high-dimensional Gaussian mixtures in the adversarial-robust setting.
We provide an algorithm that learns the components of an $\epsilon$-corrupted $k$-mixture to information-theoretically near-optimal error $\tilde{O}(\epsilon)$.
Our main technical contribution is a new robust identifiability proof for clusters of a Gaussian mixture, which can be captured by the constant-degree Sum-of-Squares proof system.
arXiv Detail & Related papers (2020-05-13T16:44:12Z) - Agnostic Q-learning with Function Approximation in Deterministic Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ samples.
arXiv Detail & Related papers (2020-02-17T18:41:49Z) - Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query
Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that these methods can achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.