Fuzzy Integral = Contextual Linear Order Statistic
- URL: http://arxiv.org/abs/2007.02874v2
- Date: Tue, 20 Oct 2020 18:45:49 GMT
- Title: Fuzzy Integral = Contextual Linear Order Statistic
- Authors: Derek Anderson, Matthew Deardorff, Timothy Havens, Siva Kakula,
Timothy Wilkin, Muhammad Islam, Anthony Pinar, and Andrew Buck
- Abstract summary: The fuzzy integral is a powerful parametric nonlinear function with utility in a wide range of applications.
We show that it can be represented by a set of contextual linear order statistics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fuzzy integral is a powerful parametric nonlinear function with utility
in a wide range of applications, from information fusion to classification,
regression, decision making, interpolation, metrics, morphology, and beyond.
While the fuzzy integral is in general a nonlinear operator, herein we show
that it can be represented by a set of contextual linear order statistics (LOS).
These operators are obtained by sampling the fuzzy measure, and clustering is
used to partition the underlying space of linear convex sums. Benefits of our
approach include scalability, improved integral/measure acquisition,
generalizability, and explainable/interpretable models. Our methods are
demonstrated on controlled synthetic experiments and are analyzed and
validated on real-world benchmark data sets.
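To make the core identity concrete, below is a minimal sketch (not the authors' code) of the discrete Choquet fuzzy integral in Python. The toy 3-source measure g is made up for illustration; the point is that once the inputs' sort order (the "context") is fixed, the integral reduces to a linear convex sum of the sorted inputs, i.e., a linear order statistic.

```python
import numpy as np

def choquet_integral(h, g):
    """Discrete Choquet fuzzy integral of inputs h w.r.t. fuzzy measure g.

    g maps frozensets of source indices to [0, 1], with g(empty) = 0
    and g(all sources) = 1 (the standard boundary conditions).
    """
    order = np.argsort(h)[::-1]        # sort sources by value, descending
    weights = np.empty(len(h))
    prev = 0.0
    for i in range(len(h)):
        A = frozenset(order[:i + 1])   # growing chain of the top-i sources
        weights[i] = g[A] - prev       # successive measure differences
        prev = g[A]
    # Given the sort order (the "context"), the integral is a linear
    # convex sum of the sorted inputs: a linear order statistic (LOS).
    return float(weights @ h[order]), order, weights

# Toy 3-source monotone fuzzy measure (an assumption for illustration only).
g = {frozenset(): 0.0,
     frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
     frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
     frozenset({0, 1, 2}): 1.0}

y, order, w = choquet_integral(np.array([0.9, 0.1, 0.5]), g)
print(y, order, w)   # weights are nonnegative and sum to 1
```

Each of the n! sort orders induces its own weight vector, so the integral acts as one linear operator per context; the construction summarized above samples the measure and clusters these operators into a reduced set of contextual LOS.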
Related papers
- Generalized Sparse Additive Model with Unknown Link Function [19.807823040041896]
We propose a new sparse additive model, named the generalized sparse additive model with unknown link function (GSAMUL).
The component functions are estimated with a B-spline basis, and the unknown link function is estimated by a multi-layer perceptron (MLP) network.
In applications, experimental evaluations on both synthetic and real-world data sets consistently validate the effectiveness of the proposed approach.
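As a rough illustration of that architecture, the sketch below implements only the forward pass of such a model (the covariate count, knot grid, and MLP size are made-up assumptions; the paper's actual estimator, which jointly fits the sparse spline coefficients and the network, is omitted):

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

def component(x, t, k, coef):
    """One additive component f_j(x_j): a B-spline basis expansion."""
    B = BSpline.design_matrix(x, t, k).toarray()   # (n, n_basis)
    return B @ coef

def mlp_link(z, W1, b1, W2, b2):
    """The unknown link g(.), modeled by a tiny one-hidden-layer MLP."""
    return np.tanh(z[:, None] @ W1 + b1) @ W2 + b2

# Hypothetical setup: 2 covariates on [0, 1], cubic splines, 8 basis functions.
k = 3
t = np.r_[np.zeros(k), np.linspace(0, 1, 6), np.ones(k)]   # clamped knots
X = rng.uniform(0, 1, size=(100, 2))
coefs = rng.normal(size=(2, len(t) - k - 1))   # would be sparse after fitting
index = sum(component(X[:, j], t, k, coefs[j]) for j in range(2))
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=16), 0.0
y_hat = mlp_link(index, W1, b1, W2, b2)        # y ~ g(f_1(x_1) + f_2(x_2))
print(y_hat.shape)                             # (100,)
```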
arXiv Detail & Related papers (2024-10-08T13:13:58Z)
- Manifold Learning with Sparse Regularised Optimal Transport [0.17205106391379024]
Real-world datasets are subject to noisy observations and sampling, so that distilling information about the underlying manifold is a major challenge.
We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation.
We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise and exhibit these results in simulations.
arXiv Detail & Related papers (2023-07-19T08:05:46Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error is the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
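As a hedged illustration of the criterion (not the paper's closed-form random-matrix derivation), the brute-force simulation below searches for the split whose empirical test error best matches a known noise floor; the dimensions and noise level are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 200, 10, 0.5
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + sigma * rng.normal(size=n)

def test_mse(m):
    """Fit OLS on the first m rows, report empirical error on the rest."""
    bhat, *_ = np.linalg.lstsq(X[:m], y[:m], rcond=None)
    return np.mean((X[m:] @ bhat - y[m:]) ** 2)

# Integrity criterion: empirical model error should equal the true
# noise power sigma^2; here we simply scan candidate train sizes.
sizes = np.arange(d + 5, n - 5)
errs = np.array([test_mse(m) for m in sizes])
best = sizes[np.argmin(np.abs(errs - sigma ** 2))]
print(f"train size closest to the noise floor: {best} of {n}")
```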
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Efficient Multidimensional Functional Data Analysis Using Marginal Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently by tensor decomposition.
We conclude with a real data application in neuroimaging.
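For intuition, a marginal product representation writes a multidimensional function as a sum of products of one-dimensional factors. The toy sketch below is a stand-in for the paper's estimator, using a plain truncated SVD on a sampled grid to recover a rank-2 example:

```python
import numpy as np

# A 2-D function that is exactly a sum of two marginal products
# a_r(t1) * b_r(t2), sampled on a 100 x 100 grid.
t = np.linspace(0, 1, 100)
F = np.sin(2 * np.pi * t)[:, None] * np.cos(np.pi * t)[None, :] \
    + 0.1 * np.outer(t, t)

# Truncated SVD recovers the marginal factors; two terms suffice here.
U, s, Vt = np.linalg.svd(F)
R = 2
F_hat = (U[:, :R] * s[:R]) @ Vt[:R]
print(np.max(np.abs(F - F_hat)))   # ~1e-15: rank-2 marginal product basis
```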
arXiv Detail & Related papers (2021-07-30T16:02:15Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Random extrapolation for primal-dual coordinate descent [61.55967255151027]
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to sparsity of the data matrix and the favorable structures of the objective function.
We show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
arXiv Detail & Related papers (2020-07-13T17:39:35Z)
- Learning Inconsistent Preferences with Gaussian Processes [14.64963271587818]
We revisit the widely used preferential Gaussian processes of Chu et al. (2005) and challenge their modelling assumption that imposes rankability of data items via latent utility function values.
We propose a generalisation of preferential GPs which can capture more expressive latent preferential structures in the data.
Our experimental findings support the conjecture that violations of rankability are ubiquitous in real-world preferential data.
arXiv Detail & Related papers (2020-06-06T11:57:45Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
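A generic version of the classifier being analyzed (not the paper's misclassification-probability estimator) is easy to sketch: fit a linear discriminant on each of several random projections and take a majority vote. The data, projection dimension, and ensemble size below are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def lda_fit(X, y):
    """Fisher linear discriminant for two classes: weight w, threshold c."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w, w @ (m0 + m1) / 2

def rp_lda_ensemble(X, y, Xnew, n_members=25, r=2):
    """Majority vote over LDAs fit on independent random projections."""
    votes = np.zeros(len(Xnew))
    for _ in range(n_members):
        R = rng.normal(size=(X.shape[1], r)) / np.sqrt(r)  # random projection
        w, c = lda_fit(X @ R, y)
        votes += (Xnew @ R @ w > c)
    return (votes > n_members / 2).astype(int)

# Toy data: two Gaussian classes in 20 dimensions.
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(0.7, 1, (100, 20))])
y = np.r_[np.zeros(100), np.ones(100)].astype(int)
print("training accuracy:", (rp_lda_ensemble(X, y, X) == y).mean())
```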
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
- Linear predictor on linearly-generated data with missing values: non consistency and solutions [0.0]
We study the seemingly simple case where the target to predict is a linear function of the fully-observed data.
We show that, in the presence of missing values, the optimal predictor may not be linear.
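A toy simulation makes the phenomenon concrete (this illustrates the claim only, not the paper's estimators; the correlation, missingness rate, and zero-imputation choice are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 50_000, 0.8

# x2 correlated with x1; y is exactly linear in the *complete* data.
x1 = rng.normal(size=n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
y = x1 + x2
miss = rng.random(n) < 0.5          # x2 missing completely at random
x2_imp = np.where(miss, 0.0, x2)    # naive zero-imputation

# One linear model on [x1, imputed x2, missingness mask, intercept].
Z = np.column_stack([x1, x2_imp, miss.astype(float), np.ones(n)])
b, *_ = np.linalg.lstsq(Z, y, rcond=None)
mse_lin = np.mean((Z @ b - y) ** 2)

# Optimal pattern-wise predictor: x1 + x2 when observed,
# (1 + rho) * x1 when x2 is missing (its conditional mean given x1).
yhat = np.where(miss, (1 + rho) * x1, x1 + x2)
mse_opt = np.mean((yhat - y) ** 2)
print(f"single linear model MSE: {mse_lin:.3f}  pattern-wise MSE: {mse_opt:.3f}")
```

The single linear fit cannot put coefficient 1 on x1 when x2 is observed and 1 + rho when it is missing, so it is strictly worse than the pattern-wise predictor, which is the non-consistency the paper establishes.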
arXiv Detail & Related papers (2020-02-03T11:49:35Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.