Distributed Learning via Filtered Hyperinterpolation on Manifolds
- URL: http://arxiv.org/abs/2007.09392v1
- Date: Sat, 18 Jul 2020 10:05:18 GMT
- Title: Distributed Learning via Filtered Hyperinterpolation on Manifolds
- Authors: Guido Montúfar, Yu Guang Wang
- Abstract summary: This paper studies the problem of learning real-valued functions on manifolds.
Motivated by the problem of handling large data sets, it presents a parallel data processing approach.
We prove quantitative relations between the approximation quality of the learned function over the entire manifold and the type of target function, the number of servers, and the number and type of available samples.
- Score: 2.2046162792653017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning mappings of data on manifolds is an important topic in contemporary
machine learning, with applications in astrophysics, geophysics, statistical
physics, medical diagnosis, biochemistry, and 3D object analysis. This paper
studies the problem of learning real-valued functions on manifolds through
filtered hyperinterpolation of input-output data pairs where the inputs may be
sampled deterministically or at random and the outputs may be clean or noisy.
Motivated by the problem of handling large data sets, it presents a parallel
data processing approach which distributes the data-fitting task among multiple
servers and synthesizes the fitted sub-models into a global estimator. We prove
quantitative relations between the approximation quality of the learned
function over the entire manifold, the type of target function, the number of
servers, and the number and type of available samples. We obtain the
approximation rates of convergence for distributed and non-distributed
approaches. For the non-distributed case, the approximation order is optimal.
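
The distributed scheme is divide-and-conquer: each server computes a local filtered hyperinterpolation estimator from its share of the input-output pairs, and the global estimator averages the local fits. The sketch below illustrates that structure in Python on the circle, standing in for a general manifold; the particular filter, the sampling scheme, and all function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: distributed filtered hyperinterpolation on the
# circle, standing in for a general manifold. Each server fits a filtered
# Fourier estimator on its own noisy samples; the global estimator averages
# the local fits. Filter and sampling choices here are assumptions.

def smooth_filter(t):
    # Low-pass filter: equals 1 on [0, 1], decays smoothly to 0 on [1, 2].
    return np.where(t <= 1, 1.0,
                    np.where(t >= 2, 0.0, np.cos(0.5 * np.pi * (t - 1)) ** 2))

def local_estimator(x, y, degree):
    # Empirical Fourier coefficients from uniformly sampled inputs, damped
    # by the filter h(|k| / degree).
    ks = np.arange(-2 * degree, 2 * degree + 1)
    coeffs = np.exp(-1j * np.outer(ks, x)) @ y / len(x)
    h = smooth_filter(np.abs(ks) / degree)
    return lambda t: np.real(np.exp(1j * np.outer(np.atleast_1d(t), ks)) @ (h * coeffs))

def distributed_estimator(x, y, num_servers, degree):
    # Divide: each server sees a disjoint subset. Conquer: average the fits.
    fits = [local_estimator(x[j::num_servers], y[j::num_servers], degree)
            for j in range(num_servers)]
    return lambda t: np.mean([f(t) for f in fits], axis=0)

rng = np.random.default_rng(0)
target = lambda t: np.sin(3 * t) + 0.5 * np.cos(7 * t)
x = rng.uniform(0.0, 2.0 * np.pi, size=4000)       # random inputs
y = target(x) + 0.1 * rng.normal(size=x.size)      # noisy outputs
f_hat = distributed_estimator(x, y, num_servers=8, degree=12)
grid = np.linspace(0.0, 2.0 * np.pi, 200)
print(np.max(np.abs(f_hat(grid) - target(grid))))  # uniform error on a grid
```

Because each server's coefficient estimates are unbiased for uniformly sampled inputs, averaging the local fits recovers roughly the variance reduction of the full sample; loosely, this is the mechanism behind the result that the distributed rate can match the non-distributed (optimal) one when the number of servers does not grow too fast.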
Related papers
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to domain shifts.
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Learning on manifolds without manifold learning [0.0]
Function approximation based on data drawn randomly from an unknown distribution is an important problem in machine learning.
In this paper, we project the unknown manifold as a submanifold of an ambient hypersphere and study the question of constructing a one-shot approximation using specially designed kernels on the hypersphere (a toy kernel-smoothing sketch follows this entry).
arXiv Detail & Related papers (2024-02-20T03:27:53Z)
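
A toy rendering of the one-shot idea referenced above, under stated assumptions: the samples live on the 2-sphere (the ambient hypersphere), and a single normalized kernel-weighted pass over the data produces the estimate. The dot-product kernel and all names below are stand-ins, not the paper's specially designed localized kernels.

```python
import numpy as np

# Hedged sketch: one-shot kernel estimation on the 2-sphere. The kernel
# ((1 + <x, z>) / 2)^s concentrates near x = z as s grows; it is only a
# stand-in for the localized kernels constructed in the paper.

def sphere_kernel(X, Z, s=40):
    return ((1.0 + X @ Z.T) / 2.0) ** s          # rows of X, Z are unit vectors

def one_shot_estimate(X_train, y_train, X_eval, s=40):
    # Single weighted-average pass over the data: no iterative optimization.
    K = sphere_kernel(X_eval, X_train, s)
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = X[:, 2] ** 2 + 0.05 * rng.normal(size=len(X))    # noisy smooth target
Xe = rng.normal(size=(100, 3)); Xe /= np.linalg.norm(Xe, axis=1, keepdims=True)
print(np.max(np.abs(one_shot_estimate(X, y, Xe) - Xe[:, 2] ** 2)))  # sup error
```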
- Manifold Learning with Sparse Regularised Optimal Transport [0.17205106391379024]
Real-world datasets are subject to noisy observations and sampling, so that distilling information about the underlying manifold is a major challenge.
We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation.
We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise and exhibit these results in simulations.
arXiv Detail & Related papers (2023-07-19T08:05:46Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- A graph representation based on fluid diffusion model for multimodal data analysis: theoretical aspects and enhanced community detection [14.601444144225875]
We introduce a novel model for graph definition based on fluid diffusion.
Our method is able to strongly outperform state-of-the-art schemes for community detection in multimodal data analysis.
arXiv Detail & Related papers (2021-12-07T16:30:03Z)
- Multimodal Data Fusion in High-Dimensional Heterogeneous Datasets via Generative Models [16.436293069942312]
We are interested in learning probabilistic generative models from high-dimensional heterogeneous data in an unsupervised fashion.
We propose a general framework that combines disparate data types through the exponential family of distributions.
The proposed algorithm is presented in detail for the commonly encountered heterogeneous datasets with real-valued (Gaussian) and categorical (multinomial) features.
arXiv Detail & Related papers (2021-08-27T18:10:31Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little computational overhead (a generic robust-training sketch follows this entry).
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
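
The sketch below is a generic illustration of the robust-training idea, not the paper's algorithms: for a linear model under an l2 input perturbation of radius eps, the worst-case squared loss has the closed form (|w·x - y| + eps·||w||)^2, so the robust objective can be minimized by plain gradient descent with essentially no extra cost per step. All names and parameter values are assumptions.

```python
import numpy as np

# Hedged sketch: adversarially robust linear regression. The inner maximum
# max_{||d|| <= eps} (w.(x + d) - y)^2 equals (|w.x - y| + eps * ||w||)^2,
# so no inner optimization loop is needed.

def robust_fit(X, y, eps=0.3, lr=0.01, steps=2000):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        r = X @ w - y                              # residuals
        m = np.abs(r) + eps * np.linalg.norm(w)    # worst-case margins
        g = 2.0 * (X.T @ (m * np.sign(r))) / n     # gradient of mean(m^2) in w
        if np.linalg.norm(w) > 0:
            g += 2.0 * eps * np.mean(m) * w / np.linalg.norm(w)
        w -= lr * g
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)
print(robust_fit(X, y))  # coefficients shrink toward 0 versus least squares
```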
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linear issues are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
- Linear predictor on linearly-generated data with missing values: non consistency and solutions [0.0]
We study the seemingly-simple case where the target to predict is a linear function of the fully-observed data.
We show that, in the presence of missing values, the optimal predictor may not be linear (a small numerical illustration follows this entry).
arXiv Detail & Related papers (2020-02-03T11:49:35Z)
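
A small numerical illustration of that non-consistency claim, under an assumed toy setup (not taken from the paper): with y = x1 + x2, correlated standard Gaussian features, and x2 missing completely at random, the Bayes predictor is linear within each missingness pattern, but its slope on x1 differs across patterns (1 when x2 is observed, 1 + rho when it is missing), so no single linear function of the imputed features and the mask matches it.

```python
import numpy as np

# Hedged toy demo: a single linear predictor on zero-imputed data (even with
# the missingness mask as a feature) is strictly worse than the pattern-wise
# linear Bayes predictor whenever the features are correlated.

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8
x1 = rng.normal(size=n)
x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
y = x1 + x2
miss = rng.random(n) < 0.5                 # x2 missing completely at random
x2_imp = np.where(miss, 0.0, x2)           # zero imputation

# Best single linear predictor on (x1, imputed x2, mask, intercept).
A = np.column_stack([x1, x2_imp, miss.astype(float), np.ones(n)])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
mse_linear = np.mean((A @ beta - y) ** 2)

# Bayes predictor: linear per pattern, nonlinear overall.
oracle = np.where(miss, (1 + rho) * x1, x1 + x2)
mse_oracle = np.mean((oracle - y) ** 2)
print(mse_linear, mse_oracle)   # linear MSE strictly exceeds the oracle's
```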
This list is automatically generated from the titles and abstracts of the papers in this site.