Distributed Learning of Finite Gaussian Mixtures
- URL: http://arxiv.org/abs/2010.10412v3
- Date: Wed, 10 Nov 2021 20:55:42 GMT
- Title: Distributed Learning of Finite Gaussian Mixtures
- Authors: Qiong Zhang and Jiahua Chen
- Abstract summary: We study split-and-conquer approaches for the distributed learning of finite Gaussian mixtures.
The new estimator is shown to be consistent and to retain root-n consistency under general conditions.
Experiments based on simulated and real-world data show that the proposed split-and-conquer approach has statistical performance comparable to that of the global estimator.
- Score: 21.652015112462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in information technology have led to extremely large datasets that
are often kept in different storage centers. Existing statistical methods must
be adapted to overcome the resulting computational obstacles while retaining
statistical validity and efficiency. Split-and-conquer approaches have been
applied in many areas, including quantile processes, regression analysis,
principal eigenspaces, and exponential families. We study split-and-conquer
approaches for the distributed learning of finite Gaussian mixtures. We
recommend a reduction strategy and develop an effective MM algorithm. The new
estimator is shown to be consistent and to retain root-n consistency under some
general conditions. Experiments based on simulated and real-world data show
that the proposed split-and-conquer approach has statistical performance
comparable to that of the global estimator based on the full dataset, if the latter
is feasible. It can even slightly outperform the global estimator if the model
assumption does not match the real-world data. It also has better statistical
and computational performance than some existing methods.
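The approach described in the abstract has three stages: fit a mixture on each machine's local subset, pool all the local components into one oversized mixture, and reduce the pooled mixture back to the target number of components, the step for which the paper develops its MM algorithm (root-n consistency here means the aggregated estimator keeps the $O_p(n^{-1/2})$ convergence rate of the full-data estimator). The Python sketch below illustrates the split-fit-pool-reduce flow under stated assumptions: the paper's MM reduction is not reproduced, so the reduction step is replaced by a simple stand-in (k-means on pooled component means followed by moment matching within each cluster), and every function name and setting is illustrative rather than the authors' implementation.

```python
# Minimal split-and-conquer GMM sketch. The reduction step below is a
# k-means + moment-matching stand-in, NOT the paper's MM algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def local_fits(data_splits, n_components):
    """Split step: fit a GMM independently on each local subset."""
    return [GaussianMixture(n_components=n_components).fit(X)
            for X in data_splits]

def reduce_mixture(fits, n_components):
    """Conquer step: pool local components, then merge down to n_components."""
    m = len(fits)
    weights = np.concatenate([f.weights_ for f in fits]) / m   # (m*K,)
    means = np.vstack([f.means_ for f in fits])                # (m*K, d)
    covs = np.vstack([f.covariances_ for f in fits])           # (m*K, d, d)

    # Stand-in reduction: group the pooled components by k-means on their means.
    labels = KMeans(n_clusters=n_components, n_init=10).fit_predict(means)

    out_w, out_mu, out_cov = [], [], []
    for k in range(n_components):
        idx = labels == k
        w, wk = weights[idx], weights[idx].sum()
        mu = (w[:, None] * means[idx]).sum(axis=0) / wk
        # Moment matching: the merged covariance adds the spread of the
        # merged components' means around the merged mean.
        diff = means[idx] - mu
        cov = (w[:, None, None]
               * (covs[idx] + diff[:, :, None] * diff[:, None, :])
               ).sum(axis=0) / wk
        out_w.append(wk); out_mu.append(mu); out_cov.append(cov)
    return np.array(out_w), np.array(out_mu), np.array(out_cov)

# Toy usage: 4 "machines", a 2-component mixture in 2 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (2000, 2)),
               rng.normal(4.0, 1.0, (2000, 2))])
rng.shuffle(X)
w, mu, cov = reduce_mixture(local_fits(np.array_split(X, 4), 2), 2)
print(np.round(w, 2), np.round(mu, 1))  # weights near 0.5, means near 0 and 4
```

Moment matching preserves each merged cluster's first two moments, a common heuristic in Gaussian mixture reduction; it only stands in for the paper's reduction strategy, whose MM algorithm is what carries the consistency and root-n guarantees quoted above.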
Related papers
- Testing Generalizability in Causal Inference [3.547529079746247]
There is no formal procedure for statistically evaluating generalizability in machine learning algorithms.
We propose a systematic and quantitative framework for evaluating model generalizability in causal inference settings.
By basing simulations on real data, our method ensures more realistic evaluations, a feature often missing in current work.
arXiv Detail & Related papers (2024-11-05T11:44:00Z)
- Uncertainty estimation in satellite precipitation spatial prediction by combining distributional regression algorithms [3.8623569699070353]
We introduce the concept of distributional regression for the engineering task of creating precipitation datasets through data merging.
We propose new ensemble learning methods that can be valuable not only for spatial prediction but also for prediction problems in general.
arXiv Detail & Related papers (2024-06-29T05:58:00Z)
- Iterative Methods for Full-Scale Gaussian Process Approximations for Large Spatial Data [9.913418444556486]
We show how iterative methods can be used to reduce the computational costs for calculating likelihoods, gradients, and predictive distributions with FSAs.
We also present a novel, accurate, and fast way to calculate predictive variances relying on estimations and iterative methods.
All methods are implemented in a free C++ software library with high-level Python and R packages.
arXiv Detail & Related papers (2024-05-23T12:25:22Z)
- Distributed Learning of Mixtures of Experts [0.0]
We deal with datasets that are either distributed by nature or so large that distributing the computations is a standard way to proceed.
We propose a distributed learning approach for mixtures of experts (MoE) models with an aggregation strategy that constructs a reduction estimator from local estimators fitted in parallel to distributed subsets of the data.
arXiv Detail & Related papers (2023-12-15T15:26:13Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods while using far fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Distributed Semi-Supervised Sparse Statistical Inference [6.685997976921953]
A debiased estimator is a crucial tool in statistical inference for high-dimensional model parameters.
Traditional methods require computing a debiased estimator on every machine.
An efficient multi-round distributed debiased estimator, which integrates both labeled and unlabeled data, is developed.
arXiv Detail & Related papers (2023-06-17T17:30:43Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Real-Time Regression with Dividing Local Gaussian Processes [62.01822866877782]
Local Gaussian processes are a novel, computationally efficient modeling approach based on Gaussian process regression.
Due to an iterative, data-driven division of the input space, they achieve a sublinear computational complexity in the total number of training points in practice.
A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
arXiv Detail & Related papers (2020-06-16T18:43:31Z)
- Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that yields statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)