Diversity sampling is an implicit regularization for kernel methods
- URL: http://arxiv.org/abs/2002.08616v1
- Date: Thu, 20 Feb 2020 08:24:42 GMT
- Title: Diversity sampling is an implicit regularization for kernel methods
- Authors: Micha\"el Fanuel and Joachim Schreurs and Johan A.K. Suykens
- Abstract summary: We show that Nystr\"om kernel regression with diverse landmarks increases the accuracy of the regression in sparser regions of the dataset.
A greedy heuristic is also proposed to select diverse samples of significant size within large datasets when exact DPP sampling is not practically feasible.
- Score: 13.136143245702915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Kernel methods have achieved very good performance on large scale regression
and classification problems, by using the Nystr\"om method and preconditioning
techniques. The Nystr\"om approximation -- based on a subset of landmarks --
gives a low rank approximation of the kernel matrix, and is known to provide a
form of implicit regularization. We further elaborate on the impact of sampling
diverse landmarks for constructing the Nystr\"om approximation in supervised as
well as unsupervised kernel methods. By using Determinantal Point Processes for
sampling, we obtain additional theoretical results concerning the interplay
between diversity and regularization. Empirically, we demonstrate the
advantages of training kernel methods based on subsets made of diverse points.
In particular, if the dataset has a dense bulk and a sparser tail, we show that
Nystr\"om kernel regression with diverse landmarks increases the accuracy of
the regression in sparser regions of the dataset, with respect to a uniform
landmark sampling. A greedy heuristic is also proposed to select diverse
samples of significant size within large datasets when exact DPP sampling is
not practically feasible.
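The pipeline described in the abstract -- pick diverse landmarks, then fit Nystr\"om kernel regression on them -- can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the greedy rule shown is a standard pivoted-Cholesky / max-determinant heuristic used here as a stand-in for exact DPP sampling, and all function names are ours.

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_diverse_landmarks(X, m, gamma=0.5):
    """Greedy max-determinant landmark selection (pivoted Cholesky on the
    kernel matrix): a cheap stand-in for exact k-DPP sampling."""
    n = X.shape[0]
    d = np.ones(n)            # residual kernel diagonal (k(x, x) = 1 for RBF)
    L = np.zeros((m, n))      # rows of the partial Cholesky factor
    idx = []
    for t in range(m):
        j = int(np.argmax(d))             # most "novel" remaining point
        pivot = d[j]
        idx.append(j)
        kj = rbf(X, X[j:j + 1], gamma).ravel()
        L[t] = (kj - L[:t].T @ L[:t, j]) / np.sqrt(pivot)
        d = np.maximum(d - L[t] ** 2, 0.0)
        d[idx] = -np.inf                  # never reselect a landmark
    return np.array(idx)

def nystrom_krr_fit(X, y, Z, lam=1e-3, gamma=0.5):
    """Nystrom kernel ridge regression in the landmark expansion:
    minimise ||K_nm a - y||^2 + lam * a^T K_mm a."""
    Knm, Kmm = rbf(X, Z, gamma), rbf(Z, Z, gamma)
    return np.linalg.solve(Knm.T @ Knm + lam * Kmm, Knm.T @ y)

def nystrom_krr_predict(Xq, Z, a, gamma=0.5):
    return rbf(Xq, Z, gamma) @ a
```

Because the greedy rule always picks the point with the largest residual variance, landmarks spread into sparse regions of the data instead of clustering in the dense bulk, which is the effect the abstract attributes to diverse sampling.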
Related papers
- MIK: Modified Isolation Kernel for Biological Sequence Visualization, Classification, and Clustering [3.9146761527401424]
This research proposes a novel approach called the Modified Isolation Kernel (MIK) as an alternative to the Gaussian kernel.
MIK uses adaptive density estimation to capture local structures more accurately and integrates robustness measures.
It exhibits improved preservation of the local and global structure and enables better visualization of clusters and subclusters in the embedded space.
arXiv Detail & Related papers (2024-10-21T06:57:09Z) - A Bayesian Approach Toward Robust Multidimensional Ellipsoid-Specific Fitting [0.0]
This work presents a novel and effective method for fitting multidimensional ellipsoids to scattered data contaminated by noise and outliers.
We incorporate a uniform prior distribution to constrain the search for primitive parameters within an ellipsoidal domain.
We apply it to a wide range of practical applications such as microscopy cell counting, 3D reconstruction, geometric shape approximation, and magnetometer calibration tasks.
arXiv Detail & Related papers (2024-07-27T14:31:51Z) - Samplet basis pursuit: Multiresolution scattered data approximation with sparsity constraints [0.0]
We consider scattered data approximation in samplet coordinates with $\ell_1$-regularization.
By using the Riesz isometry, we embed samplets into reproducing kernel Hilbert spaces.
We argue that the class of signals that are sparse with respect to the embedded samplet basis is considerably larger than the class of signals that are sparse with respect to the basis of kernel translates.
arXiv Detail & Related papers (2023-06-16T21:20:49Z) - Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z) - Adaptive Sketches for Robust Regression with Importance Sampling [64.75899469557272]
We introduce data structures for solving robust regression through stochastic gradient descent (SGD).
Our algorithm effectively runs $T$ steps of SGD with importance sampling while using sublinear space and just making a single pass over the data.
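A minimal sketch of the importance-sampling idea in this summary, under simplifying assumptions: for least squares, sampling row $i$ with probability proportional to $\|a_i\|^2$ and taking step size $1/\|a_i\|^2$ coincides with randomized Kaczmarz (Strohmer & Vershynin). This illustrates importance-sampled SGD only; it is not the paper's sublinear-space sketching data structure, and the function name is ours.

```python
import numpy as np

def importance_sampled_sgd(A, b, steps=2000, seed=0):
    """SGD for min_x ||A x - b||^2, sampling row i with probability
    proportional to ||a_i||^2 (importance sampling). With step size
    1 / ||a_i||^2 each update is a randomized Kaczmarz projection."""
    rng = np.random.default_rng(seed)
    norms2 = (A ** 2).sum(axis=1)
    p = norms2 / norms2.sum()          # importance-sampling distribution
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        i = rng.choice(A.shape[0], p=p)            # sampled row index
        x += (b[i] - A[i] @ x) * A[i] / norms2[i]  # projection step
    return x
```

Rows with large norm contribute large gradients, so sampling them more often (and downweighting the step accordingly) keeps the updates unbiased while reducing variance.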
arXiv Detail & Related papers (2022-07-16T03:09:30Z) - Local optimisation of Nystr\"om samples through stochastic gradient descent [32.53634754382956]
We consider an unweighted variation of the squared-kernel discrepancy criterion as a surrogate for the classical criteria used to assess the Nystr\"om approximation accuracy.
We perform numerical experiments which demonstrate that the local minimisation of the radial SKD yields Nystr\"om samples with improved Nystr\"om approximation accuracy.
arXiv Detail & Related papers (2022-03-24T18:17:27Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Determinantal consensus clustering [77.34726150561087]
We propose the use of determinantal point processes or DPP for the random restart of clustering algorithms.
DPPs favor diversity of the center points within subsets.
We show through simulations that, contrary to DPP, uniform random sampling fails both to ensure diversity and to obtain a good coverage of all data facets.
arXiv Detail & Related papers (2021-02-07T23:48:24Z) - Random extrapolation for primal-dual coordinate descent [61.55967255151027]
We introduce a randomly extrapolated primal-dual coordinate descent method that adapts to sparsity of the data matrix and the favorable structures of the objective function.
We show almost sure convergence of the sequence and optimal sublinear convergence rates for the primal-dual gap and objective values, in the general convex-concave case.
arXiv Detail & Related papers (2020-07-13T17:39:35Z) - Robust M-Estimation Based Bayesian Cluster Enumeration for Real Elliptically Symmetric Distributions [5.137336092866906]
Robustly determining the optimal number of clusters in a data set is an essential factor in a wide range of applications.
This article generalizes the approach so that it can be used with any arbitrary Real Elliptically Symmetric (RES) distributed mixture model.
We derive a robust criterion for data sets with finite sample size, and also provide an approximation to reduce the computational cost at large sample sizes.
arXiv Detail & Related papers (2020-05-04T11:44:49Z) - Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystr\"om method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
arXiv Detail & Related papers (2020-02-21T00:43:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.