Generalized mean shift with triangular kernel profile
- URL: http://arxiv.org/abs/2001.02165v1
- Date: Tue, 7 Jan 2020 16:46:32 GMT
- Title: Generalized mean shift with triangular kernel profile
- Authors: Sébastien Razakarivony and Axel Barrau
- Abstract summary: The Mean Shift algorithm is a popular way to find modes of some probability density functions taking a specific kernel-based shape.
We show that a novel Mean Shift variant adapted to a specific class of kernels can be derived and proven to converge after a finite number of iterations.
- Score: 5.381004207943597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mean shift algorithm is a popular way to find modes of some probability
density functions taking a specific kernel-based shape, used for clustering or
visual tracking. Since its introduction, it has undergone several practical
improvements and generalizations, as well as deep theoretical analysis mainly
focused on its convergence properties. In spite of encouraging results, this
question has not yet received a clear general answer. In this paper we focus on
a specific class of kernels, adapted in particular to the distribution
clustering applications which motivated this work. We show that a novel Mean
Shift variant adapted to them can be derived and proven to converge after a
finite number of iterations. In order to situate this new class of methods in
the general picture of Mean Shift theory, we also give a synthetic overview
of existing results in this field.
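For readers who want the basic mechanics, the following is a minimal Python sketch of a plain mean shift iteration in which each sample is weighted by a triangular function of its distance to the current estimate. The bandwidth h, the weighting function and the stopping rule are illustrative assumptions; this is a generic sketch of mean shift with a triangular weighting, not the paper's exact variant or its finite-termination construction.

```python
# Minimal sketch of a mean shift iteration with a triangular weighting of distances.
# Assumptions (not taken from the paper): bandwidth h, tolerance-based stopping rule,
# and the exact form of the weighting function.
import numpy as np

def triangular_weights(dists, h):
    """Weight 1 - d/h inside the bandwidth, 0 outside."""
    return np.maximum(1.0 - dists / h, 0.0)

def mean_shift_mode(x0, samples, h=1.0, max_iter=100, tol=1e-6):
    """Repeatedly move x to the weighted mean of nearby samples until it stops moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dists = np.linalg.norm(samples - x, axis=1)
        w = triangular_weights(dists, h)
        if w.sum() == 0.0:  # no sample falls inside the bandwidth window
            break
        x_new = (w[:, None] * samples).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two well-separated blobs; each starting point should drift to the nearest mode.
    samples = np.vstack([rng.normal(0.0, 0.3, (200, 2)),
                         rng.normal(3.0, 0.3, (200, 2))])
    print(mean_shift_mode([0.5, 0.5], samples, h=1.0))  # expected near (0, 0)
    print(mean_shift_mode([2.5, 2.5], samples, h=1.0))  # expected near (3, 3)
```

Because the triangular weights vanish outside the bandwidth, each update depends only on samples within a local window; the paper's contribution is a variant built on this kind of kernel for which convergence in a finite number of iterations can actually be proven.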
Related papers
- Learning to Embed Distributions via Maximum Kernel Entropy [0.0]
Empirical data can often be considered as samples from a set of probability distributions.
Kernel methods have emerged as a natural approach for learning to classify these distributions.
We propose a novel objective for the unsupervised learning of a data-dependent distribution kernel.
arXiv Detail & Related papers (2024-08-01T13:34:19Z) - Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z) - Single Domain Generalization via Normalised Cross-correlation Based
Convolutions [14.306250516592304]
Single Domain Generalization aims to train robust models using data from a single source.
We propose a novel operator called XCNorm that computes the normalized cross-correlation between weights and an input feature patch (a small sketch of this computation appears after this list).
We show that deep neural networks composed of this operator are robust to common semantic distribution shifts.
arXiv Detail & Related papers (2023-07-12T04:15:36Z) - Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined ones.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Adaptative clustering by minimization of the mixing entropy criterion [0.0]
We present a clustering method and provide an explanation of a phenomenon encountered in the applied statistics literature since the 1990s.
This phenomenon is the natural adaptability of the order when using a clustering method derived from the famous EM algorithm.
We define a new statistic, the relative entropic order, that represents the number of clumps in the target distribution.
arXiv Detail & Related papers (2022-03-22T07:47:02Z) - On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that the benefit of large learning rates can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z) - Provably Strict Generalisation Benefit for Invariance in Kernel Methods [0.0]
We build on the function space perspective of Elesedy and Zaidi arXiv:2102.10333 to derive a strictly non-zero generalisation benefit.
We find that generalisation is governed by a notion of effective dimension that arises from the interplay between the kernel and the group.
arXiv Detail & Related papers (2021-06-04T08:55:28Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Convex Representation Learning for Generalized Invariance in
Semi-Inner-Product Space [32.442549424823355]
In this work we develop an algorithm for a variety of generalized invariant representations in a semi-inner-product space, for which representer theorems and bounds are established.
This allows representations to be learned efficiently and effectively, along with accurate predictions, as confirmed in our experiments.
arXiv Detail & Related papers (2020-04-25T18:54:37Z)
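As a concrete illustration of the normalized cross-correlation computation behind the XCNorm operator mentioned in the list above, here is a minimal Python sketch for a single weight kernel and a single input patch. The mean-centring, the epsilon stabiliser and the function name are assumptions made for illustration; the paper's exact definition of XCNorm may differ.

```python
# Illustrative only: cosine-style similarity between a weight kernel and one input patch.
# Mean-centring, the eps stabiliser and the function name are assumptions, not the paper's definition.
import numpy as np

def ncc_response(weights, patch, eps=1e-8):
    """Normalized cross-correlation between mean-centred weights and an input patch."""
    w = weights.ravel() - weights.mean()
    p = patch.ravel() - patch.mean()
    return float(w @ p / (np.linalg.norm(w) * np.linalg.norm(p) + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(3, 3))
    patch = 5.0 * w + 2.0            # same pattern, different scale and offset
    print(ncc_response(w, patch))    # close to 1: the response ignores the rescaling and offset
```

Because the response is unchanged under a positive rescaling and a constant offset of the patch, features built from it can be expected to be less sensitive to simple intensity shifts, which is the kind of robustness to distribution shift the summary refers to.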
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.