Optimal quantisation of probability measures using maximum mean
discrepancy
- URL: http://arxiv.org/abs/2010.07064v4
- Date: Fri, 12 Feb 2021 11:40:18 GMT
- Title: Optimal quantisation of probability measures using maximum mean
discrepancy
- Authors: Onur Teymur, Jackson Gorham, Marina Riabiz and Chris J. Oates
- Abstract summary: Several researchers have proposed minimisation of maximum mean discrepancy (MMD) as a method to quantise probability measures.
We consider sequential algorithms that greedily minimise MMD over a discrete candidate set.
We investigate a variant that applies this technique to a mini-batch of the candidate set at each iteration.
- Score: 10.29438865750845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several researchers have proposed minimisation of maximum mean discrepancy
(MMD) as a method to quantise probability measures, i.e., to approximate a
target distribution by a representative point set. We consider sequential
algorithms that greedily minimise MMD over a discrete candidate set. We propose
a novel non-myopic algorithm and, in order to both improve statistical
efficiency and reduce computational cost, we investigate a variant that applies
this technique to a mini-batch of the candidate set at each iteration. When the
candidate points are sampled from the target, the consistency of these new
algorithms, and of their mini-batch variants, is established. We demonstrate the
algorithms on a range of important computational problems, including
optimisation of nodes in Bayesian cubature and the thinning of Markov chain
output.
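To make the approach concrete, the following is a minimal NumPy sketch of greedy MMD minimisation over a discrete candidate set, including the mini-batch variant described in the abstract. The Gaussian kernel, the batch size, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(x, y, ell=1.0):
    """Gaussian (RBF) kernel between rows of x and rows of y."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def greedy_mmd_thinning(candidates, m, batch_size=None, rng=None):
    """Greedily select m points from `candidates` to minimise the MMD
    to the empirical measure of the full candidate set.

    If batch_size is given, each greedy step scores only a random
    mini-batch of the candidates (the mini-batch variant)."""
    rng = np.random.default_rng(rng)
    n = candidates.shape[0]
    # Precomputing the full Gram matrix is for clarity only; a
    # large-scale version would evaluate kernel entries lazily.
    K = gaussian_kernel(candidates, candidates)
    mean_embed = K.mean(axis=1)        # k(x_i, P_hat) for each candidate
    selected = []
    running = np.zeros(n)              # sum over selected j of k(x_i, y_j)
    for t in range(m):
        pool = rng.choice(n, batch_size, replace=False) if batch_size else np.arange(n)
        # Greedy criterion: the x-dependent part of the squared MMD of
        # the augmented point set, 0.5*k(x,x) + running(x) - (t+1)*mean_embed(x).
        scores = 0.5 * np.diag(K)[pool] + running[pool] - (t + 1) * mean_embed[pool]
        pick = pool[np.argmin(scores)]
        selected.append(pick)
        running += K[:, pick]
    return candidates[np.array(selected)]

# Example: thin 1000 Gaussian samples down to 20 representative points.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
points = greedy_mmd_thinning(X, m=20, batch_size=100, rng=1)
print(points.shape)
```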
Related papers
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of the various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
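As an illustration of gap-based rejection in the fixed-budget setting, here is a hedged sketch in which arms are dropped once their empirical gap to the current leader exceeds a confidence radius. The round-robin sampling and the threshold are assumptions, not the paper's sred rule.

```python
import numpy as np

def gap_based_rejection(pull, n_arms, budget):
    """Fixed-budget best-arm identification sketch: sample all surviving
    arms each round and reject any arm whose empirical gap to the current
    leader exceeds a confidence radius (illustrative threshold only)."""
    means = np.zeros(n_arms)
    counts = np.zeros(n_arms)
    active = list(range(n_arms))
    t = 0
    while t < budget and len(active) > 1:
        for a in list(active):
            r = pull(a)
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]   # incremental mean
            t += 1
        leader = max(active, key=lambda a: means[a])
        radius = np.sqrt(2.0 * np.log(budget) / counts[leader])  # assumed rule
        active = [a for a in active
                  if a == leader or means[leader] - means[a] <= radius]
    return max(active, key=lambda a: means[a])

# Example: three Gaussian arms; the true best arm (index 2) has mean 0.8.
rng = np.random.default_rng(0)
true_means = [0.2, 0.5, 0.8]
print(gap_based_rejection(lambda a: rng.normal(true_means[a], 1.0),
                          n_arms=3, budget=3000))
```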
- Algorithme EM régularisé (Regularized EM Algorithm) [0.0]
This paper presents a regularized version of the EM algorithm that efficiently uses prior knowledge to cope with a small sample size.
Experiments on real data highlight the good performance of the proposed algorithm for clustering purposes.
arXiv Detail & Related papers (2023-07-04T23:19:25Z)
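One common way to regularise EM with prior knowledge, sketched below for a 1-D Gaussian mixture, is to shrink each M-step variance toward a prior value, which stabilises estimates when the sample is small. The shrinkage weight rho and prior_var are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def regularized_em_gmm(x, k=2, n_iter=50, rho=0.1, prior_var=1.0, seed=0):
    """EM for a 1-D Gaussian mixture with a regularised M-step: each
    component variance is shrunk toward prior_var with weight rho.
    Illustrative sketch, not the paper's exact estimator."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mu = rng.choice(x, k, replace=False)          # initialise means at data points
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)  # for numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step, with shrinkage of the variance toward the prior value.
        nk = resp.sum(axis=0)
        pi = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var_ml = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = (1 - rho) * var_ml + rho * prior_var
    return pi, mu, var

# Small-sample example: 30 points from a two-component mixture.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 15), rng.normal(2, 1, 15)])
print(regularized_em_gmm(x))
```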
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
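A minimal sketch of the one-sample idea: Frank-Wolfe over the l1 ball, where each iteration draws a single sample and refreshes a SAG-style table of per-sample gradients. The least-squares objective, the step-size rule, and the omission of the non-smooth term are simplifying assumptions.

```python
import numpy as np

def one_sample_sag_cgm(A, b, radius=1.0, n_iter=2000, seed=0):
    """Frank-Wolfe (conditional gradient) over the l1 ball for the
    finite sum f(w) = (1/n) * sum_i (a_i.w - b_i)^2. Each iteration
    draws ONE sample and refreshes its slot in a SAG-style table of
    per-sample gradients. Simplified sketch: the paper's method also
    handles a non-smooth term, which is omitted here."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    grad_table = np.zeros((n, d))      # last-seen gradient of each sample
    grad_avg = np.zeros(d)             # running average of the table
    for t in range(1, n_iter + 1):
        i = rng.integers(n)
        g_i = 2 * (A[i] @ w - b[i]) * A[i]      # gradient of sample i
        grad_avg += (g_i - grad_table[i]) / n   # refresh the average
        grad_table[i] = g_i
        # Linear minimisation oracle for the l1 ball: a signed vertex.
        j = np.argmax(np.abs(grad_avg))
        s = np.zeros(d)
        s[j] = -radius * np.sign(grad_avg[j])
        w += (2.0 / (t + 2)) * (s - w)          # classic FW step size
    return w

# Example: sparse recovery inside the l1 ball of radius 1.
rng = np.random.default_rng(2)
A = rng.normal(size=(200, 10))
w_true = np.zeros(10)
w_true[0] = 0.7
b = A @ w_true + 0.01 * rng.normal(size=200)
print(np.round(one_sample_sag_cgm(A, b), 2))
```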
- Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
arXiv Detail & Related papers (2021-11-06T07:29:55Z)
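A single-node sketch of proximal gradient with random reshuffling on a lasso-type non-smooth finite sum; the distributed, multi-agent aspect of the paper is omitted, and the constant step size is an assumption.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_grad_random_reshuffle(A, b, lam=0.1, step=0.01, n_epochs=50, seed=0):
    """Proximal gradient with random reshuffling for
    (1/n) * sum_i (a_i.w - b_i)^2 + lam * ||w||_1.
    Each epoch visits every sample exactly once in a fresh random
    order, applying a gradient step followed by the l1 prox.
    Single-node sketch; the paper's algorithm runs across agents."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in rng.permutation(n):              # random reshuffling
            g = 2 * (A[i] @ w - b[i]) * A[i]
            w = soft_threshold(w - step * g, step * lam)
    return w

# Example usage on a small sparse regression problem.
rng = np.random.default_rng(3)
A = rng.normal(size=(100, 8))
w_true = np.array([1.0, -0.5, 0, 0, 0, 0, 0, 0])
b = A @ w_true + 0.05 * rng.normal(size=100)
print(np.round(prox_grad_random_reshuffle(A, b), 2))
```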
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in the general case and obtain a convergence-rate estimate in which errors do not accumulate over the course of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
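For reference, a minimal mirror descent sketch with the entropy mirror map on the probability simplex (exponentiated gradient), the textbook setting for Lipschitz-continuous convex functionals; the paper's parallel stochastic variant and its MDP application are not reproduced here.

```python
import numpy as np

def entropy_mirror_descent(grad, x0, step=0.1, n_iter=500):
    """Mirror descent on the probability simplex with the entropy
    mirror map: multiply by exp(-step * gradient) and renormalise
    (exponentiated gradient). Returns the average iterate, which is
    the quantity that enjoys the standard convergence guarantee."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for t in range(n_iter):
        x = x * np.exp(-step * grad(x))
        x /= x.sum()
        avg += (x - avg) / (t + 1)    # running average of the iterates
    return avg

# Example: minimise the quadratic <x, Qx> over the simplex.
Q = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.0, 0.3],
              [0.1, 0.3, 3.0]])
x0 = np.full(3, 1.0 / 3.0)
print(entropy_mirror_descent(lambda x: 2 * Q @ x, x0))
```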
- Performance analysis of greedy algorithms for minimising a Maximum Mean Discrepancy [0.0]
We show that the finite-sample-size approximation error, measured by the MMD, decreases as $1/n$ for SBQ (sequential Bayesian quadrature).
The upper bound on the approximation error is slightly better for SBQ, but the other methods are significantly faster.
arXiv Detail & Related papers (2021-01-19T11:18:51Z)
- Asymptotic study of stochastic adaptive algorithm in non-convex landscape [2.1320960069210484]
This paper studies some asymptotic properties of adaptive algorithms widely used in optimization and machine learning, among them Adagrad and RMSProp, which are involved in most blackbox deep learning algorithms.
arXiv Detail & Related papers (2020-12-10T12:54:45Z)
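For concreteness, minimal NumPy versions of the Adagrad and RMSProp updates that this line of work analyses; the hyperparameter values are the usual defaults, not the paper's.

```python
import numpy as np

def adagrad_step(w, g, accum, lr=0.01, eps=1e-8):
    """Adagrad: divide the step by the root of the SUM of squared gradients."""
    accum = accum + g ** 2
    return w - lr * g / (np.sqrt(accum) + eps), accum

def rmsprop_step(w, g, avg, lr=0.001, beta=0.9, eps=1e-8):
    """RMSProp: divide by the root of an exponential MOVING AVERAGE instead,
    so the effective step size does not vanish as training proceeds."""
    avg = beta * avg + (1 - beta) * g ** 2
    return w - lr * g / (np.sqrt(avg) + eps), avg

# Example: minimise f(w) = ||w||^2 / 2 with RMSProp; grad f(w) = w.
w, avg = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(2000):
    w, avg = rmsprop_step(w, w, avg)
print(w)
```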
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- MLE-guided parameter search for task loss minimization in neural sequence modeling [83.83249536279239]
Neural autoregressive sequence models are used to generate sequences in a variety of natural language processing (NLP) tasks.
We propose maximum likelihood guided parameter search (MGS), which samples from a distribution over update directions that is a mixture of random search around the current parameters and around the maximum likelihood gradient.
Our experiments show that MGS is capable of optimizing sequence-level losses, with substantial reductions in repetition and non-termination in sequence completion, and similar improvements to those of minimum risk training in machine translation.
arXiv Detail & Related papers (2020-06-04T22:21:22Z)
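A schematic of the MGS sampling rule as described in the summary: candidate update directions are drawn from a mixture of random-search noise and noise around the (negative) maximum-likelihood gradient. The argmin selection is a simplification of the paper's weighted update, and the toy losses, mixture weight, and sign convention are assumptions.

```python
import numpy as np

def mgs_step(theta, mle_grad, task_loss, rng, n_cand=8, sigma=0.1,
             mix=0.5, lr=0.1):
    """One schematic MGS step: sample candidate update directions from a
    mixture of random search (noise around zero) and noise around the
    negative MLE gradient, then move along the candidate with the lowest
    task loss. The paper reweights candidates rather than taking an argmin."""
    g = mle_grad(theta)
    cands = []
    for _ in range(n_cand):
        centre = -lr * g if rng.random() < mix else np.zeros_like(theta)
        cands.append(centre + sigma * rng.normal(size=theta.shape))
    losses = [task_loss(theta + d) for d in cands]
    return theta + cands[int(np.argmin(losses))]

# Toy example: the "task loss" is piecewise constant (non-differentiable),
# while the MLE gradient comes from a smooth surrogate ||theta||^2.
task = lambda th: np.sum(np.abs(np.round(th)))
surrogate_grad = lambda th: 2 * th
rng = np.random.default_rng(4)
theta = np.array([1.7, -2.3])
for _ in range(200):
    theta = mgs_step(theta, surrogate_grad, task, rng)
print(task(theta), theta)
```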
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS), where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)