Approximate Bayesian Computation of Bézier Simplices
- URL: http://arxiv.org/abs/2104.04679v2
- Date: Tue, 13 Apr 2021 01:44:44 GMT
- Title: Approximate Bayesian Computation of Bézier Simplices
- Authors: Akinori Tanaka, Akiyoshi Sannai, Ken Kobayashi, and Naoki Hamada
- Abstract summary: We extend the Bézier simplex model to a probabilistic one and propose a new learning algorithm for it.
An experimental evaluation on publicly available problem instances shows that the new algorithm converges on a finite sample.
- Score: 13.105764669733093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bézier simplex fitting algorithms have recently been proposed to
approximate the Pareto set/front of multi-objective continuous optimization
problems. These methods have been shown to successfully approximate various
shapes of Pareto sets/fronts when the sample points lie exactly on the Pareto
set/front. However, if the sample points scatter away from the Pareto
set/front, those methods often suffer from over-fitting. To overcome this
issue, in this paper, we extend the Bézier simplex model to a probabilistic
one and propose a new learning algorithm for it, which falls within the
framework of approximate Bayesian computation (ABC) based on the Wasserstein
distance. We also study the convergence property of the Wasserstein ABC
algorithm. An extensive experimental evaluation on publicly available problem
instances shows that the new algorithm converges on a finite sample. Moreover,
it outperforms the deterministic fitting methods on noisy instances.
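For intuition about Wasserstein-based ABC, here is a minimal rejection-sampling sketch on a hypothetical one-dimensional toy model (a Gaussian location model, not the paper's Bézier simplex model): prior draws are accepted when data simulated under them lie within a tolerance of the observations in 1-Wasserstein distance.

```python
import numpy as np

rng = np.random.default_rng(0)

def wasserstein_1d(a, b):
    """1-Wasserstein distance between two equal-size empirical samples:
    the mean absolute difference of their order statistics."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

# Hypothetical toy model: observations come from N(mu*, 1) with unknown
# mu* = 2.0; the prior on mu is uniform on [-5, 5].
observed = rng.normal(2.0, 1.0, size=200)

def abc_rejection(observed, n_draws=5000, tolerance=0.2):
    """ABC rejection sampling: accept a prior draw when a dataset simulated
    under it is within `tolerance` of the observations in Wasserstein distance."""
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)                # sample parameter from the prior
        simulated = rng.normal(mu, 1.0, size=200)  # simulate a dataset under mu
        if wasserstein_1d(observed, simulated) < tolerance:
            accepted.append(mu)
    return np.array(accepted)

posterior = abc_rejection(observed)  # accepted draws approximate the ABC posterior
```

Shrinking the tolerance tightens the approximation to the true posterior at the cost of a lower acceptance rate, which is the usual ABC trade-off.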
Related papers
- Learning Rate Free Sampling in Constrained Domains [21.853333421463603]
We introduce a suite of new particle-based algorithms for sampling in constrained domains which are entirely learning rate free.
We demonstrate the performance of our algorithms on a range of numerical examples, including sampling from targets on the simplex.
arXiv Detail & Related papers (2023-05-24T09:31:18Z) - Min-Max Optimization Made Simple: Approximating the Proximal Point Method via Contraction Maps [77.8999425439444]
We present a first-order method that admits near-optimal convergence rates for convex/concave min-max problems.
Our work is based on the fact that the update rule of the Proximal Point method can be approximated up to any desired accuracy.
arXiv Detail & Related papers (2023-01-10T12:18:47Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named the Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on strong convexity.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Improved Algorithms for Agnostic Pool-based Active Classification [20.12178157010804]
We consider active learning for binary classification in the agnostic pool-based setting.
Our algorithm is superior to state-of-the-art active learning algorithms on image classification datasets.
arXiv Detail & Related papers (2021-05-13T18:24:30Z) - Projection Robust Wasserstein Barycenter [36.97843660480747]
Approximating the Wasserstein barycenter is numerically challenging because of the curse of dimensionality.
This paper proposes the projection robust Wasserstein barycenter (PRWB) that mitigates the curse of dimensionality.
arXiv Detail & Related papers (2021-02-05T19:23:35Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient stochastic distributed optimization, in which machines can independently compute gradients and cooperate.
Our algorithm is based on a new concentration technique, and we bound its sample complexity.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z) - Mean shift cluster recognition method implementation in the nested sampling algorithm [0.0]
Nested sampling is an efficient algorithm for the calculation of the Bayesian evidence and posterior parameter probability distributions.
Here we present a new solution based on the mean shift cluster recognition method implemented in a random walk search algorithm.
arXiv Detail & Related papers (2020-01-31T15:04:30Z)
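Two of the related papers above concern Wasserstein barycenters. As a minimal illustration (a textbook one-dimensional special case, not the methods of those papers): for equal-size empirical samples in one dimension, the Wasserstein barycenter can be computed by averaging the sorted samples, i.e., averaging their quantile functions.

```python
import numpy as np

def barycenter_1d(samples):
    """1-D Wasserstein barycenter of equal-size empirical samples:
    average the sorted samples (i.e., average the quantile functions)."""
    return np.sort(np.asarray(samples), axis=1).mean(axis=0)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 1000)   # empirical sample centered at 0
b = rng.normal(4.0, 1.0, 1000)   # empirical sample centered at 4
center = barycenter_1d([a, b])   # interpolates between the two distributions
```

In higher dimensions this closed form no longer exists, which is exactly why the barycenter papers above develop specialized solvers.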
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.