Mean shift cluster recognition method implementation in the nested
sampling algorithm
- URL: http://arxiv.org/abs/2002.01431v1
- Date: Fri, 31 Jan 2020 15:04:30 GMT
- Title: Mean shift cluster recognition method implementation in the nested
sampling algorithm
- Authors: M. Trassinelli (INSP-E10, INSP), Pierre Ciccodicola (INSP-E10, INSP)
- Abstract summary: Nested sampling is an efficient algorithm for the calculation of the Bayesian evidence and posterior parameter probability distributions.
Here we present a new solution based on the mean shift cluster recognition method implemented in a random walk search algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nested sampling is an efficient algorithm for the calculation of the Bayesian
evidence and posterior parameter probability distributions. It is based on the
step-by-step exploration of the parameter space by Monte Carlo sampling with a
series of value sets called live points that evolve towards the region of
interest, i.e. where the likelihood function is maximal. In the presence of
several local likelihood maxima, the algorithm has difficulty converging, and
systematic errors can be introduced by unexplored regions of the parameter
volume. To avoid this, different methods have been proposed in the literature
for an efficient search of new live points, even in the presence of local
maxima. Here we present a new solution based on the mean shift cluster
recognition method implemented in a random walk search algorithm. The cluster
recognition is integrated into the Bayesian analysis program NestedFit and
tested on the analysis of some difficult cases. Compared to the analysis
results without cluster recognition, the computation time is considerably
reduced while the entire parameter space is efficiently explored, which
translates into a smaller uncertainty on the extracted value of the Bayesian
evidence.
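As an illustration of the clustering ingredient, here is a minimal NumPy sketch of mean shift with a flat kernel applied to a set of "live points" drawn around two separated maxima. The bandwidth value, the flat kernel, and the mode-merging threshold are illustrative assumptions, not details taken from NestedFit:

```python
import numpy as np

def mean_shift(points, bandwidth, tol=1e-5, max_iter=300):
    """Shift every point towards the mean of its neighbours until the
    shifts stall; points that stop on the same mode form one cluster."""
    shifted = points.copy()
    for _ in range(max_iter):
        new = np.empty_like(shifted)
        for i, p in enumerate(shifted):
            # flat kernel: average all original points within one bandwidth
            dist = np.linalg.norm(points - p, axis=1)
            new[i] = points[dist < bandwidth].mean(axis=0)
        if np.linalg.norm(new - shifted) < tol:
            shifted = new
            break
        shifted = new
    # merge converged positions that landed on the same mode
    labels = np.full(len(points), -1, dtype=int)
    modes = []
    for i, p in enumerate(shifted):
        for k, m in enumerate(modes):
            if np.linalg.norm(p - m) < bandwidth / 2:
                labels[i] = k
                break
        else:
            modes.append(p)
            labels[i] = len(modes) - 1
    return labels, np.array(modes)

# live points drawn around two well-separated likelihood maxima
rng = np.random.default_rng(0)
live = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(3.0, 0.1, (50, 2))])
labels, modes = mean_shift(live, bandwidth=0.5)
```

Once the live points are labelled this way, a random-walk search for new points can be confined to the cluster a walker starts in, which is the kind of use the abstract describes.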
Related papers
- Learning to Bound Counterfactual Inference in Structural Causal Models
from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Lattice-Based Methods Surpass Sum-of-Squares in Clustering [98.46302040220395]
Clustering is a fundamental primitive in unsupervised learning.
Recent work has established lower bounds against the class of low-degree methods.
We show that, perhaps surprisingly, this particular clustering model does not exhibit a statistical-to-computational gap.
arXiv Detail & Related papers (2021-12-07T18:50:17Z)
- Population based change-point detection for the identification of homozygosity islands [0.0]
We introduce a penalized maximum likelihood approach that can be efficiently computed by a dynamic programming algorithm or approximated by a fast greedy binary splitting algorithm.
We prove both algorithms converge almost surely to the set of change-points under very general assumptions on the distribution and independent sampling of the random vector.
This new approach is motivated by the problem of identifying homozygosity islands on the genome of individuals in a population.
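A minimal sketch of generic greedy binary splitting with a penalized least-squares segment cost; the specific cost function and penalty used by the paper are not reproduced here, so the Gaussian-mean cost and the penalty value below are assumptions:

```python
import numpy as np

def sse(x):
    """Cost of a segment: sum of squared deviations from its mean."""
    return float(((x - x.mean()) ** 2).sum()) if len(x) else 0.0

def greedy_binary_split(x, penalty):
    """Repeatedly insert the change-point that most reduces the
    penalised cost; stop splitting a segment once no candidate split
    pays for the penalty."""
    changepoints, stack = [], [(0, len(x))]
    while stack:
        a, b = stack.pop()
        if b - a < 2:
            continue
        total = sse(x[a:b])
        best_gain, best_t = 0.0, None
        for t in range(a + 1, b):
            gain = total - sse(x[a:t]) - sse(x[t:b]) - penalty
            if gain > best_gain:
                best_gain, best_t = gain, t
        if best_t is not None:
            changepoints.append(best_t)
            stack += [(a, best_t), (best_t, b)]
    return sorted(changepoints)

# piecewise-constant signal with true change-points at 100 and 200
rng = np.random.default_rng(1)
signal = np.concatenate([rng.normal(0, 1, 100),
                         rng.normal(5, 1, 100),
                         rng.normal(0, 1, 100)])
cps = greedy_binary_split(signal, penalty=25.0)
```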
arXiv Detail & Related papers (2021-11-19T12:53:41Z)
- Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
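The two ingredients, a proximal step and random reshuffling, can be sketched on a single machine with an l1-regularized least-squares finite sum; the distributed aspect of the paper's algorithm is omitted, and the step size, regularization, and problem instance are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_reshuffle(A, b, lam, lr=0.01, epochs=200, seed=0):
    """Minimise (1/2n) sum_i (a_i^T x - b_i)^2 + lam * ||x||_1 by
    sweeping the data in a fresh random order each epoch (random
    reshuffling) and applying the l1 prox after every gradient step."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):      # reshuffle once per epoch
            grad = (A[i] @ x - b[i]) * A[i]
            x = soft_threshold(x - lr * grad, lr * lam)
    return x

# sparse ground truth recovered from noisy linear measurements
rng = np.random.default_rng(2)
A = rng.normal(size=(200, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.normal(size=200)
x_hat = prox_grad_reshuffle(A, b, lam=0.01)
```

Reshuffling (sampling without replacement each epoch) is what distinguishes this from plain stochastic proximal gradient, which samples indices independently.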
arXiv Detail & Related papers (2021-11-06T07:29:55Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Improved Algorithms for Agnostic Pool-based Active Classification [20.12178157010804]
We consider active learning for binary classification in the agnostic pool-based setting.
Our algorithm is superior to state of the art active learning algorithms on image classification datasets.
arXiv Detail & Related papers (2021-05-13T18:24:30Z)
- Approximate Bayesian Computation of Bézier Simplices [13.105764669733093]
We extend the Bézier simplex model to a probabilistic one and propose a new learning algorithm for it.
An experimental evaluation on publicly available problem instances shows that the new algorithm converges on a finite sample.
arXiv Detail & Related papers (2021-04-10T04:20:19Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Spectral Clustering using Eigenspectrum Shape Based Nystrom Sampling [19.675277307158435]
This paper proposes a scalable Nystrom-based clustering algorithm with a new sampling procedure, Centroid Minimum Sum of Squared Similarities (CMS3), and a heuristic on when to use it.
Our method depends on the eigenspectrum shape of the dataset and yields competitive low-rank approximations in tests compared to other state-of-the-art methods.
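The generic Nystrom construction underlying such methods can be sketched as follows; this uses uniformly random landmarks rather than the CMS3 sampling procedure, and the RBF kernel and clustered toy data are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmark_idx, gamma=1.0):
    """Low-rank Nystrom approximation K ~= C W^+ C^T from landmarks."""
    C = rbf_kernel(X, X[landmark_idx], gamma)   # n x m cross-kernel
    W = C[landmark_idx]                         # m x m landmark kernel
    return C @ np.linalg.pinv(W) @ C.T

# two tight clusters give a kernel matrix that is nearly rank two,
# so a handful of landmarks already reproduce it well
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(3.0, 0.1, (50, 2))])
K_full = rbf_kernel(X, X)
landmarks = rng.choice(len(X), size=10, replace=False)
K_approx = nystrom(X, landmarks)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

A spectral clustering step would then run on the cheap eigendecomposition of this low-rank approximation instead of the full kernel matrix.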
arXiv Detail & Related papers (2020-07-21T17:49:03Z)
- Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
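A one-dimensional toy version of that idea, evaluating an expensive feature function only at stochastically sampled positions and densely reconstructing the rest by linear interpolation, might look like this (the paper operates on 2-D convolutional feature maps; the keep probability and the sine "feature map" here are assumptions):

```python
import numpy as np

def sparse_then_interpolate(x, feature_fn, keep_prob=0.3, seed=4):
    """Evaluate feature_fn only at a random subset of positions and
    fill in the rest by linear interpolation between the samples."""
    rng = np.random.default_rng(seed)
    n = len(x)
    mask = rng.random(n) < keep_prob
    mask[0] = mask[-1] = True          # anchor both end points
    idx = np.flatnonzero(mask)
    sparse_vals = feature_fn(x[idx])   # expensive call on ~30% of points
    return np.interp(np.arange(n), idx, sparse_vals)

x = np.linspace(0, 2 * np.pi, 200)
dense = np.sin(x)                      # ground-truth "feature map"
approx = sparse_then_interpolate(x, np.sin)
max_err = np.abs(approx - dense).max()
```

The computational saving comes from calling the feature function on roughly `keep_prob` of the positions; the reconstruction cost of interpolation is negligible by comparison.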
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.