MCMC-driven importance samplers
- URL: http://arxiv.org/abs/2105.02579v2
- Date: Sun, 9 May 2021 14:15:51 GMT
- Title: MCMC-driven importance samplers
- Authors: F. Llorente, E. Curbelo, L. Martino, V. Elvira, D. Delgado
- Abstract summary: We focus on LAIS, a class of adaptive importance samplers in which Markov chain Monte Carlo (MCMC) algorithms are employed to drive an underlying multiple importance sampling scheme.
The modular nature of LAIS allows for different choices in the upper and lower layers, which have different performance and computational costs.
Different variants are essential if we aim to address computational challenges arising in real-world applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monte Carlo methods are the standard procedure for estimating complicated
integrals of multidimensional Bayesian posterior distributions. In this work,
we focus on LAIS, a class of adaptive importance samplers where Markov chain
Monte Carlo (MCMC) algorithms are employed to drive an underlying multiple
importance sampling (IS) scheme. Its power lies in the simplicity of the
layered framework: the upper layer locates proposal densities by means of MCMC
algorithms, while the lower layer handles the multiple IS scheme in order to
compute the final estimators. The modular nature of LAIS allows for different
possible choices in the upper and lower layers, which will have different
performance and computational costs. In this work, we propose different
enhancements in order to increase the efficiency and reduce the computational
cost of both the upper and lower layers. The different variants are essential if
we aim to address computational challenges arising in real-world applications,
such as highly concentrated posterior distributions (due to large amounts of
data, etc.). Hamiltonian-driven importance samplers are presented and tested.
Furthermore, we introduce different strategies for designing cheaper schemes,
for instance, recycling samples generated in the upper layer and using them in
the final estimators in the lower layer. Numerical experiments show the
benefits of the proposed schemes as compared to the vanilla version of LAIS and
other benchmark methods.
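To make the layered structure concrete, here is a minimal, self-contained sketch of a vanilla LAIS-style sampler in Python. The random-walk upper layer, the isotropic Gaussian proposals, the toy target, and all names and defaults are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.special import logsumexp

def log_target(x):
    # Unnormalized log-posterior; a standard Gaussian stands in here.
    return -0.5 * np.dot(x, x)

def lais(T=500, M=10, dim=2, sigma_prop=1.0, sigma_mcmc=0.5, seed=0):
    rng = np.random.default_rng(seed)

    # Upper layer: a random-walk Metropolis chain; its states become the
    # location parameters (means) of the lower-layer proposal densities.
    mu = np.empty((T, dim))
    x, lp = np.zeros(dim), log_target(np.zeros(dim))
    for t in range(T):
        cand = x + sigma_mcmc * rng.standard_normal(dim)
        lp_cand = log_target(cand)
        if np.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        mu[t] = x

    # Lower layer: draw M samples around each chain state and weight them
    # against the full mixture of all proposals (multiple IS weights).
    z = (mu[:, None, :] + sigma_prop * rng.standard_normal((T, M, dim))).reshape(-1, dim)
    log_num = np.apply_along_axis(log_target, 1, z)
    sq = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)          # (T*M, T)
    log_den = (logsumexp(-0.5 * sq / sigma_prop**2, axis=1) - np.log(T)
               - 0.5 * dim * np.log(2 * np.pi * sigma_prop**2))
    logw = log_num - log_den                                          # log IS weights

    w = np.exp(logw - logsumexp(logw))            # self-normalized weights
    post_mean = (w[:, None] * z).sum(axis=0)      # IS estimate of the posterior mean
    logZ = logsumexp(logw) - np.log(len(logw))    # estimate of the normalizing constant
    return post_mean, logZ

print(lais())
```

The recycling strategy mentioned in the abstract would, roughly, reuse the chain states `mu` themselves as additional weighted samples in the lower layer, saving the extra sampling step; the details above are only a sketch of the vanilla scheme.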
Related papers
- Accelerating Multilevel Markov Chain Monte Carlo Using Machine Learning Models
We present an efficient approach for accelerating multilevel Markov Chain Monte Carlo (MCMC) sampling for large-scale problems.
We use low-fidelity machine learning models for inexpensive evaluation of proposed samples.
Our technique is demonstrated on a standard benchmark inference problem in groundwater flow.
arXiv Detail & Related papers (2024-05-18T05:13:11Z)
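As a rough illustration of the screening idea in the entry above, here is a generic two-stage (delayed-acceptance) Metropolis sketch in which a cheap surrogate filters candidates before the expensive model is evaluated; the toy densities and all names are our assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # "Expensive" fine model (a banana-shaped toy density here).
    return -0.5 * (x[0]**2 + (x[1] - x[0]**2)**2 / 0.5)

def log_surrogate(x):
    # Cheap low-fidelity stand-in (a trained ML model in the paper's setting).
    return -0.5 * (x[0]**2 + x[1]**2)

def two_stage_mh(n_steps=2000, scale=0.7):
    x = np.zeros(2)
    ls, lt = log_surrogate(x), log_target(x)
    chain, fine_evals = [], 0
    for _ in range(n_steps):
        cand = x + scale * rng.standard_normal(2)
        ls_c = log_surrogate(cand)
        # Stage 1: screen the candidate using only the cheap model.
        if rng.random() < min(1.0, np.exp(ls_c - ls)):
            # Stage 2: delayed-acceptance correction with the expensive model,
            # so the chain still targets the fine posterior exactly.
            lt_c = log_target(cand)
            fine_evals += 1
            if rng.random() < min(1.0, np.exp((lt_c - lt) - (ls_c - ls))):
                x, ls, lt = cand, ls_c, lt_c
        chain.append(x.copy())
    return np.array(chain), fine_evals

chain, n_fine = two_stage_mh()
print(len(chain), n_fine)   # far fewer fine-model evaluations than steps
```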
- LRP-QViT: Mixed-Precision Vision Transformer Quantization via Layer-wise Relevance Propagation
We introduce LRP-QViT, an explainability-based method for assigning mixed-precision bit allocations to different layers based on their importance during classification.
Our experimental findings demonstrate that both our fixed-bit and mixed-bit post-training quantization methods surpass existing models in the context of 4-bit and 6-bit quantization.
arXiv Detail & Related papers (2024-01-20T14:53:19Z)
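A toy sketch of the general mixed-precision recipe (ours, not the paper's method): rank layers by an importance score and keep more bits for the most important ones. In LRP-QViT the scores come from layer-wise relevance propagation; here they are made-up numbers and the allocation rule is a deliberately naive stand-in.

```python
import numpy as np

def allocate_bits(relevance, high=6, low=4, frac_high=0.5):
    """Assign `high` bits to the most relevant layers, `low` bits to the rest."""
    order = np.argsort(relevance)[::-1]        # most relevant first
    n_high = int(len(relevance) * frac_high)
    bits = np.full(len(relevance), low)
    bits[order[:n_high]] = high
    return bits

relevance = np.array([0.9, 0.1, 0.4, 0.7, 0.2])   # per-layer scores (made up)
print(allocate_bits(relevance))                    # -> [6 4 4 6 4]
```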
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
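A minimal illustration of the BMU-labeling idea, using a tiny hand-rolled SOM; the grid size, training schedule, and data below are our assumptions, and the paper's actual setup will differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=5, epochs=20, lr0=0.5, sigma0=2.0):
    """Tiny online SOM trainer (our simplification)."""
    W = rng.standard_normal((grid * grid, X.shape[1]))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    T, t = epochs * len(X), 0
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(1))
            lr = lr0 * (1 - t / T)
            sig = sigma0 * (1 - t / T) + 1e-3
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sig**2))
            W += lr * h[:, None] * (x - W)    # pull the neighborhood toward x
            t += 1
    return W

# Unlabeled data from two blobs; only 4 labeled points are available.
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
W = train_som(X)
X_lab = np.array([[0.0, 0.0], [0.5, -0.5], [5.0, 5.0], [4.5, 5.5]])
y_lab = np.array([0, 0, 1, 1])

# Assign each labeled point's class to its best matching unit (BMU)...
bmu_label = {}
for x, y in zip(X_lab, y_lab):
    bmu_label[np.argmin(((W - x) ** 2).sum(1))] = y

# ...then classify new points via the nearest labeled unit.
def predict(x):
    units = np.array(list(bmu_label))
    nearest = units[np.argmin(((W[units] - x) ** 2).sum(1))]
    return bmu_label[nearest]

print(predict(np.array([5.2, 4.8])))   # expected: 1
```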
- ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models
Large Vision-Language Models (LVLMs) can understand the world comprehensively by integrating rich information from different modalities.
However, LVLMs are often impractical to deploy due to their massive computational/energy costs and carbon footprint.
We propose Efficient Coarse-to-Fine Layer-Wise Pruning (ECoFLaP), a two-stage coarse-to-fine weight pruning approach for LVLMs.
arXiv Detail & Related papers (2023-10-04T17:34:00Z)
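A crude sketch of the two-stage coarse-to-fine pattern (our construction, not ECoFLaP itself): a global budget is first split into per-layer sparsity ratios from importance proxies, then each layer is magnitude-pruned locally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a few weight matrices with per-layer importance proxies.
layers = [rng.standard_normal((64, 64)) for _ in range(4)]
importance = np.array([1.0, 0.2, 0.5, 0.1])  # stand-in for global layer scores

# Stage 1 (coarse): give less-important layers higher sparsity, subject to a
# global budget (approximately preserved after clipping).
global_sparsity = 0.5
inv = 1.0 / importance
ratios = np.clip(global_sparsity * inv * len(inv) / inv.sum(), 0.0, 0.95)

# Stage 2 (fine): within each layer, remove the smallest-magnitude weights.
pruned = []
for W, r in zip(layers, ratios):
    k = int(r * W.size)
    thresh = np.partition(np.abs(W).ravel(), k)[k] if k > 0 else 0.0
    pruned.append(np.where(np.abs(W) < thresh, 0.0, W))

for i, (W, r) in enumerate(zip(pruned, ratios)):
    print(f"layer {i}: target sparsity {r:.2f}, actual {np.mean(W == 0):.2f}")
```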
- Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo
We present a scalable and effective exploration strategy based on Thompson sampling for reinforcement learning (RL).
We instead directly sample the Q function from its posterior distribution by using Langevin Monte Carlo.
Our approach achieves better or similar results compared with state-of-the-art deep RL algorithms on several challenging exploration tasks from the Atari57 suite.
arXiv Detail & Related papers (2023-05-29T17:11:28Z)
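The paper applies this to deep RL with Q-function posteriors; the bandit-scale toy below (entirely our construction) shows the core mechanism, namely approximating Thompson sampling by drawing posterior samples with Langevin dynamics instead of sampling the posterior exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([0.2, 0.5, 0.8])   # unknown arm means (toy bandit)
K = len(true_means)
counts, sums = np.zeros(K), np.zeros(K)

def langevin_sample(post_mean, n, steps=50):
    """Approximate posterior sample via unadjusted Langevin dynamics.
    With a N(0,1) prior and unit observation noise, an arm's posterior is
    N(post_mean, 1/(n+1)), so grad log p(t) = -(n+1) * (t - post_mean)."""
    eta = 1.0 / (n + 2)                   # step size shrinks as the posterior concentrates
    theta = post_mean
    for _ in range(steps):
        grad = -(n + 1) * (theta - post_mean)
        theta += 0.5 * eta * grad + np.sqrt(eta) * rng.standard_normal()
    return theta

for _ in range(2000):
    post_mean = sums / (counts + 1.0)                  # posterior mean under N(0,1) prior
    thetas = [langevin_sample(post_mean[a], counts[a]) for a in range(K)]
    a = int(np.argmax(thetas))                         # Thompson step: act greedily
    sums[a] += true_means[a] + rng.standard_normal()   # observe a noisy reward
    counts[a] += 1

print(counts)   # pulls should concentrate on the best arm (index 2)
```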
- A multilevel reinforcement learning framework for PDE based control
Reinforcement learning (RL) is a promising method to solve control problems.
However, model-free RL algorithms are sample-inefficient and require thousands if not millions of samples to learn optimal control policies.
We propose a multilevel RL framework in order to ease this cost by exploiting sublevel models that correspond to coarser scale discretization.
arXiv Detail & Related papers (2022-10-15T23:52:48Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
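For reference, the over-sampling primitive that AutoSMOTE builds on is classical SMOTE interpolation; a minimal version (our sketch, not the paper's learned variant) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def smote(X_min, n_new, k=5):
    """Basic SMOTE step: interpolate between a minority sample and one of its
    k nearest minority neighbors. AutoSMOTE learns how to make these choices;
    this is only the underlying primitive."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = ((X_min - X_min[i]) ** 2).sum(1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                  # random interpolation coefficient
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = rng.normal(0, 1, (20, 2))
print(smote(X_minority, 5).shape)   # (5, 2) synthetic minority samples
```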
- Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo sampling
We develop a novel approach to producing more sample-efficient estimators of expectations in the Plackett-Luce (PL) model.
We illustrate our findings both theoretically and empirically using real-world recommendation data from Amazon Music and the Yahoo learning-to-rank challenge.
arXiv Detail & Related papers (2022-05-12T11:15:47Z)
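A generic way to combine quasi-Monte Carlo with Plackett-Luce sampling (our sketch of the pipeline, not the paper's estimator) is to map Sobol' points through the Gumbel-max trick:

```python
import numpy as np
from scipy.stats import qmc

scores = np.array([2.0, 1.0, 0.5, 0.25])   # PL "worth" parameters (toy values)
n = 1024                                    # power of 2, as Sobol' prefers

# QMC uniforms -> Gumbel noise -> PL permutations (Gumbel-max trick):
# sorting log(scores) + Gumbel noise in decreasing order yields a ranking
# distributed according to the Plackett-Luce model.
u = qmc.Sobol(d=len(scores), scramble=True, seed=0).random(n)
u = np.clip(u, 1e-12, 1 - 1e-12)
gumbel = -np.log(-np.log(u))
perms = np.argsort(-(np.log(scores) + gumbel), axis=1)  # each row: one ranking

# Example expectation: the average rank of item 0 under the PL model.
ranks_of_item0 = np.argmax(perms == 0, axis=1)
print(ranks_of_item0.mean())
```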
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration, yet it guarantees fast convergence rates on par with more sophisticated variance-reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
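A simplified sketch of the one-sample idea (ours; the paper's composite setting and guarantees are more general). We illustrate on the classical constrained CGM setting: a Frank-Wolfe loop over an l1-ball that refreshes a single per-sample gradient each iteration and steps along the SAG-style running average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Least-squares finite sum over an l1-ball, as a toy problem.
n, d, radius = 200, 50, 5.0
A = rng.standard_normal((n, d))
x_true = np.zeros(d); x_true[:5] = [3, -2, 1.5, -1, 0.5]
b = A @ x_true + 0.1 * rng.standard_normal(n)

def lmo_l1(g, r):
    """Linear minimization oracle for the l1-ball: a signed vertex."""
    s = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    s[i] = -r * np.sign(g[i])
    return s

x = np.zeros(d)
grads = np.zeros((n, d))            # per-sample gradient table
avg = np.zeros(d)                   # running average of the table
for t in range(1, 5001):
    i = rng.integers(n)             # ONE sample per iteration
    g_i = A[i] * (A[i] @ x - b[i])  # grad of 0.5 * (a_i^T x - b_i)^2
    avg += (g_i - grads[i]) / n     # SAG-style average update
    grads[i] = g_i
    gamma = 2.0 / (t + 2)           # standard Frank-Wolfe step size
    x += gamma * (lmo_l1(avg, radius) - x)

print(0.5 * np.mean((A @ x - b) ** 2))   # final objective value
```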
- A Survey of Monte Carlo Methods for Parameter Estimation
This paper reviews Monte Carlo (MC) methods for the estimation of static parameters in signal processing applications.
A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm.
arXiv Detail & Related papers (2021-07-25T14:57:58Z)
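Since the survey covers the basic rejection sampling (RS) algorithm, here is a textbook rejection sampler for a toy Beta(2, 2) target (our example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_sample(n):
    """Basic rejection sampling: draw x ~ q, accept with prob p(x) / (M * q(x)).
    Target p: Beta(2, 2); proposal q: Uniform(0, 1); envelope constant M = 1.5,
    which bounds p since max_x 6x(1-x) = 1.5."""
    out = []
    while len(out) < n:
        x = rng.random()                  # proposal draw, q(x) = 1
        p = 6.0 * x * (1.0 - x)           # Beta(2, 2) density
        if rng.random() < p / 1.5:        # accept with prob p / (M * q)
            out.append(x)
    return np.array(out)

samples = rejection_sample(10_000)
print(samples.mean())   # ~0.5, the Beta(2, 2) mean
```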
- Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints
Constr-DRKM is a deep kernel method for the unsupervised learning of disentangled data representations.
We quantitatively evaluate the proposed method's effectiveness in disentangled feature learning.
arXiv Detail & Related papers (2020-11-25T11:40:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.