Tuning diagonal scale matrices for HMC
- URL: http://arxiv.org/abs/2403.07495v1
- Date: Tue, 12 Mar 2024 10:35:40 GMT
- Title: Tuning diagonal scale matrices for HMC
- Authors: Jimmy Huy Tran and Tore Selland Kleppe
- Abstract summary: Three approaches for adaptively tuning diagonal scale matrices for HMC are discussed and compared.
The common practice of scaling according to estimated marginal standard deviations is taken as a benchmark.
Scaling according to the mean log-target gradient (ISG), and a scaling method targeting that the frequency with which the underlying Hamiltonian dynamics crosses the respective medians should be uniform across dimensions, are taken as alternatives.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Three approaches for adaptively tuning diagonal scale matrices for HMC are
discussed and compared. The common practice of scaling according to estimated
marginal standard deviations is taken as a benchmark. Scaling according to the
mean log-target gradient (ISG), and a scaling method targeting that the
frequency with which the underlying Hamiltonian dynamics crosses the respective
medians should be uniform across dimensions, are taken as alternatives.
Numerical studies suggest that the ISG method leads in many cases to more
efficient sampling than the benchmark, in particular in cases with strong
correlations or non-linear dependencies. The ISG method is also easy to
implement, computationally cheap and would be relatively simple to include in
automatically tuned codes as an alternative to the benchmark practice.
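The ISG idea admits a compact illustration. As a minimal, non-authoritative sketch (assuming ISG sets each scale to the inverse root-mean-square of the corresponding log-target gradient component; the paper's adaptive estimator may differ in detail), note that this choice recovers the marginal standard deviations exactly for independent Gaussian targets:

```python
import numpy as np

def isg_style_scales(grad_log_target, samples):
    """Diagonal scales from mean squared log-target gradients.

    Sets s_i = E[(d/dx_i log pi(x))^2]^{-1/2}, estimated over samples.
    Illustrative sketch only: the paper's ISG method adapts the scales
    on the fly during warm-up rather than from a fixed sample set.
    """
    grads = np.array([grad_log_target(x) for x in samples])  # shape (n, d)
    mean_sq_grad = (grads**2).mean(axis=0)                   # E[g_i^2]
    return 1.0 / np.sqrt(mean_sq_grad)

# For x ~ N(0, diag(sigma^2)), grad log pi(x) = -x / sigma^2, so the
# recovered scales match the marginal standard deviations sigma.
sigma = np.array([0.5, 2.0, 10.0])
rng = np.random.default_rng(0)
xs = rng.normal(0.0, sigma, size=(100_000, 3))
print(isg_style_scales(lambda x: -x / sigma**2, xs))  # ~ [0.5, 2.0, 10.0]
```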
Related papers
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
arXiv Detail & Related papers (2024-11-26T15:35:44Z) - Linear-cost unbiased posterior estimates for crossed effects and matrix factorization models via couplings [0.0]
We design and analyze unbiased Markov chain Monte Carlo schemes based on couplings of blocked Gibbs samplers (BGSs)
Our methodology is designed for and applicable to high-dimensional BGS with conditionally independent blocks.
arXiv Detail & Related papers (2024-10-11T16:05:01Z) - Compound Batch Normalization for Long-tailed Image Classification [77.42829178064807]
We propose a compound batch normalization method based on a Gaussian mixture.
It can model the feature space more comprehensively and reduce the dominance of head classes.
The proposed method outperforms existing methods on long-tailed image classification.
arXiv Detail & Related papers (2022-12-02T07:31:39Z) - Preferential Subsampling for Stochastic Gradient Langevin Dynamics [3.158346511479111]
Stochastic gradient MCMC offers an unbiased estimate of the gradient of the log-posterior from a small, uniformly-weighted subsample of the data.
The resulting gradient estimator may exhibit high variance and degrade sampler performance.
We demonstrate that preferentially subsampling the data (weighting data points non-uniformly) can maintain the same level of accuracy while substantially reducing the average subsample size.
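As background for the baseline this paper improves on, here is a minimal sketch of the uniformly-weighted subsample gradient estimator and one SGLD step; `grad_log_prior` and `grad_log_lik` are hypothetical user-supplied callables, and the paper's preferential scheme would replace the uniform index draw with a non-uniform one:

```python
import numpy as np

def subsample_grad(theta, data, grad_log_prior, grad_log_lik, n_sub, rng):
    """Unbiased log-posterior gradient from a uniform subsample.

    Averaged over subsamples, this equals grad_log_prior(theta) plus
    the sum of grad_log_lik over the full dataset. Both callables are
    hypothetical placeholders for a concrete model.
    """
    N = len(data)
    idx = rng.choice(N, size=n_sub, replace=False)  # uniform, unweighted
    lik = sum(grad_log_lik(theta, data[i]) for i in idx)
    return grad_log_prior(theta) + (N / n_sub) * lik

def sgld_step(theta, data, grad_log_prior, grad_log_lik, n_sub, eps, rng):
    """One stochastic gradient Langevin dynamics update."""
    g = subsample_grad(theta, data, grad_log_prior, grad_log_lik, n_sub, rng)
    return theta + 0.5 * eps * g + np.sqrt(eps) * rng.standard_normal(theta.shape)
```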
arXiv Detail & Related papers (2022-10-28T14:56:18Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediate distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
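A rough, non-authoritative sketch of the one-sample idea (pairing a Frank-Wolfe step with a SAG-style gradient table, which may differ from the paper's exact construction; `grad_i` and `lmo` are hypothetical user-supplied callables):

```python
import numpy as np

def sag_frank_wolfe(grad_i, n, lmo, x0, n_iters, rng):
    """Frank-Wolfe with a stochastic average gradient (SAG) estimator.

    A table stores the last-seen gradient of each of the n summands;
    each iteration refreshes ONE entry (one sample) and uses the table
    average as the gradient estimate. Illustrative sketch only.
    """
    x = x0.astype(float).copy()
    table = np.zeros((n, x.size))       # last-seen per-summand gradients
    avg = np.zeros(x.size)              # running average of the table
    for t in range(1, n_iters + 1):
        i = rng.integers(n)             # one sampled summand per iteration
        g = grad_i(x, i)
        avg += (g - table[i]) / n       # O(d) update of the average
        table[i] = g
        s = lmo(avg)                    # linear minimization oracle
        x += (2.0 / (t + 2)) * (s - x)  # standard Frank-Wolfe step size
    return x
```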
arXiv Detail & Related papers (2022-02-26T19:10:48Z) - Stochastic Gradient MCMC with Multi-Armed Bandit Tuning [2.2559617939136505]
We propose a novel bandit-based algorithm that tunes SGMCMC hyperparameters to maximize the accuracy of the posterior approximation.
We support our results with experiments on both simulated and real datasets, and find that this method is practical for a wide range of application areas.
arXiv Detail & Related papers (2021-05-27T11:00:31Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - An adaptive Hessian approximated stochastic gradient MCMC method [12.93317525451798]
We present an adaptive Hessian approximated stochastic gradient MCMC method to incorporate local geometric information while sampling from the posterior.
We adopt a magnitude-based weight pruning method to enforce the sparsity of the network.
arXiv Detail & Related papers (2020-10-03T16:22:15Z) - Improving Sampling Accuracy of Stochastic Gradient MCMC Methods via
Non-uniform Subsampling of Gradients [54.90670513852325]
We propose a non-uniform subsampling scheme to improve the sampling accuracy.
EWSG is designed so that a non-uniform stochastic-gradient MCMC method mimics the statistical behavior of a batch-gradient MCMC method.
In our practical implementation of EWSG, the non-uniform subsampling is performed efficiently via a Metropolis-Hastings chain on the data index.
arXiv Detail & Related papers (2020-02-20T18:56:18Z)