LITE: Efficiently Estimating Gaussian Probability of Maximality
- URL: http://arxiv.org/abs/2501.13535v2
- Date: Sat, 15 Feb 2025 16:41:42 GMT
- Title: LITE: Efficiently Estimating Gaussian Probability of Maximality
- Authors: Nicolas Menet, Jonas Hübotter, Parnian Kassraie, Andreas Krause
- Abstract summary: We consider the problem of computing the probability of maximality (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with almost-linear time and memory.
- Score: 38.38128700457187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of computing the probability of maximality (PoM) of a Gaussian random vector, i.e., the probability for each dimension to be maximal. This is a key challenge in applications ranging from Bayesian optimization to reinforcement learning, where the PoM not only helps with finding an optimal action, but yields a fine-grained analysis of the action domain, crucial in tasks such as drug discovery. Existing techniques are costly, scaling polynomially in computation and memory with the vector size. We introduce LITE, the first approach for estimating Gaussian PoM with almost-linear time and memory complexity. LITE achieves SOTA accuracy on a number of tasks, while being in practice several orders of magnitude faster than the baselines. This also translates to a better performance on downstream tasks such as entropy estimation and optimal control of bandits. Theoretically, we cast LITE as entropy-regularized UCB and connect it to prior PoM estimators.
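As context for the quantity being estimated (not the LITE algorithm itself), PoM can be approximated by brute-force Monte Carlo: draw vectors from the Gaussian and count how often each dimension comes out on top. A minimal sketch, with illustrative function names:

```python
import numpy as np

def pom_monte_carlo(mean, cov, n_samples=100_000, seed=0):
    """Estimate the probability of maximality (PoM) of a Gaussian vector
    as the fraction of draws in which each dimension is the largest.
    Each draw touches every dimension, unlike LITE's almost-linear method."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    winners = samples.argmax(axis=1)                      # index of max per draw
    counts = np.bincount(winners, minlength=len(mean))
    return counts / n_samples                             # PoM per dimension

# Independent coordinates with increasing means: PoM mass shifts to the last.
probs = pom_monte_carlo(np.array([0.0, 0.5, 1.0]), np.eye(3))
```

The PoM entries sum to one by construction, which is what makes them usable as a fine-grained ranking over the action domain.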
Related papers
- Rethinking Approximate Gaussian Inference in Classification [25.021782278452005]
In classification tasks, softmax functions are ubiquitously used to produce predictive probabilities.
We propose a simple change in the learning objective which allows the exact computation of predictives.
We evaluate our approach combined with several approximate Gaussian inference methods on large- and small-scale datasets.
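For contrast with the exact predictives the paper derives, the usual costly alternative is a Monte Carlo average of the softmax under a Gaussian over the logits. A simplified sketch assuming an independent (factorized) Gaussian, with illustrative names:

```python
import numpy as np

def mc_softmax_predictive(logit_mean, logit_var, n_samples=50_000, seed=0):
    """Average the softmax over samples from an independent Gaussian on the
    logits -- the sampling-based approximation exact predictives replace."""
    rng = np.random.default_rng(seed)
    logits = rng.normal(logit_mean, np.sqrt(logit_var),
                        size=(n_samples, len(logit_mean)))
    z = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return (z / z.sum(axis=1, keepdims=True)).mean(axis=0)

p = mc_softmax_predictive(np.array([0.0, 1.0, 2.0]), np.array([1.0, 1.0, 1.0]))
```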
arXiv Detail & Related papers (2025-02-05T17:03:49Z)
- Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
arXiv Detail & Related papers (2025-01-15T04:09:21Z)
- Training normalizing flows with computationally intensive target probability distributions [0.018416014644193065]
We propose an estimator for normalizing flows based on the REINFORCE algorithm.
It is up to ten times faster in terms of wall-clock time and requires up to 30% less memory.
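The REINFORCE (score-function) trick such an estimator builds on can be illustrated on a toy Gaussian, independently of normalizing flows. A generic sketch, not the paper's estimator:

```python
import numpy as np

def reinforce_gradient(theta, f, n_samples=20_000, seed=0):
    """Score-function estimate of d/dtheta E_{x ~ N(theta, 1)}[f(x)], i.e.
    E[f(x) * d/dtheta log q(x | theta)], where d/dtheta log q = (x - theta).
    No gradient of f is needed -- the appeal for expensive target densities."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, 1.0, size=n_samples)
    return np.mean(f(x) * (x - theta))

# For f(x) = x**2, E[x^2] = theta^2 + 1, so the true gradient is 2 * theta.
g = reinforce_gradient(1.0, lambda x: x**2)
```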
arXiv Detail & Related papers (2023-08-25T10:40:46Z)
- Gaussian Max-Value Entropy Search for Multi-Agent Bayesian Optimization [4.8244546750150965]
We study the multi-agent Bayesian optimization (BO) problem, where multiple agents maximize a black-box function via iterative queries.
One of the main challenges of Entropy Search (ES) is that calculating the mutual information requires computationally-costly approximation techniques.
We propose the Gaussian Max-value Entropy Search, a multi-agent BO algorithm with favorable sample and computational efficiency.
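A core ingredient of max-value entropy search is the distribution of the posterior maximum, whose entropy reduction guides queries. A simplified sketch that assumes an independent Gaussian posterior over candidate points (real GP posteriors are correlated):

```python
import numpy as np

def sample_posterior_max(mu, sigma, n_samples=10_000, seed=0):
    """Draw samples of max_i f(x_i) under an independent Gaussian posterior
    with means mu and standard deviations sigma over candidate points."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    return draws.max(axis=1)   # one max-value sample per posterior draw

max_samples = sample_posterior_max(np.array([0.0, 1.0]), np.array([1.0, 1.0]))
```

Even this toy version shows why the max-value distribution is nontrivial: its mean exceeds the largest posterior mean, since every candidate contributes upside.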
arXiv Detail & Related papers (2023-03-10T04:14:30Z)
- Estimating a potential without the agony of the partition function [5.994412766684842]
Estimating a Gibbs density function given a sample is an important problem in computational statistics and statistical learning.
We propose an alternative approach based on Maximum A-Posteriori (MAP) estimators.
arXiv Detail & Related papers (2022-08-19T16:27:02Z)
- Beyond EM Algorithm on Over-specified Two-Component Location-Scale Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z)
- Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation for Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.