Estimating a potential without the agony of the partition function
- URL: http://arxiv.org/abs/2208.09433v1
- Date: Fri, 19 Aug 2022 16:27:02 GMT
- Title: Estimating a potential without the agony of the partition function
- Authors: Eldad Haber, Moshe Eliasof, Luis Tenorio
- Abstract summary: Estimating a Gibbs density function given a sample is an important problem in computational statistics and statistical learning.
We propose an alternative approach based on Maximum A-Posteriori (MAP) estimators.
- Score: 5.994412766684842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating a Gibbs density function given a sample is an important problem in
computational statistics and statistical learning. Although the well
established maximum likelihood method is commonly used, it requires the
computation of the partition function (i.e., the normalization of the density).
This function can be easily calculated for simple low-dimensional problems
but its computation is difficult or even intractable for general densities and
high-dimensional problems. In this paper we propose an alternative approach
based on Maximum A-Posteriori (MAP) estimators, which we name Maximum Recovery
MAP (MR-MAP), to derive estimators that do not require the computation of the
partition function, and reformulate the problem as an optimization problem. We
further propose a least-action type potential that allows us to quickly solve
the optimization problem as a feed-forward hyperbolic neural network. We
demonstrate the effectiveness of our methods on some standard data sets.
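As the abstract notes, the partition function is easy to compute in simple low-dimensional settings but intractable in general. A minimal sketch of the easy 1-D case is below, using a hypothetical quadratic potential chosen purely for illustration (it is not from the paper); the integral it computes by quadrature is exactly the quantity MR-MAP is designed to avoid.

```python
import numpy as np

# Gibbs density: p(x) = exp(-phi(x)) / Z, where the partition function
# Z = ∫ exp(-phi(x)) dx normalizes the density. The quadratic potential
# phi below is a hypothetical stand-in (not from the paper); with it,
# p is the standard Gaussian and Z = sqrt(2*pi).

def phi(x):
    return 0.5 * x**2

# In 1-D, Z is cheap to approximate on a grid (the tails beyond |x|=10
# contribute on the order of exp(-50) and are negligible).
xs = np.linspace(-10.0, 10.0, 100_001)
dx = xs[1] - xs[0]
Z = np.sum(np.exp(-phi(xs))) * dx

print(Z)  # close to sqrt(2*pi) ≈ 2.5066
```

A comparable grid in d dimensions would need 100_001**d points, which is why maximum-likelihood estimation, whose objective involves Z, becomes impractical for general high-dimensional densities and why the MR-MAP estimators above sidestep the partition function entirely.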
Related papers
- Projecting basis functions with tensor networks for Gaussian process
regression [5.482420806459269]
We develop an approach that allows us to use an exponential amount of basis functions without the corresponding exponential computational complexity.
We project the resulting weights back to the original space to make GP predictions.
In an experiment with an 18-dimensional benchmark data set, we show the applicability of our method to an inverse dynamics problem.
arXiv Detail & Related papers (2023-10-31T16:59:07Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Computational-Statistical Gaps in Reinforcement Learning [23.517741855454044]
We show a reduction from Unique-Sat, converting a CNF formula into an MDP with deterministic transitions, a constant number of actions, and low-dimensional linear optimal value functions.
This result also exhibits the first computational-statistical gap in reinforcement learning with linear function approximation.
arXiv Detail & Related papers (2022-02-11T04:48:35Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- First Order Methods take Exponential Time to Converge to Global Minimizers of Non-Convex Functions [28.776950569604026]
In this work, we provide bounds on the fundamental hardness of optimizing a non-convex function.
We show that the parameter estimation problem is equivalent to the problem of family identification.
arXiv Detail & Related papers (2020-02-28T18:28:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.