Importance Weighting Approach in Kernel Bayes' Rule
- URL: http://arxiv.org/abs/2202.02474v1
- Date: Sat, 5 Feb 2022 03:06:59 GMT
- Title: Importance Weighting Approach in Kernel Bayes' Rule
- Authors: Liyuan Xu, Yutian Chen, Arnaud Doucet, Arthur Gretton
- Abstract summary: We study a nonparametric approach to Bayesian computation via feature means, where the expectation of prior features is updated to yield expected posterior features.
All quantities involved in the Bayesian update are learned from observed data, making the method entirely model-free.
Our approach is based on importance weighting, which yields superior numerical stability compared with the existing approach to KBR.
- Score: 43.221685127485735
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a nonparametric approach to Bayesian computation via feature means,
where the expectation of prior features is updated to yield expected posterior
features, based on regression from kernel or neural net features of the
observations. All quantities involved in the Bayesian update are learned from
observed data, making the method entirely model-free. The resulting algorithm
is a novel instance of a kernel Bayes' rule (KBR). Our approach is based on
importance weighting, which results in superior numerical stability compared with
the existing approach to KBR, which requires operator inversion. We show the
convergence of the estimator using a novel consistency analysis on the
importance weighting estimator in the infinity norm. We evaluate our KBR on
challenging synthetic benchmarks, including a filtering problem with a
state-space model involving high dimensional image observations. The proposed
method yields uniformly better empirical performance than the existing KBR, and
performance competitive with alternative methods.
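The core importance-weighting idea can be illustrated with a self-normalized importance sampling sketch. Note this is a simplified illustration with a hypothetical Gaussian prior and a known likelihood; in the paper, all quantities in the update are learned from data via kernel or neural net features rather than assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: prior x ~ N(0, 1), observation y = x + N(0, 0.5^2).
prior_samples = rng.normal(0.0, 1.0, size=5000)
y_obs = 1.2
noise_std = 0.5

# Importance weights proportional to the likelihood p(y_obs | x),
# computed in log space for numerical stability, then self-normalized.
log_w = -0.5 * ((y_obs - prior_samples) / noise_std) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Expected posterior features for phi(x) = (x, x^2): the prior feature
# expectations are reweighted, with no operator inversion required.
post_mean = np.sum(w * prior_samples)         # conjugate answer: 0.96
post_second = np.sum(w * prior_samples ** 2)  # conjugate answer: ~1.12
```

Here the weighted feature averages approximate the closed-form conjugate-Gaussian posterior moments, which is the sense in which expected prior features are updated to expected posterior features.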
Related papers
- Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm [11.024396385514864]
We consider kernel-based function approximation for RL in the infinite-horizon average-reward setting.
We propose an optimistic algorithm, similar to acquisition function based algorithms in the special case of bandits.
arXiv Detail & Related papers (2024-10-30T23:04:10Z)
- A variational Bayes approach to debiased inference for low-dimensional parameters in high-dimensional linear regression [2.7498981662768536]
We propose a scalable variational Bayes method for statistical inference in sparse linear regression.
Our approach relies on assigning a mean-field approximation to the nuisance coordinates.
This requires only a preprocessing step and preserves the computational advantages of mean-field variational Bayes.
arXiv Detail & Related papers (2024-06-18T14:27:44Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- A Mean Field Approach to Empirical Bayes Estimation in High-dimensional Linear Regression [8.345523969593492]
We study empirical Bayes estimation in high-dimensional linear regression.
We adopt a variational empirical Bayes approach, introduced originally in Carbonetto and Stephens (2012) and Kim et al. (2022).
This provides the first rigorous empirical Bayes method in a high-dimensional regression setting without sparsity.
arXiv Detail & Related papers (2023-09-28T20:51:40Z)
- Bayesian Cramér-Rao Bound Estimation with Score-Based Models [3.4480437706804503]
The Bayesian Cramér-Rao bound (CRB) provides a lower bound on the mean square error of any Bayesian estimator under mild regularity conditions.
This work introduces a new data-driven estimator for the CRB using score matching.
arXiv Detail & Related papers (2023-09-28T00:22:21Z)
- A Bayesian Approach to Robust Inverse Reinforcement Learning [54.24816623644148]
We consider a Bayesian approach to offline model-based inverse reinforcement learning (IRL).
The proposed framework differs from existing offline model-based IRL approaches by performing simultaneous estimation of the expert's reward function and subjective model of environment dynamics.
Our analysis reveals a novel insight that the estimated policy exhibits robust performance when the expert is believed to have a highly accurate model of the environment.
arXiv Detail & Related papers (2023-09-15T17:37:09Z)
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
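For context, this framework builds on the classic RANSAC loop of hypothesize-and-verify. A minimal vanilla RANSAC line fit is sketched below; this is the standard baseline, not the paper's learned attention-based variant, and the function name and thresholds are illustrative.

```python
import numpy as np

def ransac_line(points, n_iters=200, thresh=0.1, rng=None):
    """Fit y = a*x + b by classic RANSAC: repeatedly sample a minimal
    set (2 points), hypothesize a line, and keep the hypothesis with
    the largest consensus set (points with residual below `thresh`)."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):  # degenerate sample, no unique line
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        n_inliers = int((residuals < thresh).sum())
        if n_inliers > best_inliers:
            best_model, best_inliers = (a, b), n_inliers
    return best_model, best_inliers
```

The paper's contribution replaces the blind uniform sampling in this loop with an exploration strategy informed by the residuals seen so far.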
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Stochastic Optimization of Areas Under Precision-Recall Curves with Provable Convergence [66.83161885378192]
Area under ROC (AUROC) and precision-recall curves (AUPRC) are common metrics for evaluating classification performance for imbalanced problems.
We propose a stochastic optimization method for maximizing AUPRC in deep learning, with provable convergence guarantees.
arXiv Detail & Related papers (2021-04-18T06:22:21Z)
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on nonparametric isotonic regression and is implemented via the pool-adjacent-violators (PAV) algorithm.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
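The PAV algorithm underlying CORP admits a compact implementation. Below is a minimal sketch of least-squares isotonic (non-decreasing) regression via pool-adjacent-violators; the full CORP recipe adds optimal binning and reliability-diagram construction on top, which is not shown here.

```python
import numpy as np

def pav(y):
    """Pool-adjacent-violators: non-decreasing least-squares fit to y.
    Whenever a new value breaks monotonicity, merge it into the previous
    block and replace both blocks by their pooled mean."""
    sums, counts = [], []
    for v in np.asarray(y, dtype=float):
        sums.append(v)
        counts.append(1)
        # Pool backwards while adjacent block means violate monotonicity.
        while len(sums) > 1 and sums[-2] / counts[-2] > sums[-1] / counts[-1]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s
            counts[-1] += c
    out = []
    for s, c in zip(sums, counts):
        out.extend([s / c] * c)
    return np.array(out)
```

In a calibration setting, sorting binary outcomes by predicted probability and applying PAV yields monotone recalibrated probabilities, which is the sense in which CORP's reliability diagrams are consistent and optimally binned.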
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.