A Deep Learning Method for Comparing Bayesian Hierarchical Models
- URL: http://arxiv.org/abs/2301.11873v4
- Date: Thu, 23 Nov 2023 15:07:41 GMT
- Title: A Deep Learning Method for Comparing Bayesian Hierarchical Models
- Authors: Lasse Elsemüller, Martin Schnuerch, Paul-Christian Bürkner, Stefan T. Radev
- Abstract summary: We propose a deep learning method for performing Bayesian model comparison on any set of hierarchical models.
Our method enables efficient re-estimation of posterior model probabilities and fast performance validation prior to any real-data application.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Bayesian model comparison (BMC) offers a principled approach for assessing
the relative merits of competing computational models and propagating
uncertainty into model selection decisions. However, BMC is often intractable
for the popular class of hierarchical models due to their high-dimensional
nested parameter structure. To address this intractability, we propose a deep
learning method for performing BMC on any set of hierarchical models which can
be instantiated as probabilistic programs. Since our method enables amortized
inference, it allows efficient re-estimation of posterior model probabilities
and fast performance validation prior to any real-data application. In a series
of extensive validation studies, we benchmark the performance of our method
against the state-of-the-art bridge sampling method and demonstrate excellent
amortized inference across all BMC settings. We then showcase our method by
comparing four hierarchical evidence accumulation models that have previously
been deemed intractable for BMC due to partly implicit likelihoods.
Additionally, we demonstrate how transfer learning can be leveraged to enhance
training efficiency. We provide reproducible code for all analyses and an
open-source implementation of our method.
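The amortized workflow the abstract describes can be illustrated with a toy sketch: simulate datasets from each candidate hierarchical model, reduce each dataset to a summary statistic, and train a classifier to map that statistic to posterior model probabilities, so that any new dataset can be scored with a single forward pass. Everything below is an illustrative assumption, not the paper's actual architecture: the two Gaussian hierarchical models, the hand-crafted summary statistic, and the logistic-regression classifier all stand in for the invariant neural networks used in the method.

```python
import math
import random

random.seed(0)

# Hypothetical toy setting: two hierarchical models for grouped data that
# differ only in the spread of the group-level means.
# Model 0: group means ~ N(0, 1); Model 1: group means ~ N(0, 2).
def simulate(model, n_groups=5, n_obs=10):
    scale = 1.0 if model == 0 else 2.0
    data = []
    for _ in range(n_groups):
        mu = random.gauss(0.0, scale)  # group-level parameter
        data.append([random.gauss(mu, 1.0) for _ in range(n_obs)])
    return data

def summary(data):
    # Hand-crafted summary statistic: variance of the group means.
    # (The paper learns such summaries; this is a fixed stand-in.)
    means = [sum(g) / len(g) for g in data]
    m = sum(means) / len(means)
    return sum((x - m) ** 2 for x in means) / len(means)

# Training set of (summary statistic, model index) pairs from simulations.
train = [(summary(simulate(m)), m) for m in (0, 1) for _ in range(500)]

# A logistic regression fit by gradient descent stands in for the neural
# classifier; training amortizes the cost of model comparison up front.
w, b = 0.0, 0.0
lr = 0.05
for _ in range(200):
    gw = gb = 0.0
    for s, y in train:
        p = 1.0 / (1.0 + math.exp(-(w * s + b)))
        gw += (p - y) * s  # gradient of cross-entropy w.r.t. the logit
        gb += (p - y)
    w -= lr * gw / len(train)
    b -= lr * gb / len(train)

def posterior_prob_m1(data):
    """Amortized posterior probability of Model 1 for a new dataset."""
    s = summary(data)
    return 1.0 / (1.0 + math.exp(-(w * s + b)))
```

After training, `posterior_prob_m1` re-estimates model probabilities for any number of new datasets at negligible cost, which is the property that makes fast simulation-based validation possible before touching real data.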
Related papers
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z)
- Evaluating Representations with Readout Model Switching [19.907607374144167]
In this paper, we propose to use the Minimum Description Length (MDL) principle to devise an evaluation metric.
We design a hybrid discrete and continuous-valued model space for the readout models and employ a switching strategy to combine their predictions.
The proposed metric can be efficiently computed with an online method and we present results for pre-trained vision encoders of various architectures.
arXiv Detail & Related papers (2023-02-19T14:08:01Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z)
- Sample-Efficient Reinforcement Learning via Conservative Model-Based Actor-Critic [67.00475077281212]
Model-based reinforcement learning algorithms are more sample efficient than their model-free counterparts.
We propose a novel approach that achieves high sample efficiency without the strong reliance on accurate learned models.
We show that CMBAC significantly outperforms state-of-the-art approaches in terms of sample efficiency on several challenging tasks.
arXiv Detail & Related papers (2021-12-16T15:33:11Z)
- Model-free Reinforcement Learning for Branching Markov Decision Processes [6.402126624793774]
We study reinforcement learning for the optimal control of Branching Markov Decision Processes.
The state of a (discrete-time) BMDP is a collection of entities that, while spawning other entities, generate a payoff.
We generalise model-free reinforcement learning techniques to compute an optimal control strategy of an unknown BMDP in the limit.
arXiv Detail & Related papers (2021-06-12T13:42:15Z)
- Probabilistic Case-based Reasoning for Open-World Knowledge Graph Completion [59.549664231655726]
A case-based reasoning (CBR) system solves a new problem by retrieving 'cases' that are similar to the given problem.
In this paper, we demonstrate that such a system is achievable for reasoning in knowledge bases (KBs).
Our approach predicts attributes for an entity by gathering reasoning paths from similar entities in the KB.
arXiv Detail & Related papers (2020-10-07T17:48:12Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- Model Embedding Model-Based Reinforcement Learning [4.566180616886624]
Model-based reinforcement learning (MBRL) has shown its advantages in sample efficiency over model-free reinforcement learning (MFRL).
Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias.
We propose a simple and elegant model-embedding model-based reinforcement learning (MEMB) algorithm in the framework of probabilistic reinforcement learning.
arXiv Detail & Related papers (2020-06-16T15:10:28Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.