Tractable Uncertainty for Structure Learning
- URL: http://arxiv.org/abs/2204.14170v1
- Date: Fri, 29 Apr 2022 15:54:39 GMT
- Title: Tractable Uncertainty for Structure Learning
- Authors: Benjie Wang, Matthew Wicker, Marta Kwiatkowska
- Abstract summary: We present Tractable Uncertainty for STructure learning (TRUST), a framework for approximate posterior inference.
Probabilistic circuits can be used as an augmented representation for structure learning methods.
- Score: 21.46601360284884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian structure learning allows one to capture uncertainty over the causal
directed acyclic graph (DAG) responsible for generating given data. In this
work, we present Tractable Uncertainty for STructure learning (TRUST), a
framework for approximate posterior inference that relies on probabilistic
circuits as the representation of our posterior belief. In contrast to
sample-based posterior approximations, our representation can capture a much
richer space of DAGs, while being able to tractably answer a range of useful
inference queries. We empirically show how probabilistic circuits can be used
as an augmented representation for structure learning methods, leading to
improvement in both the quality of inferred structures and posterior
uncertainty. Experimental results also demonstrate the improved
representational capacity of TRUST, outperforming competing methods on
conditional query answering.
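The abstract contrasts sample-based posterior approximations with a circuit representation that answers inference queries tractably. The toy sketch below is not the TRUST implementation: it uses a fully factorized distribution over the edges of a 3-node graph under a fixed topological order (the simplest degenerate case of a probabilistic circuit), with made-up edge-inclusion probabilities, to show the difference between an exact closed-form query and a noisy sample-based estimate of the same quantity.

```python
import random

# Hypothetical edge-inclusion probabilities for edges (0->1), (0->2), (1->2)
# under a fixed topological order 0 < 1 < 2 (which guarantees acyclicity).
# These numbers are illustrative, not learned from data.
edge_probs = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.6}

def exact_conditional(query_edge, evidence_edge):
    # Under full factorization the edges are independent, so
    # P(query | evidence) = P(query) in closed form. A real probabilistic
    # circuit mixes many such factored components while keeping queries exact.
    return edge_probs[query_edge]

def sampled_conditional(query_edge, evidence_edge, n=2000, seed=0):
    # Sample-based alternative: draw DAGs, keep those consistent with the
    # evidence, and estimate the conditional by the empirical frequency.
    rng = random.Random(seed)
    hit = tot = 0
    for _ in range(n):
        dag = {e: rng.random() < p for e, p in edge_probs.items()}
        if dag[evidence_edge]:
            tot += 1
            hit += dag[query_edge]
    return hit / tot

exact = exact_conditional((1, 2), (0, 1))
approx = sampled_conditional((1, 2), (0, 1))
```

The exact query costs one lookup here (and a single circuit evaluation in general), while the sampled estimate carries Monte Carlo error that shrinks only as the sample budget grows; this is the representational trade-off the abstract refers to.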
Related papers
- Provable Guarantees for Generative Behavior Cloning: Bridging Low-Level
Stability and High-Level Behavior [51.60683890503293]
We propose a theoretical framework for studying behavior cloning of complex expert demonstrations using generative modeling.
We show that pure supervised cloning can generate trajectories matching the per-time step distribution of arbitrary expert trajectories.
arXiv Detail & Related papers (2023-07-27T04:27:26Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Structure Learning with Continuous Optimization: A Sober Look and Beyond [21.163991683650526]
This paper investigates in which cases continuous optimization for directed acyclic graph (DAG) structure learning can and cannot perform well.
We provide insights into several aspects of the search procedure, including thresholding and sparsity, and show that they play an important role in the final solutions.
arXiv Detail & Related papers (2023-04-04T22:10:40Z)
- Deciphering RNA Secondary Structure Prediction: A Probabilistic K-Rook Matching Perspective [63.3632827588974]
We introduce RFold, a method that learns to predict the most matching K-Rook solution from the given sequence.
RFold achieves competitive performance and about eight times faster inference efficiency than state-of-the-art approaches.
arXiv Detail & Related papers (2022-12-02T16:34:56Z)
- Don't Explain Noise: Robust Counterfactuals for Randomized Ensembles [50.81061839052459]
We formalize the generation of robust counterfactual explanations as a probabilistic problem.
We show the link between the robustness of ensemble models and the robustness of base learners.
Our method achieves high robustness with only a small increase in the distance from counterfactual explanations to their initial observations.
arXiv Detail & Related papers (2022-05-27T17:28:54Z)
- Distributionally robust risk evaluation with a causality constraint and structural information [0.0]
We approximate test functions by neural networks and prove the sample complexity with Rademacher complexity.
Our framework outperforms the classic counterparts in the distributionally robust portfolio selection problem.
arXiv Detail & Related papers (2022-03-20T14:48:37Z)
- A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning [55.048010996144036]
We show that under some noise assumption, we can obtain the linear spectral feature of its corresponding Markov transition operator in closed-form for free.
We propose Spectral Dynamics Embedding (SPEDE), which breaks the trade-off and completes optimistic exploration for representation learning by exploiting the structure of the noise.
arXiv Detail & Related papers (2021-11-22T19:24:57Z)
- DiBS: Differentiable Bayesian Structure Learning [38.01659425023988]
We propose a general, fully differentiable framework for Bayesian structure learning (DiBS).
DiBS operates in the continuous space of a latent probabilistic graph representation.
Contrary to existing work, DiBS is agnostic to the form of the local conditional distributions.
arXiv Detail & Related papers (2021-05-25T11:23:08Z)
- Representations for Stable Off-Policy Reinforcement Learning [37.561660796265]
Reinforcement learning with function approximation can be unstable and even divergent.
We show that there exist non-trivial state representations under which the canonical TD algorithm is stable, even when learning off-policy.
We conclude by empirically demonstrating that these stable representations can be learned using gradient descent.
arXiv Detail & Related papers (2020-07-10T17:55:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.