Tractable Bounding of Counterfactual Queries by Knowledge Compilation
- URL: http://arxiv.org/abs/2310.03352v1
- Date: Thu, 5 Oct 2023 07:10:40 GMT
- Title: Tractable Bounding of Counterfactual Queries by Knowledge Compilation
- Authors: David Huber, Yizuo Chen, Alessandro Antonucci, Adnan Darwiche, Marco
Zaffalon
- Abstract summary: We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
- Score: 51.47174989680976
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We discuss the problem of bounding partially identifiable queries, such as
counterfactuals, in Pearlian structural causal models. A recently proposed
iterated EM scheme yields an inner approximation of those bounds by sampling
the initialisation parameters. Such a method requires multiple (Bayesian
network) queries over models sharing the same structural equations and
topology, but different exogenous probabilities. This setup makes a compilation
of the underlying model to an arithmetic circuit advantageous, thus inducing a
sizeable inferential speed-up. We show how a single symbolic knowledge
compilation allows us to obtain the circuit structure with symbolic parameters
to be replaced by their actual values when computing the different queries. We
also discuss parallelisation techniques to further speed up the bound
computation. Experiments against standard Bayesian network inference show clear
computational advantages with up to an order of magnitude of speed-up.
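To make the compilation idea concrete, here is a minimal sketch (an illustration, not the authors' implementation) of an arithmetic circuit whose parameter leaves stay symbolic: the structure is built once, and each query, e.g. each EM restart with different exogenous probabilities, only rebinds the parameter values before re-evaluating. The node classes, the toy model U -> X with X = U, and the parameter names are illustrative assumptions.

```python
# Minimal sketch: an arithmetic circuit with symbolic parameter leaves.
# The circuit is built once; each query only rebinds theta.
# All names here are illustrative, not the authors' API.

class Indicator:
    """Leaf lambda_{X=x}: 1 if the evidence is consistent with X = x."""
    def __init__(self, var, value):
        self.var, self.value = var, value
    def eval(self, evidence, theta):
        return 1.0 if evidence.get(self.var, self.value) == self.value else 0.0

class Param:
    """Symbolic parameter leaf, resolved only at evaluation time."""
    def __init__(self, name):
        self.name = name
    def eval(self, evidence, theta):
        return theta[self.name]

class Sum:
    def __init__(self, *children):
        self.children = children
    def eval(self, evidence, theta):
        return sum(c.eval(evidence, theta) for c in self.children)

class Prod:
    def __init__(self, *children):
        self.children = children
    def eval(self, evidence, theta):
        result = 1.0
        for c in self.children:
            result *= c.eval(evidence, theta)
        return result

# Toy model: exogenous U with P(U=1) = theta["u1"], structural equation X = U.
# Network polynomial: sum_u theta(u) * lambda_U(u) * lambda_X(x = u).
circuit = Sum(
    Prod(Param("u0"), Indicator("U", 0), Indicator("X", 0)),
    Prod(Param("u1"), Indicator("U", 1), Indicator("X", 1)),
)

# Same circuit structure, different exogenous probabilities (e.g. EM restarts).
for p in (0.2, 0.5, 0.9):
    theta = {"u0": 1.0 - p, "u1": p}
    print(p, circuit.eval({"X": 1}, theta))  # P(X = 1) under this theta
```

Because the structure is shared across all parameter bindings, the per-query cost reduces to a single bottom-up pass over the circuit, which is the kind of repeated evaluation that the parallelisation mentioned in the abstract could target.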
Related papers
- Learning to Estimate System Specifications in Linear Temporal Logic using Transformers and Mamba [6.991281327290525]
Specification mining involves extracting temporal logic formulae from system traces.
We introduce autoregressive models that can generate linear temporal logic formulae from traces.
We devise a metric for the distinctiveness of the generated formulae and an algorithm to enforce the syntax constraints.
arXiv Detail & Related papers (2024-05-31T15:21:53Z) - Runtime-coherence trade-offs for hybrid SAT-solvers [1.087459729391301]
We argue that there exists a simple trade-off relation between the total runtime and the coherence-time.
We present numerical simulations which suggest additional flexibility in implementing hybrid algorithms with optimal trade-off.
arXiv Detail & Related papers (2024-04-23T17:11:39Z) - Unified framework for efficiently computable quantum circuits [0.0]
Clifford circuits and matchgate circuits are two classes of quantum circuits known to be efficiently simulatable on a classical computer.
We introduce a unified framework that shows in a transparent way the special structure that allows these circuits to be simulated efficiently.
arXiv Detail & Related papers (2024-01-16T08:04:28Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices that arise during inference.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Efficient Computation of Counterfactual Bounds [44.4263314637532]
We compute exact counterfactual bounds via algorithms for credal nets on a subclass of structural causal models.
We also assess the accuracy of approximate bounds by providing credible intervals on the quality of the approximation.
arXiv Detail & Related papers (2023-07-17T07:59:47Z) - What and How does In-Context Learning Learn? Bayesian Model Averaging,
Parameterization, and Generalization [111.55277952086155]
We study In-Context Learning (ICL) by addressing several open questions.
We show that, without updating the neural network parameters, ICL implicitly implements the Bayesian model averaging algorithm.
We prove that the error of the pretrained model is bounded by the sum of an approximation error and a generalization error.
arXiv Detail & Related papers (2023-05-30T21:23:47Z) - Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z) - Sketching as a Tool for Understanding and Accelerating Self-attention
for Long Sequences [52.6022911513076]
Transformer-based models are not efficient in processing long sequences due to the quadratic space and time complexity of the self-attention modules.
Existing models such as Linformer and Informer reduce the quadratic complexity to linear (modulo logarithmic factors) via low-dimensional projection and row selection (a brief numerical sketch of the projection idea follows this list).
Based on the theoretical analysis, we propose Skeinformer to accelerate self-attention and further improve the accuracy of matrix approximation to self-attention.
arXiv Detail & Related papers (2021-12-10T06:58:05Z) - Approximation Theory of Convolutional Architectures for Time Series
Modelling [15.42770933459534]
We study the approximation properties of convolutional architectures applied to time series modelling.
Recent results reveal an intricate connection between approximation efficiency and memory structures in the data generation process.
arXiv Detail & Related papers (2021-07-20T09:19:26Z) - Accelerating Feedforward Computation via Parallel Nonlinear Equation
Solving [106.63673243937492]
Feedforward computation, such as evaluating a neural network or sampling from an autoregressive model, is ubiquitous in machine learning.
We frame the task of feedforward computation as solving a system of nonlinear equations. We then propose to find the solution using a Jacobi or Gauss-Seidel fixed-point method, as well as hybrid methods of both.
Our method is guaranteed to give exactly the same values as the original feedforward computation with a reduced (or equal) number of parallelizable iterations, and hence reduced time given sufficient parallel computing power.
arXiv Detail & Related papers (2020-02-10T10:11:31Z)
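The related paper on accelerating feedforward computation frames the forward pass as a system of nonlinear equations solved by fixed-point iteration. Below is a minimal sketch of the Jacobi variant under an assumed toy layer function (not the paper's implementation): all layers are updated in parallel from the previous sweep's iterate, and after at most as many sweeps as there are layers the result coincides with sequential evaluation.

```python
# Minimal sketch: Jacobi fixed-point view of a feedforward pass.
# Layer functions and sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
L, d = 8, 4
weights = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(L)]

def layer(l, x):
    # Toy per-layer map f_l; any deterministic layer would do here.
    return np.tanh(weights[l] @ x)

x0 = rng.normal(size=d)

# Sequential feedforward: x_l = f_l(x_{l-1}), one layer after another.
xs = [x0]
for l in range(L):
    xs.append(layer(l, xs[-1]))

# Jacobi iteration: every layer updates in parallel from the previous sweep,
# so each sweep is parallelisable across layers; after at most L sweeps the
# iterate equals the sequential result exactly.
ys = [x0] + [np.zeros(d) for _ in range(L)]
for _ in range(L):
    ys = [x0] + [layer(l, ys[l]) for l in range(L)]

print(np.allclose(xs[-1], ys[-1]))  # True
```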
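The sketching paper above contrasts exact self-attention with approximations based on low-dimensional projection of the sequence length (as in Linformer). The sketch below illustrates that projection idea only, not the Skeinformer method itself; the random projection matrices and sizes are assumptions for illustration.

```python
# Minimal sketch: reducing self-attention cost by projecting the length axis.
# E and F are illustrative random projections, not learned parameters.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 512, 64, 32  # sequence length, head dimension, projected length

Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
E, F = (rng.normal(size=(k, n)) / np.sqrt(n) for _ in range(2))

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

# Exact attention: the score matrix is n x n, i.e. O(n^2 d) time.
exact = softmax(Q @ K.T / np.sqrt(d)) @ V

# Projected attention: scores are n x k, i.e. O(n k d) time and memory.
approx = softmax(Q @ (E @ K).T / np.sqrt(d)) @ (F @ V)

print(exact.shape, approx.shape)  # both (n, d)
```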
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.