Estimating the probabilities of causation via deep monotonic twin
networks
- URL: http://arxiv.org/abs/2109.01904v2
- Date: Tue, 7 Sep 2021 08:19:15 GMT
- Title: Estimating the probabilities of causation via deep monotonic twin
networks
- Authors: Athanasios Vlontzos, Bernhard Kainz, Ciaran M. Gilligan-Lee
- Abstract summary: We show how to implement twin network counterfactual inference with deep learning to estimate counterfactual queries.
We show how to enforce known identifiability constraints during training, ensuring the answer to each counterfactual query is uniquely determined.
- Score: 3.5953798597797673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been much recent work using machine learning to answer causal
queries. Most focus on interventional queries, such as the conditional average
treatment effect. However, as noted by Pearl, interventional queries only form
part of a larger hierarchy of causal queries, with counterfactuals sitting at
the top. Despite this, our community has not fully succeeded in adapting
machine learning tools to answer counterfactual queries. This work addresses
this challenge by showing how to implement twin network counterfactual
inference -- an alternative to abduction, action, & prediction counterfactual
inference -- with deep learning to estimate counterfactual queries. We show how
the graphical nature of twin networks makes them particularly amenable to deep
learning, yielding simple neural network architectures that, when trained, are
capable of counterfactual inference. Importantly, we show how to enforce known
identifiability constraints during training, ensuring the answer to each
counterfactual query is uniquely determined. We demonstrate our approach by
using it to accurately estimate the probabilities of causation -- important
counterfactual queries that quantify the degree to which one event was a
necessary or sufficient cause of another -- on both synthetic and real data.
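For context, the probabilities of causation mentioned above are the probability of necessity PN = P(Y_{X=0}=0 | X=1, Y=1), the probability of sufficiency PS = P(Y_{X=1}=1 | X=0, Y=0), and their conjunction PNS = P(Y_{X=1}=1, Y_{X=0}=0); monotonicity of the outcome in the treatment is the classic condition under which these quantities become point-identified. The sketch below is a minimal, hypothetical PyTorch illustration of the general twin-network idea: one outcome network evaluated on both the factual and the counterfactual treatment with shared exogenous noise, with monotonicity encouraged by a soft penalty. The names TwinOutcomeNet and loss_with_monotonicity and all modelling choices are assumptions for illustration, not the authors' implementation.

```python
# A minimal, self-contained sketch (assumed setup: binary treatment X and
# outcome Y, a single exogenous-noise vector U). The class and function
# names below are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class TwinOutcomeNet(nn.Module):
    """One outcome mechanism f(X, U), evaluated twice with shared noise U:
    once for the factual treatment and once for the counterfactual one."""
    def __init__(self, noise_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.noise_dim = noise_dim
        self.f = nn.Sequential(
            nn.Linear(1 + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # outputs P(Y = 1 | X, U)
        )

    def forward(self, x_fact, x_cf):
        u = torch.randn(x_fact.shape[0], self.noise_dim)  # shared exogenous noise
        p_fact = self.f(torch.cat([x_fact, u], dim=-1))   # factual branch
        p_cf = self.f(torch.cat([x_cf, u], dim=-1))       # counterfactual branch
        return p_fact.squeeze(-1), p_cf.squeeze(-1)

def loss_with_monotonicity(p_fact, p_cf, y_fact, x_fact, x_cf, lam=1.0):
    """Factual likelihood plus a soft penalty pushing the learned mechanism
    to be monotonic in X: raising X from 0 to 1 should never lower P(Y=1).
    A penalty term is one plausible way to impose such a constraint."""
    bce = nn.functional.binary_cross_entropy(p_fact, y_fact)
    cf_is_treated = x_cf.squeeze(-1) > x_fact.squeeze(-1)
    p_treated = torch.where(cf_is_treated, p_cf, p_fact)   # branch with X = 1
    p_control = torch.where(cf_is_treated, p_fact, p_cf)   # branch with X = 0
    return bce + lam * torch.relu(p_control - p_treated).mean()

# Toy observational data: Y = 1 whenever X = 1, and with probability 0.3 otherwise.
x = torch.randint(0, 2, (256, 1)).float()
y = ((x.squeeze(-1) + (torch.rand(256) < 0.3).float()) > 0).float()

model = TwinOutcomeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    p_f, p_c = model(x, 1.0 - x)                 # counterfactual flips the treatment
    loss = loss_with_monotonicity(p_f, p_c, y, x, 1.0 - x)
    opt.zero_grad()
    loss.backward()
    opt.step()

# PNS = P(Y_{X=1} = 1, Y_{X=0} = 0), estimated by evaluating both branches
# on the same noise draws; under monotonicity it also equals
# P(Y=1 | do(X=1)) - P(Y=1 | do(X=0)).
with torch.no_grad():
    ones = torch.ones(10000, 1)
    p1, p0 = model(ones, torch.zeros(10000, 1))
    pns = (p1 * (1.0 - p0)).mean()
    print(f"estimated PNS ~ {pns:.3f}")  # ground truth in this toy example is 0.7
```

With enough training the printed estimate should land near 0.7, the ground-truth PNS of the toy data-generating process. Monotonicity is imposed here with a soft penalty; a hard architectural constraint (for example, making the output non-decreasing in X) would be another way to realise the identifiability constraint the abstract describes.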
Related papers
- Query2Triple: Unified Query Encoding for Answering Diverse Complex
Queries over Knowledge Graphs [29.863085746761556]
We propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries.
Our proposed Q2T is not only efficient to train, but also modular, thus easily adaptable to various neural link predictors.
arXiv Detail & Related papers (2023-10-17T13:13:30Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset and also surpasses them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- Neural-Symbolic Entangled Framework for Complex Query Answering [22.663509971491138]
We propose a Neural-Symbolic Entangled framework (ENeSy) for complex query answering.
It enables neural and symbolic reasoning to enhance each other, alleviating cascading errors and KG incompleteness.
ENeSy achieves SOTA performance on several benchmarks, especially in the setting where the model is trained only on the link prediction task.
arXiv Detail & Related papers (2022-09-19T06:07:10Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- Logically Consistent Loss for Visual Question Answering [66.83963844316561]
Current neural-network based Visual Question Answering (VQA) models cannot ensure logical consistency across related questions because of the underlying independent and identically distributed (i.i.d.) assumption.
We propose a new model-agnostic logic constraint to tackle this issue by formulating a logically consistent loss in the multi-task learning framework.
Experiments confirm that the proposed loss formulation and the introduction of hybrid batches lead to greater consistency as well as better performance.
arXiv Detail & Related papers (2020-11-19T20:31:05Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)