Meta Learning for Causal Direction
- URL: http://arxiv.org/abs/2007.02809v2
- Date: Mon, 22 Feb 2021 01:39:02 GMT
- Title: Meta Learning for Causal Direction
- Authors: Jean-Francois Ton, Dino Sejdinovic, Kenji Fukumizu
- Abstract summary: We introduce a novel generative model that allows distinguishing cause and effect in the small data setting.
We demonstrate our method on various synthetic as well as real-world data and show that it is able to maintain high accuracy in detecting directions across varying dataset sizes.
- Score: 29.00522306460408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The inaccessibility of controlled randomized trials due to inherent
constraints in many fields of science has been a fundamental issue in causal
inference. In this paper, we focus on distinguishing the cause from effect in
the bivariate setting under limited observational data. Based on recent
developments in meta learning as well as in causal inference, we introduce a
novel generative model that allows distinguishing cause and effect in the small
data setting. Using a learnt task variable that contains distributional
information of each dataset, we propose an end-to-end algorithm that makes use
of similar training datasets at test time. We demonstrate our method on various
synthetic as well as real-world data and show that it is able to maintain high
accuracy in detecting directions across varying dataset sizes.
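The paper's generative meta-learning model is not reproduced here. As general background on the bivariate cause-effect task it addresses, a classic baseline is the additive-noise-model (ANM) test: regress each variable on the other and prefer the direction whose regression residuals are more independent of the putative cause. The sketch below is an illustrative implementation of that baseline (not the paper's method), using a polynomial fit and a simple HSIC dependence estimate; all function names are ours.

```python
import numpy as np

def _gram(a):
    """Gaussian-kernel Gram matrix with a median-heuristic bandwidth."""
    d2 = np.subtract.outer(a, a) ** 2
    sigma2 = np.median(d2[d2 > 0])
    return np.exp(-d2 / sigma2)

def hsic(a, b):
    """Biased HSIC estimate: near zero when a and b are independent."""
    n = len(a)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(_gram(a) @ H @ _gram(b) @ H) / (n - 1) ** 2

def anm_score(cause, effect, deg=3):
    """Fit effect = f(cause) + noise by polynomial regression and return
    the dependence between residuals and cause (lower = more plausible)."""
    residuals = effect - np.polyval(np.polyfit(cause, effect, deg), cause)
    return hsic(cause, residuals)

def infer_direction(x, y):
    """Pick the direction whose residuals look more independent."""
    return "x->y" if anm_score(x, y) < anm_score(y, x) else "y->x"

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 3 + x + 0.5 * rng.normal(size=500)  # ground truth: x causes y
print(infer_direction(x, y))
```

The paper's contribution is precisely to improve on this kind of per-dataset test in the small-data regime, by sharing distributional information across similar training datasets.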
Related papers
- What is different between these datasets? [23.271594219577185]
Two comparable datasets in the same domain may have different distributions.
We propose a suite of interpretable methods (toolbox) for comparing two datasets.
Our methods not only outperform comparable and related approaches in terms of explanation quality and correctness, but also provide actionable, complementary insights to understand and mitigate dataset differences effectively.
arXiv Detail & Related papers (2024-03-08T19:52:39Z)
- Causal disentanglement of multimodal data [1.589226862328831]
We introduce a causal representation learning algorithm (causalPIMA) that can use multimodal data and known physics to discover important features with causal relationships.
Our results demonstrate the capability of learning an interpretable causal structure while simultaneously discovering key features in a fully unsupervised setting.
arXiv Detail & Related papers (2023-10-27T20:30:11Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- Multiple Instance Learning for Detecting Anomalies over Sequential Real-World Datasets [2.427831679672374]
Multiple Instance Learning (MIL) has been shown effective on problems with incomplete knowledge of labels in the training dataset.
We propose an MIL-based formulation and various algorithmic instantiations of this framework based on different design decisions.
The framework generalizes well over diverse datasets resulting from different real-world application domains.
arXiv Detail & Related papers (2022-10-04T16:02:09Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Multi-Source Causal Inference Using Control Variates [81.57072928775509]
We propose a general algorithm to estimate causal effects from multiple data sources.
We show theoretically that this reduces the variance of the ATE estimate.
We apply this framework to inference from observational data under an outcome selection bias.
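The multi-source estimator from that paper is not reproduced here; as a hedged illustration of the underlying control-variate idea, the sketch below shows how subtracting a scaled deviation of an auxiliary variable with known mean from a Monte Carlo estimate reduces its variance. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(0.0, 1.0, n)             # control variate with known mean 0
y = 2.0 + z + 0.5 * rng.normal(size=n)  # target quantity, correlated with z

naive = y.mean()                         # plain Monte Carlo estimate of E[y]
c = np.cov(y, z)[0, 1] / z.var()         # estimated optimal coefficient
adjusted = (y - c * (z - 0.0)).mean()    # control-variate estimate of E[y]

# Per-sample variance drops by roughly the squared correlation of y and z.
print(f"naive={naive:.3f}  adjusted={adjusted:.3f}")
print(f"variance: {y.var():.3f} -> {(y - c * z).var():.3f}")
```

Both estimates are unbiased for E[y] because z's mean is known exactly; the adjusted one simply has lower variance, which is the property the paper exploits for ATE estimation.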
arXiv Detail & Related papers (2021-03-30T21:20:51Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Overcoming Conflicting Data when Updating a Neural Semantic Parser [5.471925005642665]
We show how to use a small amount of new data to update a task-oriented semantic parsing model when the desired output for some examples has changed.
When making updates in this way, one potential problem that arises is the presence of conflicting data.
We show that the presence of conflicting data greatly hinders learning of an update, then explore several methods to mitigate its effect.
arXiv Detail & Related papers (2020-10-23T21:19:03Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.