Differentiable Causal Discovery from Interventional Data
- URL: http://arxiv.org/abs/2007.01754v2
- Date: Tue, 3 Nov 2020 20:43:10 GMT
- Authors: Philippe Brouillard, Sébastien Lachapelle, Alexandre Lacoste, Simon Lacoste-Julien, Alexandre Drouin
- Abstract summary: We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a causal directed acyclic graph from data is a challenging task that
involves solving a combinatorial problem for which the solution is not always
identifiable. A new line of work reformulates this problem as a continuous
constrained optimization one, which is solved via the augmented Lagrangian
method. However, most methods based on this idea do not make use of
interventional data, which can significantly alleviate identifiability issues.
This work constitutes a new step in this direction by proposing a
theoretically-grounded method based on neural networks that can leverage
interventional data. We illustrate the flexibility of the
continuous-constrained framework by taking advantage of expressive neural
architectures such as normalizing flows. We show that our approach compares
favorably to the state of the art in a variety of settings, including perfect
and imperfect interventions for which the targeted nodes may even be unknown.
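The continuous-constrained framework the abstract refers to replaces the combinatorial DAG search with a smooth penalty that vanishes exactly when the weighted adjacency matrix is acyclic. As a minimal sketch (my own illustration of the NOTEARS-style constraint family, not the authors' code), the polynomial form h(W) = tr((I + W∘W/d)^d) − d can be optimized with an augmented Lagrangian alongside the data-fit term:

```python
# Hypothetical standalone sketch of a NOTEARS-style acyclicity constraint:
# h(W) = tr((I + (W*W)/d)^d) - d, which is zero iff the weighted adjacency
# matrix W encodes a DAG. In the augmented Lagrangian scheme, the objective
# becomes loss(W) + alpha * h(W) + (rho / 2) * h(W)**2, with alpha and rho
# updated across outer iterations until h(W) is driven to zero.
import numpy as np

def acyclicity(W: np.ndarray) -> float:
    """Smooth penalty that is 0 exactly when W is a DAG adjacency matrix."""
    d = W.shape[0]
    M = np.eye(d) + (W * W) / d                      # I + (W ∘ W)/d
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

# A strictly upper-triangular adjacency (a DAG) gives h(W) = 0,
# while a 2-cycle gives a strictly positive penalty.
dag = np.array([[0.0, 1.0],
                [0.0, 0.0]])
cyc = np.array([[0.0, 1.0],
                [1.0, 0.0]])
```

Because h is differentiable in W, gradient-based optimizers (and, as the paper notes, expressive conditionals such as normalizing flows) can be plugged in directly.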
Related papers
- ExDBN: Exact learning of Dynamic Bayesian Networks [2.2499166814992435]
We propose a score-based approach for causal learning from data.
We show that the proposed approach turns out to produce excellent results when applied to small and medium-sized synthetic instances of up to 25 time-series.
Two interesting applications in bio-science and finance, to which the method is directly applied, further stress the opportunities in developing highly accurate, globally convergent solvers.
arXiv Detail & Related papers (2024-10-21T15:27:18Z) - Deep Neural Network for Constraint Acquisition through Tailored Loss Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel approach grounded in Deep Neural Network (DNN) based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z) - Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-smoothness in Deep GNNs [4.468525856678543]
We propose a soft graph normalization method to preserve the diversities of node embeddings and prevent indiscrimination due to possible over-closeness.
We also propose a novel label-smoothing-based learning framework to enhance the optimization of deep GNNs.
arXiv Detail & Related papers (2023-12-13T15:42:14Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Learning with Algorithmic Supervision via Continuous Relaxations [19.437400671428737]
We propose an approach that allows algorithms to be integrated into end-to-end trainable neural network architectures.
To obtain meaningful gradients, each relevant variable is perturbed via logistic distributions.
We evaluate the proposed continuous relaxation model on four challenging tasks and show that it can keep up with relaxations specifically designed for each individual task.
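The logistic perturbation mentioned above can be illustrated with a small sketch (my own example, not the paper's code): perturbing one operand of a hard comparison with a logistic random variable turns the step function 1[a > b] into a sigmoid, which has a nonzero gradient everywhere.

```python
# Hypothetical sketch: a comparison relaxed via a logistic perturbation.
# If L ~ Logistic(0, beta), then P(a + L > b) = sigmoid((a - b) / beta),
# so the hard indicator 1[a > b] becomes a smooth, differentiable surrogate.
# As beta -> 0 the relaxation recovers the hard comparison.
import math

def relaxed_gt(a: float, b: float, beta: float = 1.0) -> float:
    """Smooth relaxation of the indicator 1[a > b]."""
    return 1.0 / (1.0 + math.exp(-(a - b) / beta))
```

With a large temperature beta the output stays near 0.5 (maximally uncertain); with a small beta it saturates toward 0 or 1, approximating the discrete comparison while remaining differentiable for training.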
arXiv Detail & Related papers (2021-10-11T23:52:42Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - On the Treatment of Optimization Problems with L1 Penalty Terms via Multiobjective Continuation [0.0]
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization.
Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case.
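The homotopy idea referenced here can be sketched in the linear case (a hypothetical illustration, not the paper's algorithm): solve a sequence of L1-penalized least-squares problems for decreasing lambda, warm-starting each solve from the previous solution, and observe how sparsity trades off against fit along the path.

```python
# Hypothetical homotopy-style sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
# sweep lambda from large to small, solving each problem with ISTA
# (proximal gradient descent) warm-started from the previous solution.
import numpy as np

def soft_threshold(x: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_path(A, b, lambdas, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    path = []
    for lam in lambdas:                    # warm start across the lambda path
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        path.append(x.copy())
    return path

# With A = I the solution is soft_threshold(b, lam), so the path is easy
# to verify: large lambda zeroes out small coefficients, small lambda
# recovers a nearly unpenalized fit.
A = np.eye(2)
b = np.array([3.0, 0.5])
path = ista_path(A, b, lambdas=[2.0, 1.0, 0.1])
```

The cited work generalizes this kind of regularization path from the linear-regression setting to nonlinear objectives via multiobjective continuation.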
arXiv Detail & Related papers (2020-12-14T13:00:50Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z) - Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages the greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.