Neural Score Matching for High-Dimensional Causal Inference
- URL: http://arxiv.org/abs/2203.00554v1
- Date: Tue, 1 Mar 2022 15:36:12 GMT
- Title: Neural Score Matching for High-Dimensional Causal Inference
- Authors: Oscar Clivio, Fabian Falck, Brieuc Lehmann, George Deligiannidis,
Chris Holmes
- Abstract summary: We develop theoretical results which motivate the use of neural networks to obtain non-trivial balancing scores of a chosen level of coarseness.
We show that our method is competitive against other matching approaches on semi-synthetic high-dimensional datasets.
- Score: 5.696039065328919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional methods for matching in causal inference are impractical for
high-dimensional datasets. They suffer from the curse of dimensionality: exact
matching and coarsened exact matching find exponentially fewer matches as the
input dimension grows, and propensity score matching may match highly unrelated
units together. To overcome this problem, we develop theoretical results which
motivate the use of neural networks to obtain non-trivial, multivariate
balancing scores of a chosen level of coarseness, in contrast to the classical,
scalar propensity score. We leverage these balancing scores to perform matching
for high-dimensional causal inference and call this procedure neural score
matching. We show that our method is competitive against other matching
approaches on semi-synthetic high-dimensional datasets, both in terms of
treatment effect estimation and reducing imbalance.
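To make the contrast with the classical scalar propensity score concrete, here is a minimal sketch of score-based matching on synthetic data. Everything here (the data-generating process, the logistic model, the 1-NN matching rule) is illustrative and not the paper's neural method, which replaces the scalar score with a multivariate balancing score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 5 covariates, treatment depends on X, true effect is +2.
n, d = 500, 5
X = rng.normal(size=(n, d))
w_true = 0.5 * rng.normal(size=d)           # moderate confounding, good overlap
p = 1 / (1 + np.exp(-X @ w_true))
t = rng.binomial(1, p)
y = X @ rng.normal(size=d) + 2.0 * t + rng.normal(size=n)

# Fit a scalar propensity score by logistic regression (plain gradient descent).
w = np.zeros(d)
for _ in range(2000):
    s = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (s - t) / n
score = 1 / (1 + np.exp(-X @ w))

# 1-nearest-neighbour matching on the score: for each treated unit, take the
# control with the closest score and average the outcome differences.
treated = np.flatnonzero(t == 1)
control = np.flatnonzero(t == 0)
att = np.mean([
    y[i] - y[control[np.argmin(np.abs(score[control] - score[i]))]]
    for i in treated
])
print(att)   # should land near the true effect of 2.0, up to matching bias
```

Matching unrelated units that happen to share a scalar score is exactly the failure mode the abstract describes; a coarser multivariate balancing score keeps more covariate information at the matching step.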
Related papers
- Partial Soft-Matching Distance for Neural Representational Comparison with Partial Unit Correspondence [6.914720821302567]
We extend the soft-matching distance to a partial optimal transport setting that allows some neurons to remain unmatched. It preserves correct matches under outliers and reliably selects the correct model in noise-corrupted identification tasks. It achieves higher alignment precision across brain areas than standard soft-matching, which is forced to match all units regardless of quality.
arXiv Detail & Related papers (2026-02-22T20:31:35Z) - Implicit score matching meets denoising score matching: improved rates of convergence and log-density Hessian estimation [5.773269033551628]
We study the problem of estimating the score function using both implicit score matching and denoising score matching. We prove that implicit score matching is able not only to adapt to the intrinsic dimension, but also to achieve the same rates of convergence as denoising score matching.
arXiv Detail & Related papers (2025-12-30T17:39:48Z) - Causal Effect Estimation Using Random Hyperplane Tessellations [2.048226951354646]
Matching is one of the simplest approaches for estimating causal effects from observational data.
We propose a simple, fast, yet highly effective approach to matching using Random Hyperplane Tessellations (RHPT).
We report results of extensive experiments showing that matching using RHPT outperforms traditional matching techniques and is competitive with state-of-the-art deep learning methods for causal effect estimation.
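The tessellation idea can be sketched in a few lines: hash each unit to the sign pattern of its projections onto random directions, then match treated and control units with similar codes. This is a generic LSH-style illustration; the paper's exact rule (e.g. multiple tessellations, or exact cell agreement rather than Hamming-nearest codes) may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# 200 units, 20 covariates, binary treatment (all synthetic and illustrative).
n, d, k = 200, 20, 6
X = rng.normal(size=(n, d))
t = rng.binomial(1, 0.5, size=n)

# Random hyperplane tessellation: each unit is hashed to the sign pattern of
# its projections onto k random directions, giving an n x k binary code.
H = rng.normal(size=(d, k))
codes = (X @ H > 0).astype(int)

# Match each treated unit to the control whose code is closest in Hamming
# distance; nearby points tend to fall on the same side of each hyperplane.
treated = np.flatnonzero(t == 1)
control = np.flatnonzero(t == 0)
matches = [control[np.argmin((codes[control] != codes[i]).sum(axis=1))]
           for i in treated]
```

Because the codes are k-bit, matching costs only cheap Hamming comparisons regardless of the ambient dimension d.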
arXiv Detail & Related papers (2024-04-16T20:53:58Z) - Semisupervised score based matching algorithm to evaluate the effect of public health interventions [3.221788913179251]
In one-to-one matching algorithms, a large number of "pairs" to be matched means both information from a large sample and a large number of matching tasks.
We propose a novel one-to-one matching algorithm based on a quadratic score function $S_\beta(x_i, x_j) = \beta^T (x_i - x_j)(x_i - x_j)^T \beta$.
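Since $(x_i - x_j)(x_i - x_j)^T$ is a rank-one matrix, the quadratic score collapses to the squared projected difference $(\beta^T (x_i - x_j))^2$, which this small check (with made-up numbers) confirms:

```python
import numpy as np

beta = np.array([0.5, -1.0, 2.0])
x_i = np.array([1.0, 0.0, 3.0])
x_j = np.array([0.5, 1.0, 2.5])

diff = x_i - x_j
# S_beta(x_i, x_j) = beta^T (x_i - x_j)(x_i - x_j)^T beta ...
S = beta @ np.outer(diff, diff) @ beta
# ... which equals the squared difference after projecting onto beta.
assert np.isclose(S, (beta @ diff) ** 2)
print(S)   # prints 5.0625
```

So pairs are scored by how far apart they are along the single learned direction $\beta$, which makes the score cheap to evaluate across many candidate pairs.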
arXiv Detail & Related papers (2024-03-19T02:24:16Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Partially factorized variational inference for high-dimensional mixed models [0.0]
Variational inference is a popular way to perform such computations, especially in the Bayesian context.
We show that standard mean-field variational inference dramatically underestimates posterior uncertainty in high dimensions.
We then show how appropriately relaxing the mean-field assumption leads to methods whose uncertainty quantification does not deteriorate in high dimensions.
arXiv Detail & Related papers (2023-12-20T16:12:37Z) - Sample Complexity Bounds for Score-Matching: Causal Discovery and
Generative Modeling [82.36856860383291]
We demonstrate that accurate estimation of the score function is achievable by training a standard deep ReLU neural network.
We establish bounds on the error rate of recovering causal relationships using the score-matching-based causal discovery method.
arXiv Detail & Related papers (2023-10-27T13:09:56Z) - Exploring new ways: Enforcing representational dissimilarity to learn
new features and reduce error consistency [1.7497479054352052]
We show that highly dissimilar intermediate representations result in less correlated output predictions and slightly lower error consistency.
With this, we shine first light on the connection between intermediate representations and their impact on the output predictions.
arXiv Detail & Related papers (2023-07-05T14:28:46Z) - Nonparametric Probabilistic Regression with Coarse Learners [1.8275108630751844]
We show that we can compute precise conditional densities with minimal assumptions on the shape or form of the density.
We demonstrate this approach on a variety of datasets and show competitive performance, particularly on larger datasets.
arXiv Detail & Related papers (2022-10-28T16:25:26Z) - Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works for the original QAP without compromising on the matching constraints.
The proposed method is evaluated on three popular benchmarks (Pascal VOC, Willow Object and SPair-71k) and it outperforms all previous state-of-the-art methods on all benchmarks.
arXiv Detail & Related papers (2022-01-05T13:37:27Z) - Denoising Score Matching with Random Fourier Features [11.60130641443281]
We derive an analytical expression for the denoising score matching objective using the kernel exponential family as the model distribution.
The obtained expression explicitly depends on the noise variance, so the validation loss can be straightforwardly used to tune the noise level.
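The way the noise variance enters denoising score matching can be seen in a toy 1-D example. This is the generic DSM construction with a linear score model, not the paper's kernel exponential family expression: for Gaussian noise of standard deviation sigma, the regression target is the noise-kernel score, and the fitted slope approaches the score slope of the noised distribution, $-1/(1+\sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.5                        # noise level, to be tuned via validation loss
x = rng.normal(size=20000)         # data from N(0, 1)
x_noisy = x + sigma * rng.normal(size=x.size)

# Denoising score matching target: the score of the Gaussian noise kernel,
# (x - x_noisy) / sigma^2.  Fit a linear score model s(z) = a * z by least
# squares; the minimiser approximates -1 / (1 + sigma^2), the score slope of
# the noised distribution N(0, 1 + sigma^2).
target = (x - x_noisy) / sigma**2
a = np.sum(x_noisy * target) / np.sum(x_noisy**2)
print(a)   # close to -1 / 1.25 = -0.8
```

Because the fitted objective depends explicitly on sigma, evaluating the same loss on held-out data gives a direct criterion for choosing the noise level, which is the tuning strategy the abstract describes.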
arXiv Detail & Related papers (2021-01-13T18:02:39Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z) - Almost-Matching-Exactly for Treatment Effect Estimation under Network
Interference [73.23326654892963]
We propose a matching method that recovers direct treatment effects from randomized experiments where units are connected in an observed network.
Our method matches units almost exactly on counts of unique subgraphs within their neighborhood graphs.
arXiv Detail & Related papers (2020-03-02T15:21:20Z) - MALTS: Matching After Learning to Stretch [86.84454964051014]
We learn an interpretable distance metric for matching, which leads to substantially higher quality matches.
Our ability to learn flexible distance metrics leads to matches that are interpretable and useful for the estimation of conditional average treatment effects.
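The "stretch" idea amounts to matching under a weighted distance $\|M(x_i - x_j)\|$, where $M$ re-weights covariates by relevance. MALTS learns $M$ from a training split; the sketch below simply fixes hypothetical weights to show the matching step itself:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 4))          # synthetic covariates
t = rng.binomial(1, 0.5, size=100)     # synthetic binary treatment

# Hypothetical learned stretch: the first covariate matters most, the last
# two barely matter.  In MALTS, these weights come from a learning stage.
M = np.diag([2.0, 1.0, 0.1, 0.1])

treated = np.flatnonzero(t == 1)
control = np.flatnonzero(t == 0)

# Pairwise stretched distances between treated and control units, then
# nearest-neighbour matching under that metric.
diffs = (X[treated, None, :] - X[None, control, :]) @ M.T
matches = control[np.argmin(np.linalg.norm(diffs, axis=2), axis=1)]
```

The learned diagonal (or full) matrix is what makes the resulting matches interpretable: the weights state directly which covariates drove each pairing.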
arXiv Detail & Related papers (2018-11-18T22:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.