C-HDNet: A Fast Hyperdimensional Computing Based Method for Causal Effect Estimation from Networked Observational Data
- URL: http://arxiv.org/abs/2501.16562v1
- Date: Mon, 27 Jan 2025 23:12:18 GMT
- Title: C-HDNet: A Fast Hyperdimensional Computing Based Method for Causal Effect Estimation from Networked Observational Data
- Authors: Abhishek Dalvi, Neil Ashtekar, Vasant Honavar
- Abstract summary: We consider the problem of estimating causal effects from observational data in the presence of network confounding.
We propose a novel matching technique which leverages hyperdimensional computing to model network information and improve predictive performance.
- Score: 2.048226951354646
- Abstract: We consider the problem of estimating causal effects from observational data in the presence of network confounding. In this context, an individual's treatment assignment and outcomes may be affected by their neighbors within the network. We propose a novel matching technique which leverages hyperdimensional computing to model network information and improve predictive performance. We present results of extensive experiments which show that the proposed method outperforms or is competitive with the state-of-the-art methods for causal effect estimation from network data, including advanced computationally demanding deep learning methods. Further, our technique benefits from simplicity and speed, with roughly an order of magnitude lower runtime compared to state-of-the-art methods, while offering similar causal effect estimation error rates.
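To make the core idea concrete, here is a minimal sketch of hyperdimensional-computing-based matching: project covariates into a high-dimensional bipolar space, bundle each unit's hypervector with its neighbors' to encode the network, and match treated units to their nearest controls by Hamming distance. The dimensionality, encoding scheme, matching rule, and synthetic data below are illustrative assumptions, not the paper's C-HDNet implementation.

```python
# Illustrative HDC-style matching estimator (assumptions, not C-HDNet itself).
import numpy as np

rng = np.random.default_rng(0)
n, d, D = 200, 5, 4096            # units, covariate dim, hypervector dim (assumed)

# Toy networked observational data, synthetic and for illustration only.
X = rng.normal(size=(n, d))                       # covariates
A = (rng.random((n, n)) < 0.05).astype(float)     # random graph
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
T = rng.binomial(1, 0.5, size=n)                  # treatment assignment
Y = X[:, 0] + 2 * T + (A @ X[:, 0]) * 0.1 + rng.normal(scale=0.1, size=n)

# 1) Encode: random bipolar projection of covariates into HD space, then
#    bundle each unit with its neighbors to fold in network information.
P = rng.choice([-1.0, 1.0], size=(D, d))          # random bipolar basis
H_self = X @ P.T                                  # (n, D) projections
H = np.sign(H_self + A @ H_self)                  # neighbor bundling + binarize

# 2) Match: each treated unit to its nearest control in Hamming distance.
treated, control = np.where(T == 1)[0], np.where(T == 0)[0]
ham = (D - H[treated] @ H[control].T) / 2         # pairwise Hamming distances
match = control[np.argmin(ham, axis=1)]

# 3) Estimate the average treatment effect on the treated (ATT).
att = np.mean(Y[treated] - Y[match])
print(f"ATT estimate: {att:.3f} (true effect: 2.0)")
```

Matching in a binarized hypervector space keeps the whole pipeline to matrix products and integer distances, which is consistent with the runtime advantage the abstract claims, though the paper's actual encoding may differ.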
Related papers
- Doubly Robust Causal Effect Estimation under Networked Interference via Targeted Learning [24.63284452991301]
We propose a doubly robust causal effect estimator under networked interference.
Specifically, we generalize the targeted learning technique into the networked interference setting.
We devise an end-to-end causal effect estimator by transforming the identified theoretical condition into a targeted loss.
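For orientation, the sketch below shows the classic AIPW doubly robust estimator in the standard no-interference setting; the paper generalizes double robustness to networked interference via targeted learning, which this toy example does not capture. The synthetic data and nuisance models are assumptions for illustration.

```python
# Minimal AIPW (doubly robust) estimator, i.i.d. setting only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
propensity = 1 / (1 + np.exp(-X[:, 0]))          # true treatment model
T = rng.binomial(1, propensity)
Y = X[:, 0] + 1.5 * T + rng.normal(size=n)       # true effect = 1.5

# Nuisance models: two outcome regressions and one propensity model.
mu1 = LinearRegression().fit(X[T == 1], Y[T == 1])
mu0 = LinearRegression().fit(X[T == 0], Y[T == 0])
e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

m1, m0 = mu1.predict(X), mu0.predict(X)
# AIPW: outcome-model prediction plus inverse-propensity-weighted residuals.
# Consistent if either the outcome or the propensity model is correct.
psi = m1 - m0 + T * (Y - m1) / e - (1 - T) * (Y - m0) / (1 - e)
print(f"AIPW ATE estimate: {psi.mean():.3f} (true: 1.5)")
```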
arXiv Detail & Related papers (2024-05-06T10:49:51Z)
- Neural Networks with Causal Graph Constraints: A New Approach for Treatment Effects Estimation [0.951494089949975]
We present a new model, NN-CGC, that considers additional information from the causal graph.
We show that our method is robust to imperfect causal graphs and that using partial causal information is preferable to ignoring it.
arXiv Detail & Related papers (2024-04-18T14:57:17Z)
- Simple Ingredients for Offline Reinforcement Learning [86.1988266277766]
Offline reinforcement learning algorithms have proven effective on datasets highly connected to the target downstream task.
We show that existing methods struggle with diverse data: their performance considerably deteriorates as data collected for related but different tasks is simply added to the offline buffer.
We show that scale, more than algorithmic considerations, is the key factor influencing performance.
arXiv Detail & Related papers (2024-03-19T18:57:53Z)
- Graph Machine Learning based Doubly Robust Estimator for Network Causal Effects [17.44202934049009]
We propose a novel methodology that combines graph machine learning approaches with the double machine learning framework.
We demonstrate our method is accurate, robust, and scalable via an extensive simulation study.
arXiv Detail & Related papers (2024-03-17T20:23:42Z)
- Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation [137.3520153445413]
A notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference.
We evaluate seven established baseline causal discovery methods including a newly proposed method based on GFlowNets.
The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes.
arXiv Detail & Related papers (2023-07-11T02:58:10Z)
- Causal Inference from Small High-dimensional Datasets [7.1894784995284144]
Causal-Batle is a methodology to estimate treatment effects in small high-dimensional datasets.
We adopt an approach that brings transfer learning techniques into causal inference.
arXiv Detail & Related papers (2022-05-19T02:04:01Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
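As a rough illustration of the sketching idea (not the paper's algorithm), the snippet below compresses a wide feature matrix with a Gaussian random projection before solving a ridge system and forming a Gaussian predictive interval; all dimensions and the interval formula are illustrative assumptions.

```python
# Matrix sketching for a cheap approximate prediction interval (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 500, 5000, 100          # samples, feature dim, sketch dim (assumed)

Phi = rng.normal(size=(n, p)) / np.sqrt(p)      # stand-in for learned features
w = rng.normal(size=p)
y = Phi @ w + rng.normal(scale=0.1, size=n)

S = rng.normal(size=(p, k)) / np.sqrt(k)        # Gaussian sketching matrix
Z = Phi @ S                                     # sketched features, (n, k)

lam, sigma2 = 1e-3, 0.1 ** 2
G = Z.T @ Z + lam * np.eye(k)                   # k x k system instead of p x p
w_hat = np.linalg.solve(G, Z.T @ y)

z_new = (rng.normal(size=p) / np.sqrt(p)) @ S   # sketched test point
mean = z_new @ w_hat
var = sigma2 * (1 + z_new @ np.linalg.solve(G, z_new))  # predictive variance
lo, hi = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)
print(f"approximate 95% interval: [{lo:.3f}, {hi:.3f}]")
```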
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Graph Infomax Adversarial Learning for Treatment Effect Estimation with Networked Observational Data [9.08763820415824]
We propose a Graph Infomax Adversarial Learning (GIAL) model for treatment effect estimation, which makes full use of the network structure to capture more information.
We evaluate the performance of our GIAL model on two benchmark datasets, and the results demonstrate superiority over the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-05T12:30:14Z)
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)