Teaching deep learning causal effects improves predictive performance
- URL: http://arxiv.org/abs/2011.05466v1
- Date: Wed, 11 Nov 2020 00:01:14 GMT
- Title: Teaching deep learning causal effects improves predictive performance
- Authors: Jia Li, Xiaowei Jia, Haoyu Yang, Vipin Kumar, Michael Steinbach,
Gyorgy Simon
- Abstract summary: We describe a Causal-Temporal Structure for temporal EHR data; then based on this structure, we estimate sequential ITE along the timeline.
We propose a knowledge-guided neural network methodology to incorporate estimated ITE.
- Score: 18.861884489332894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference is a powerful statistical methodology for explanatory
analysis and individualized treatment effect (ITE) estimation, a prominent
causal inference task that has become a fundamental research problem. ITE
estimation, when performed naively, tends to produce biased estimates. To
obtain unbiased estimates, counterfactual information is needed, which is not
directly observable from the data. Reliable traditional methods for estimating
ITE exist, grounded in mature domain knowledge. In recent years, neural networks
have been widely used in clinical studies. Specifically, recurrent neural
networks (RNN) have been applied to temporal Electronic Health Records (EHR)
data analysis. However, RNNs are not guaranteed to automatically discover
causal knowledge, correctly estimate counterfactual information, and thus
correctly estimate the ITE. This lack of correct ITE estimates can hinder the
performance of the model. In this work we study whether RNNs can be guided to
correctly incorporate ITE-related knowledge and whether this improves
predictive performance. Specifically, we first describe a Causal-Temporal
Structure for temporal EHR data; then based on this structure, we estimate
sequential ITE along the timeline, using sequential Propensity Score Matching
(PSM); and finally, we propose a knowledge-guided neural network methodology to
incorporate estimated ITE. We demonstrate on real-world and synthetic data
(where the actual ITEs are known) that the proposed methodology can
significantly improve the predictive performance of RNNs.
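As a rough illustration of the matching step, the sketch below estimates ITEs with one-to-one propensity score matching (PSM) on a synthetic cross-sectional dataset. This is only a minimal, hypothetical example: the paper applies PSM sequentially along the EHR timeline using its Causal-Temporal Structure, and all names and data here are illustrative, not from the paper.

```python
# Minimal propensity score matching (PSM) sketch for ITE estimation.
# All function names and data are hypothetical illustrations.
import numpy as np

def propensity_scores(X, t, lr=0.1, epochs=500):
    """Fit logistic regression P(treated | X) by plain gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = p - t
        w -= lr * (X.T @ grad) / len(t)
        b -= lr * grad.mean()
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def matched_ite(X, t, y):
    """Match each treated unit to the control unit with the
    nearest propensity score; the outcome difference is its ITE estimate."""
    ps = propensity_scores(X, t)
    treated = np.where(t == 1)[0]
    control = np.where(t == 0)[0]
    ites = []
    for i in treated:
        j = control[np.argmin(np.abs(ps[control] - ps[i]))]
        ites.append(y[i] - y[j])
    return np.array(ites)

# Toy data with a known treatment effect of 2.0 and confounded assignment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
t = (rng.random(200) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * t + rng.normal(scale=0.1, size=200)

# The average matched difference should land near the true effect of 2.0.
print(matched_ite(X, t, y).mean())
```

In the paper's setting this matching would be repeated at each time step, and the resulting ITE estimates would then be supplied to the RNN as guidance rather than left for the network to discover on its own.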
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
- Sparse Deep Learning for Time Series Data: Theory and Applications [9.878774148693575]
Sparse deep learning has become a popular technique for improving the performance of deep neural networks.
This paper studies the theory for sparse deep learning with dependent data.
Our results indicate that the proposed method can consistently identify the autoregressive order for time series data.
arXiv Detail & Related papers (2023-10-05T01:26:13Z)
- Instance-based Learning with Prototype Reduction for Real-Time Proportional Myocontrol: A Randomized User Study Demonstrating Accuracy-preserving Data Reduction for Prosthetic Embedded Systems [0.0]
This work presents the design, implementation and validation of learning techniques based on the kNN scheme for gesture detection in prosthetic control.
The influence of parameterization and varying proportionality schemes is analyzed, utilizing an eight-channel-sEMG armband.
arXiv Detail & Related papers (2023-08-21T20:15:35Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- Confidence-Nets: A Step Towards better Prediction Intervals for regression Neural Networks on small datasets [0.0]
We propose an ensemble method that attempts to estimate the uncertainty of predictions, increase their accuracy and provide an interval for the expected variation.
The proposed method is tested on various datasets, and a significant improvement in the performance of the neural network model is seen.
arXiv Detail & Related papers (2022-10-31T06:38:40Z)
- Probabilistic AutoRegressive Neural Networks for Accurate Long-range Forecasting [6.295157260756792]
We introduce the Probabilistic AutoRegressive Neural Networks (PARNN).
PARNN is capable of handling complex time series data exhibiting non-stationarity, nonlinearity, non-seasonality, long-range dependence, and chaotic patterns.
We evaluate the performance of PARNN against standard statistical, machine learning, and deep learning models, including Transformers, NBeats, and DeepAR.
arXiv Detail & Related papers (2022-04-01T17:57:36Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future DeepSNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.