ELMV: an Ensemble-Learning Approach for Analyzing Electronic Health Records with Significant Missing Values
- URL: http://arxiv.org/abs/2006.14942v2
- Date: Tue, 3 Nov 2020 08:34:59 GMT
- Title: ELMV: an Ensemble-Learning Approach for Analyzing Electronic Health Records with Significant Missing Values
- Authors: Lucas J. Liu, Hongwei Zhang, Jianzhong Di, Jin Chen
- Abstract summary: We propose a novel Ensemble-Learning for Missing Value (ELMV) framework, which introduces an effective approach to construct multiple subsets of the original EHR data with a much lower missing rate.
ELMV has been evaluated on real-world healthcare data for critical feature identification, as well as on a batch of simulation data with different missing rates for outcome prediction.
- Score: 4.9810955364960385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much real-world Electronic Health Record (EHR) data contains a large proportion of missing values. Leaving a substantial portion of the missing information unaddressed usually introduces significant bias, which leads to invalid conclusions. On the other hand, training a machine learning model on a much smaller, nearly complete subset can drastically impact the reliability and accuracy of model inference. Data imputation algorithms that attempt to replace missing data with meaningful values inevitably increase the variability of effect estimates as missingness grows, making them unreliable for hypothesis validation. We propose a novel Ensemble-Learning for Missing Value (ELMV) framework, which introduces an effective approach to constructing multiple subsets of the original EHR data with a much lower missing rate, and mobilizes a dedicated support set for ensemble learning in order to reduce the bias caused by substantial missing values. ELMV has been evaluated on real-world healthcare data for critical feature identification, as well as on a batch of simulation data with different missing rates for outcome prediction. In both experiments, ELMV clearly outperforms conventional missing value imputation methods and ensemble learning models.
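The abstract's core mechanism can be pictured with a short, hypothetical Python sketch: build several nearly-complete views of the EHR table (each with a much lower missing rate than the full table), train one base learner per view, and combine their predictions by majority vote. The greedy subset-selection rule, the `RandomForestClassifier` base learner, and the voting scheme below are illustrative assumptions, not the authors' ELMV implementation, which additionally uses a dedicated support set to reduce bias.

```python
# Minimal sketch (not the authors' code) of the subset-plus-ensemble idea:
# build several nearly-complete subsets of an EHR table, train one model per
# subset, and combine the models by majority vote over binary labels.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def build_low_missing_subsets(df, target, n_subsets=5, max_missing=0.05, seed=0):
    """Greedily grow feature subsets whose joint row-wise missing rate stays low."""
    rng = np.random.default_rng(seed)
    features = [c for c in df.columns if c != target]
    subsets = []
    for _ in range(n_subsets):
        chosen = []
        for f in rng.permutation(features):
            trial = chosen + [f]
            if df[trial].isna().any(axis=1).mean() <= max_missing:
                chosen = trial
        if chosen:
            rows = ~df[chosen].isna().any(axis=1) & df[target].notna()
            subsets.append((chosen, rows))
    return subsets

def fit_ensemble(df, target, subsets):
    """Train one base learner per nearly-complete subset."""
    models = []
    for cols, rows in subsets:
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(df.loc[rows, cols], df.loc[rows, target])
        models.append((cols, clf))
    return models

def predict_majority(models, df):
    """Majority vote over the base learners; residual NaNs are median-filled."""
    votes = [m.predict(df[cols].fillna(df[cols].median())) for cols, m in models]
    return (np.mean(votes, axis=0) >= 0.5).astype(int)
```

The sketch only covers the low-missingness subset construction and the ensemble vote; how ELMV builds and uses its dedicated support set is described in the paper itself.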
Related papers
- M$^3$-Impute: Mask-guided Representation Learning for Missing Value Imputation [12.174699459648842]
M$^3$-Impute aims to explicitly leverage the missingness information and the associated feature correlations with novel masking schemes.
Experiment results show the effectiveness of M$^3$-Impute, which achieves 20 best and 4 second-best MAE scores on average.
arXiv Detail & Related papers (2024-10-11T13:25:32Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Conditional expectation with regularization for missing data imputation [19.254291863337347]
Missing data frequently occurs in datasets across various domains, such as medicine, sports, and finance.
We propose a new algorithm named "conditional Distribution-based Imputation of Missing Values with Regularization" (DIMV).
DIMV operates by determining the conditional distribution of a feature that has missing entries, using the information from the fully observed features as a basis; a toy sketch of this conditional-expectation idea appears after this list.
arXiv Detail & Related papers (2023-02-02T06:59:15Z)
- CEDAR: Communication Efficient Distributed Analysis for Regressions [9.50726756006467]
There is growing interest in distributed learning over multiple EHR databases without sharing patient-level data.
We propose a novel communication efficient method that aggregates the local optimal estimates, by turning the problem into a missing data problem.
We provide theoretical investigation for the properties of the proposed method for statistical inference as well as differential privacy, and evaluate its performance in simulations and real data analyses.
arXiv Detail & Related papers (2022-07-01T09:53:44Z)
- RIFLE: Imputation and Robust Inference from Low Order Marginals [10.082738539201804]
We develop a statistical inference framework for regression and classification in the presence of missing data without imputation.
Our framework, RIFLE, estimates low-order moments of the underlying data distribution with corresponding confidence intervals to learn a distributionally robust model.
Our experiments demonstrate that RIFLE outperforms other benchmark algorithms when the percentage of missing values is high and/or when the number of data points is relatively small.
arXiv Detail & Related papers (2021-09-01T23:17:30Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z)
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt.
arXiv Detail & Related papers (2020-05-09T01:59:13Z)
- Uncertainty-Gated Stochastic Sequential Model for EHR Mortality Prediction [6.170898159041278]
We present a novel variational recurrent network that estimates the distribution of missing variables, updates hidden states, and predicts the possibility of in-hospital mortality.
It is noteworthy that our model can conduct these procedures in a single stream and learn all network parameters jointly in an end-to-end manner.
arXiv Detail & Related papers (2020-03-02T04:41:28Z)
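As referenced in the DIMV entry above, here is a toy, hypothetical sketch of conditional-expectation imputation under a joint-Gaussian assumption, where each missing entry is filled with its regularized conditional mean given the observed entries in the same row. The function name, the pairwise-complete covariance estimate, and the ridge term are illustrative choices, not the DIMV authors' implementation.

```python
# Toy sketch (not the DIMV authors' code) of conditional-expectation imputation:
# assume the features are jointly Gaussian, estimate their mean and covariance
# from the observed values, and fill each missing entry with its regularized
# conditional mean given the observed entries in the same row.
import numpy as np

def conditional_gaussian_impute(X, ridge=1e-2):
    X = np.asarray(X, dtype=float)
    mu = np.nanmean(X, axis=0)
    # Pairwise-complete covariance; pairs with no jointly observed rows become 0.
    cov = np.ma.cov(np.ma.masked_invalid(X), rowvar=False).filled(0.0)
    out = X.copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        if not obs.any():
            out[i, miss] = mu[miss]  # nothing observed in this row: fall back to the mean
            continue
        S_oo = cov[np.ix_(obs, obs)] + ridge * np.eye(int(obs.sum()))  # regularization
        S_mo = cov[np.ix_(miss, obs)]
        # E[x_miss | x_obs] = mu_miss + S_mo @ S_oo^{-1} (x_obs - mu_obs)
        out[i, miss] = mu[miss] + S_mo @ np.linalg.solve(S_oo, row[obs] - mu[obs])
    return out
```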