From Risk Prediction to Risk Factors Interpretation. Comparison of
Neural Networks and Classical Statistics for Dementia Prediction
- URL: http://arxiv.org/abs/2301.06995v1
- Date: Tue, 17 Jan 2023 16:26:17 GMT
- Title: From Risk Prediction to Risk Factors Interpretation. Comparison of
Neural Networks and Classical Statistics for Dementia Prediction
- Authors: C. Huber
- Abstract summary: It is proposed to investigate the onset of a disease D based on several risk factors.
Two classes of techniques are available: classical statistics and artificial intelligence.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is proposed to investigate the onset of a disease D based on
several risk factors, with a specific interest in the occurrence of
Alzheimer's disease. For that purpose, two classes of techniques are
available, whose properties are quite different in terms of interpretation,
which is the focus of this paper: classical statistics based on probabilistic
models, and artificial intelligence (mainly neural networks) based on
optimization algorithms. Both methods are good at prediction, with a
preference for neural networks when the dimension of the potential predictors
is high. But the advantage of classical statistics is cognitive: the role of
each factor is generally summarized in the value of a coefficient that is
highly positive for a harmful factor, close to 0 for an irrelevant one, and
highly negative for a beneficial one.
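As a concrete illustration of that cognitive advantage, a logistic regression exposes exactly such coefficients. The sketch below uses scikit-learn on synthetic data; the three factors and their effect sizes are hypothetical, chosen only to reproduce the harmful / irrelevant / beneficial pattern:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
# Three illustrative risk factors (hypothetical, for demonstration only):
# x1 harmful, x2 irrelevant, x3 beneficial.
X = rng.normal(size=(n, 3))
logit = 1.5 * X[:, 0] + 0.0 * X[:, 1] - 1.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # Bernoulli(sigmoid(logit))

model = LogisticRegression().fit(X, y)
for name, coef in zip(["x1 (harmful)", "x2 (irrelevant)", "x3 (beneficial)"],
                      model.coef_[0]):
    print(f"{name:18s} coefficient = {coef:+.2f}")
# Expected pattern: strongly positive, near zero, strongly negative --
# exactly the interpretable summary the abstract refers to.
```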
Related papers
- Neural Fine-Gray: Monotonic neural networks for competing risks [0.0]
Time-to-event modelling, known as survival analysis, differs from standard regression in that it must handle the censoring of patients who do not experience the event of interest.
This paper leverages constrained monotonic neural networks to model each competing survival distribution.
The effectiveness of the solution is demonstrated on one synthetic and three medical datasets.
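The exact architecture of the paper's constrained monotonic networks is not given in this summary; one standard way to obtain monotonicity, assumed in the sketch below, is to force all weights positive (here via softplus) and use monotone activations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicLinear(nn.Module):
    """Linear layer whose effective weights are forced positive via softplus,
    so a stack of such layers with monotone activations is non-decreasing
    in every input coordinate."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return F.linear(x, F.softplus(self.raw_weight), self.bias)

# A small monotonic network; tanh is monotone, preserving the property.
net = nn.Sequential(MonotonicLinear(1, 16), nn.Tanh(), MonotonicLinear(16, 1))
t = torch.linspace(0, 5, 100).unsqueeze(1)
out = net(t).squeeze()
assert (out.diff() >= -1e-6).all()  # output is non-decreasing in t
```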
arXiv Detail & Related papers (2023-05-11T10:27:59Z)
- Disentangling the Link Between Image Statistics and Human Perception [47.912998421927085]
In the 1950s, Barlow and Attneave hypothesised a link between biological vision and information maximisation.
We show how probability-related factors can be combined to predict human perception via sensitivity of state-of-the-art subjective image quality metrics.
arXiv Detail & Related papers (2023-03-17T10:38:27Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
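One plausible reading of this mechanism, sketched below as an assumption rather than the paper's exact method, is to re-sample each feature map's mean and standard deviation from a Gaussian whose spread is estimated across the batch:

```python
import torch

def perturb_feature_statistics(x, eps=1e-6):
    """x: features of shape (batch, channels, ...). During training,
    re-normalize each sample with a mean/std drawn from a Gaussian whose
    spread is the batch-level variance of those statistics."""
    dims = tuple(range(2, x.dim()))
    mu = x.mean(dim=dims, keepdim=True)                 # (B, C, 1, ...)
    sig = x.var(dim=dims, keepdim=True).add(eps).sqrt()
    # Uncertainty of the statistics themselves, estimated across the batch.
    mu_std = mu.var(dim=0, keepdim=True).add(eps).sqrt()
    sig_std = sig.var(dim=0, keepdim=True).add(eps).sqrt()
    new_mu = mu + torch.randn_like(mu) * mu_std
    new_sig = sig + torch.randn_like(sig) * sig_std
    return (x - mu) / sig * new_sig + new_mu

feats = torch.randn(8, 3, 16, 16)
print(perturb_feature_statistics(feats).shape)  # torch.Size([8, 3, 16, 16])
```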
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
- Improving Prediction of Cognitive Performance using Deep Neural Networks in Sparse Data [2.867517731896504]
We used data from an observational, cohort study, Midlife in the United States (MIDUS) to model executive function and episodic memory measures.
Deep neural network (DNN) models consistently ranked highest in all of the cognitive performance prediction tasks.
arXiv Detail & Related papers (2021-12-28T22:23:08Z)
- Two steps to risk sensitivity [4.974890682815778]
Conditional value-at-risk (CVaR) is a risk measure used for modeling human and animal planning.
We adopt a conventional distributional approach to CVaR in a sequential setting and reanalyze the choices of human decision-makers.
We then consider a further critical property of risk sensitivity, namely time consistency, showing alternatives to this form of CVaR.
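For reference, CVaR at level alpha is the expected outcome over the worst alpha-fraction of cases; a minimal Monte Carlo illustration (the return distribution is invented for the example):

```python
import numpy as np

def cvar(samples, alpha=0.05):
    """Expected value of the worst alpha-fraction of outcomes
    (lower tail), i.e. E[X | X <= VaR_alpha]."""
    var = np.quantile(samples, alpha)   # value-at-risk threshold
    return samples[samples <= var].mean()

rng = np.random.default_rng(0)
returns = rng.normal(loc=1.0, scale=2.0, size=100_000)
print(f"mean    = {returns.mean():.2f}")       # ~ 1.00
print(f"CVaR 5% = {cvar(returns, 0.05):.2f}")  # ~ -3.1, far below the mean
```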
arXiv Detail & Related papers (2021-11-12T16:27:47Z)
- A New Approach for Interpretability and Reliability in Clinical Risk Prediction: Acute Coronary Syndrome Scenario [0.33927193323747895]
We intend to create a new risk assessment methodology that combines the best characteristics of both risk score and machine learning models.
The proposed approach achieved testing results identical to the standard LR, but offers superior interpretability and personalization.
The reliability estimates for individual predictions showed a strong correlation with the misclassification rate.
arXiv Detail & Related papers (2021-10-15T19:33:46Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
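A toy sketch of the NCM idea, with all architecture choices assumed: each variable gets a small network fed by its parents and an exogenous noise term, and an intervention replaces a parent's value:

```python
import torch
import torch.nn as nn

class Mechanism(nn.Module):
    """f_V(parents, U_V): a small MLP standing in for one structural equation."""
    def __init__(self, n_parents):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_parents + 1, 16),
                                 nn.ReLU(), nn.Linear(16, 1))
    def forward(self, parents, u):
        return self.net(torch.cat([parents, u], dim=-1))

f_x = Mechanism(n_parents=0)   # X := f_x(U_x)
f_y = Mechanism(n_parents=1)   # Y := f_y(X, U_y)

n = 32
u_x, u_y = torch.randn(n, 1), torch.randn(n, 1)
x = f_x(torch.empty(n, 0), u_x)     # sample X from its mechanism
y_obs = f_y(x, u_y)                 # observational Y
y_do = f_y(torch.ones(n, 1), u_y)   # interventional Y under do(X = 1)
```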
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
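The two EHR-specific positive sampling strategies are not detailed in this summary; below is only a generic InfoNCE-style contrastive term into which any positive-pair sampler could be plugged:

```python
import torch
import torch.nn.functional as F

def info_nce(z_anchor, z_positive, temperature=0.1):
    """Generic InfoNCE loss: each anchor should match its own positive
    (e.g. another encounter of the same patient) against all others."""
    za = F.normalize(z_anchor, dim=1)
    zp = F.normalize(z_positive, dim=1)
    logits = za @ zp.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(za.size(0))   # diagonal entries are true pairs
    return F.cross_entropy(logits, targets)

z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
loss = info_nce(z1, z2)  # would be added to the classification loss
```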
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- Variational Bayes Neural Network: Posterior Consistency, Classification Accuracy and Computational Challenges [0.3867363075280544]
This paper develops a variational Bayesian neural network estimation methodology and related statistical theory.
The development is motivated by an important biomedical engineering application, namely building predictive tools for the transition from mild cognitive impairment to Alzheimer's disease.
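The usual building block of such a method is a mean-field Gaussian variational layer trained with the reparameterization trick; the sketch below is generic, not the paper's exact construction:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Mean-field Gaussian posterior q(w) = N(mu, sigma^2); the forward
    pass samples weights via the reparameterization trick."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # w ~ q(w)
        return x @ w.t()

    def kl_to_standard_normal(self):
        """KL(q(w) || N(0, I)), the complexity term of the ELBO."""
        sigma = F.softplus(self.rho)
        return 0.5 * (sigma**2 + self.mu**2 - 1 - 2 * sigma.log()).sum()

layer = BayesLinear(10, 2)
out = layer(torch.randn(4, 10))              # stochastic prediction
elbo_penalty = layer.kl_to_standard_normal()
```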
arXiv Detail & Related papers (2020-11-19T00:11:27Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
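The classical doubly robust (AIPW) estimator behind such approaches combines an outcome model with a propensity model and stays consistent if either one is correctly specified; a worked numpy version, with oracle nuisance values standing in for fitted models:

```python
import numpy as np

def aipw_ate(y, t, mu0, mu1, e):
    """Augmented inverse-propensity-weighted estimate of the average
    treatment effect. mu0/mu1: outcome-model predictions under control/
    treatment; e: estimated propensity P(T=1 | X)."""
    return np.mean(mu1 - mu0
                   + t * (y - mu1) / e
                   - (1 - t) * (y - mu0) / (1 - e))

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
e_true = 1 / (1 + np.exp(-x))            # confounded treatment assignment
t = (rng.random(n) < e_true).astype(float)
y = 2.0 * t + x + rng.normal(size=n)     # true effect = 2.0

# Pretend these came from fitted nuisance models:
print(aipw_ate(y, t, mu0=x, mu1=x + 2.0, e=e_true))  # ~ 2.0
```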
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
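The paper's lower-complexity attack is not described in this summary; for context, the well-known fast gradient sign method (FGSM) below shows the kind of perturbation such bias/variance analyses consider:

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, epsilon=0.03):
    """Fast gradient sign method: one signed-gradient step of size epsilon
    on the input, i.e. an L-infinity perturbation of radius epsilon."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y)  # adversarial inputs for bias/variance analysis
```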
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.