Personalized pathology test for Cardio-vascular disease: Approximate
Bayesian computation with discriminative summary statistics learning
- URL: http://arxiv.org/abs/2010.06465v2
- Date: Wed, 9 Feb 2022 22:05:37 GMT
- Authors: Ritabrata Dutta, Karim Zouaoui-Boudjeltia, Christos Kotsalos,
Alexandre Rousseau, Daniel Ribeiro de Sousa, Jean-Marc Desmet, Alain Van
Meerhaeghe, Antonietta Mira, Bastien Chopard
- Abstract summary: We propose a platelet deposition model and an inferential scheme to estimate its biologically meaningful parameters using approximate Bayesian computation.
This work opens up an unprecedented opportunity for personalized pathology testing for CVD detection and medical treatment.
- Score: 48.7576911714538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cardio/cerebrovascular diseases (CVD) have become one of the major health
issues in our societies. Recent studies show that present pathology tests to
detect CVD are ineffectual: they do not consider the different stages of
platelet activation or the molecular dynamics involved in platelet interactions,
and they are incapable of accounting for inter-individual variability. Here we
propose a stochastic platelet deposition model and an inferential scheme to
estimate the biologically meaningful model parameters using approximate Bayesian
computation with a summary statistic that maximally discriminates between
different types of patients. Parameters inferred from data collected on healthy
volunteers and different patient types help us identify specific biological
parameters, and hence the biological reasoning behind the dysfunction, for each
type of patient. This work opens up an unprecedented opportunity for
personalized pathology testing for CVD detection and medical treatment.
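The inference scheme described in the abstract can be illustrated with a minimal rejection-ABC sketch. This is a toy stand-in, not the paper's actual platelet deposition model: the simulator, the feature choice (sample mean and standard deviation), and the Fisher-style linear projection used as the learned discriminative summary are all placeholder assumptions made for illustration.

```python
# Illustrative rejection ABC with a learned discriminative summary statistic.
# The simulator below is a toy Gaussian model, NOT the paper's stochastic
# platelet deposition model; the summary-learning step is a simple
# Fisher-style projection standing in for the paper's classifier-based one.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    """Toy stand-in for the stochastic platelet deposition model."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def fit_discriminative_summary(data_a, data_b):
    """Learn a 1-D projection of simple per-sample features that separates
    two patient groups (placeholder for discriminative summary learning)."""
    feats = lambda xs: np.array([[x.mean(), x.std()] for x in xs])
    fa, fb = feats(data_a), feats(data_b)
    # Fisher-style direction: pooled-covariance-whitened mean difference.
    w = np.linalg.pinv(np.cov(np.vstack([fa, fb]).T)) @ (fa.mean(0) - fb.mean(0))
    return lambda x: float(np.array([x.mean(), x.std()]) @ w)

def rejection_abc(observed, summary, prior_draw, n_sims=2000, quantile=0.05):
    """Keep prior draws whose simulated summary lies closest to the observed
    summary; the distance threshold is set by an acceptance quantile."""
    s_obs = summary(observed)
    thetas = np.array([prior_draw() for _ in range(n_sims)])
    dists = np.array([abs(summary(simulator(t)) - s_obs) for t in thetas])
    eps = np.quantile(dists, quantile)
    return thetas[dists <= eps]

# Two synthetic "patient groups" used to train the summary statistic.
group_a = [simulator(0.0) for _ in range(20)]
group_b = [simulator(2.0) for _ in range(20)]
summary = fit_discriminative_summary(group_a, group_b)

observed = simulator(1.0)  # pretend patient measurement
posterior = rejection_abc(observed, summary,
                          prior_draw=lambda: rng.uniform(-3.0, 5.0))
print(len(posterior), round(float(posterior.mean()), 2))
```

The accepted draws form an approximate posterior sample over the model parameter; in the paper's setting, the analogous posterior over biologically meaningful parameters is what supports patient-specific interpretation.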
Related papers
- Deep State-Space Generative Model For Correlated Time-to-Event Predictions [54.3637600983898]
We propose a deep latent state-space generative model to capture the interactions among different types of correlated clinical events.
Our method also uncovers meaningful insights about the latent correlations between mortality and different types of organ failure.
arXiv Detail & Related papers (2024-07-28T02:42:36Z) - Can Machine Learning Assist in Diagnosis of Primary Immune Thrombocytopenia? A feasibility study [12.4123972735841]
Primary immune thrombocytopenia (ITP) is a rare autoimmune disease characterised by immune-mediated destruction of peripheral blood platelets.
There is no established test to confirm the disease and no biomarker with which one can predict the response to treatment and outcome.
We conduct a feasibility study to check if machine learning can be applied effectively for diagnosis of ITP using routine blood tests and demographic data in a non-acute outpatient setting.
arXiv Detail & Related papers (2024-05-31T01:04:46Z) - Simulation-based Inference for Cardiovascular Models [57.92535897767929]
We use simulation-based inference to solve the inverse problem of mapping waveforms back to plausible physiological parameters.
We perform an in-silico uncertainty analysis of five biomarkers of clinical interest.
We study the gap between in-vivo and in-silico data using the MIMIC-III waveform database.
arXiv Detail & Related papers (2023-07-26T02:34:57Z) - Mining Themes in Clinical Notes to Identify Phenotypes and to Predict
Length of Stay in Patients admitted with Heart Failure [3.350712823657887]
Heart failure is a syndrome which occurs when the heart is not able to pump blood and oxygen to support other organs in the body.
Identifying the underlying themes in the diagnostic codes and procedure reports of patients admitted for heart failure could reveal the clinical phenotypes associated with heart failure.
arXiv Detail & Related papers (2023-05-30T19:30:40Z) - ExBEHRT: Extended Transformer for Electronic Health Records to Predict
Disease Subtypes & Progressions [0.0]
We introduce ExBEHRT, an extended version of BEHRT (BERT applied to electronic health records).
We extend the feature space to several multimodal records, namely demographics, clinical characteristics, vital signs, smoking status, diagnoses, procedures, medications, and laboratory tests.
We show that additional features significantly improve model performance for various downstream tasks in different diseases.
arXiv Detail & Related papers (2023-03-22T08:03:27Z) - Adversarial Sample Enhanced Domain Adaptation: A Case Study on
Predictive Modeling with Electronic Health Records [57.75125067744978]
We propose a data augmentation method to facilitate domain adaptation: adversarially generated samples are used during adaptation.
Results confirm the effectiveness of our method and its generality across different tasks.
arXiv Detail & Related papers (2021-01-13T03:20:20Z) - Mixture Model Framework for Traumatic Brain Injury Prognosis Using
Heterogeneous Clinical and Outcome Data [3.7363119896212478]
We develop a method for modeling large heterogeneous data types relevant to TBI.
The model is trained on a dataset encompassing a variety of data types, including demographics, blood-based biomarkers, and imaging findings.
It is used to stratify patients into distinct groups in an unsupervised learning setting.
arXiv Detail & Related papers (2020-12-22T19:31:03Z) - Cardiac Cohort Classification based on Morphologic and Hemodynamic
Parameters extracted from 4D PC-MRI Data [6.805476759441964]
We investigate the potential of morphological and hemodynamic characteristics, extracted from measured blood flow data in the aorta, for the classification of heart-healthy volunteers and patients with bicuspid aortic valve (BAV).
In our experiments, we use several feature selection methods and classification algorithms to train separate models for the healthy subgroups and BAV patients.
arXiv Detail & Related papers (2020-10-12T11:36:04Z) - Trajectories, bifurcations and pseudotime in large clinical datasets:
applications to myocardial infarction and diabetes data [94.37521840642141]
We suggest a semi-supervised methodology for the analysis of large clinical datasets, characterized by mixed data types and missing values.
The methodology is based on application of elastic principal graphs which can address simultaneously the tasks of dimensionality reduction, data visualization, clustering, feature selection and quantifying the geodesic distances (pseudotime) in partially ordered sequences of observations.
arXiv Detail & Related papers (2020-07-07T21:04:55Z) - Learning Dynamic and Personalized Comorbidity Networks from Event Data
using Deep Diffusion Processes [102.02672176520382]
Comorbid diseases co-occur and progress via complex temporal patterns that vary among individuals.
In electronic health records we can observe the different diseases a patient has, but can only infer the temporal relationship between each co-morbid condition.
We develop deep diffusion processes to model "dynamic comorbidity networks".
arXiv Detail & Related papers (2020-01-08T15:47:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.