A Unified Model for Longitudinal Multi-Modal Multi-View Prediction with Missingness
- URL: http://arxiv.org/abs/2403.12211v2
- Date: Fri, 22 Mar 2024 00:17:11 GMT
- Title: A Unified Model for Longitudinal Multi-Modal Multi-View Prediction with Missingness
- Authors: Boqi Chen, Junier Oliva, Marc Niethammer
- Abstract summary: We introduce a unified model for longitudinal multi-modal multi-view prediction with missingness.
Our method accepts as many input timepoints as desired and aims to leverage all available data, regardless of which modalities or views are present at each timepoint.
We conduct extensive experiments on the knee osteoarthritis dataset from the Osteoarthritis Initiative for pain and Kellgren-Lawrence grade prediction at a future timepoint.
- Score: 25.95298616599799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical records often consist of different modalities, such as images, text, and tabular information. Integrating all modalities offers a holistic view of a patient's condition, while analyzing them longitudinally provides a better understanding of disease progression. However, real-world longitudinal medical records present challenges: 1) patients may lack some or all of the data for a specific timepoint, and 2) certain modalities or views might be absent for all patients during a particular period. In this work, we introduce a unified model for longitudinal multi-modal multi-view prediction with missingness. Our method accepts as many input timepoints as desired and aims to leverage all available data, regardless of which modalities or views are present at each timepoint. We conduct extensive experiments on the knee osteoarthritis dataset from the Osteoarthritis Initiative for pain and Kellgren-Lawrence grade prediction at a future timepoint. We demonstrate the effectiveness of our method by comparing results from our unified model to specific models that use the same modality and view combinations during training and evaluation. We also show the benefit of having extended temporal data and provide post-hoc analysis for a deeper understanding of each modality/view's importance for different tasks.
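One common way to handle the missingness the abstract describes (a minimal sketch only, with hypothetical names; not the paper's actual architecture) is to encode each available modality separately and fuse only the present ones, e.g. by masked averaging of their embeddings:

```python
# Hypothetical sketch: fuse per-modality embeddings while tolerating
# missing modalities by averaging only the vectors that are present.
# Modality names and shapes are illustrative, not from the paper.

def fuse_embeddings(embeddings):
    """Masked mean over available modality embeddings.

    embeddings: dict mapping modality name -> feature vector
                (list of floats), or None when that modality is
                missing for this patient/timepoint.
    """
    available = [v for v in embeddings.values() if v is not None]
    if not available:
        raise ValueError("at least one modality must be present")
    dim = len(available[0])
    # Average element-wise over only the present modalities.
    return [sum(vec[i] for vec in available) / len(available)
            for i in range(dim)]

# Example visit: MRI and tabular embeddings present, X-ray missing.
visit = {
    "mri": [1.0, 2.0],
    "xray": None,
    "tabular": [3.0, 4.0],
}
print(fuse_embeddings(visit))  # [2.0, 3.0]
```

In practice, transformer-style models achieve the same effect with attention masks over modality/timepoint tokens, which additionally lets present modalities be weighted rather than averaged uniformly.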
Related papers
- DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data [0.0]
Real-life medical data is often multimodal and incomplete, fueling the need for advanced deep learning models.
We introduce DRIM, a new method for capturing shared and unique representations, despite data sparsity.
Our method outperforms state-of-the-art algorithms on glioma patients survival prediction tasks, while being robust to missing modalities.
arXiv Detail & Related papers (2024-09-25T16:13:57Z)
- DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency [18.291267748113142]
We propose DrFuse to achieve effective clinical multi-modal fusion.
We address the missing modality issue by disentangling the features shared across modalities and those unique within each modality.
We validate the proposed method using real-world large-scale datasets, MIMIC-IV and MIMIC-CXR.
arXiv Detail & Related papers (2024-03-10T12:41:34Z)
- A Two-stream Convolutional Network for Musculoskeletal and Neurological Disorders Prediction [14.003588854239544]
Musculoskeletal and neurological disorders are the most common causes of walking problems among older people.
Recent deep learning-based methods have shown promising results for automated analysis.
arXiv Detail & Related papers (2022-08-18T14:32:16Z)
- Bridging the Gap Between Patient-specific and Patient-independent Seizure Prediction via Knowledge Distillation [7.2666838978096875]
Existing approaches typically train models in a patient-specific fashion due to the highly personalized characteristics of epileptic signals.
Knowledge is first distilled from a patient-independent model; a patient-specific model can then be obtained with the help of the distilled knowledge and additional personalized data.
Five state-of-the-art seizure prediction methods are trained on the CHB-MIT sEEG database with our proposed scheme.
arXiv Detail & Related papers (2022-02-25T10:30:29Z)
- Multi-task longitudinal forecasting with missing values on Alzheimer's Disease [4.5855304767722185]
We propose a framework using the recently presented SSHIBA model for jointly learning different tasks on longitudinal data with missing values.
The method uses Bayesian variational inference to impute missing values and combine information of several views.
We apply this model to predict together diagnosis, ventricle volume, and clinical scores in dementia.
arXiv Detail & Related papers (2022-01-13T16:02:35Z)
- MIMO: Mutual Integration of Patient Journey and Medical Ontology for Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed to specific information systems that make the same information available under different modalities.
This offers unique opportunities to obtain and use at train-time those multiple views of the same information that might not always be available at test-time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
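A standard trick for learning representations that survive modality dropping at test time (a sketch of the general idea only, with hypothetical names; not CMIM's specific objective) is modality dropout: randomly hiding modalities during training so the model cannot over-rely on any single input:

```python
import random

def drop_modalities(sample, p_drop=0.3, rng=random):
    """Randomly hide modalities during training, keeping at least one,
    so the model learns to predict from any available subset.

    sample: dict mapping modality name -> data (or None if already
            missing). Illustrative augmentation, not a specific paper's
            training procedure.
    """
    kept = dict(sample)
    for name in [k for k, v in sample.items() if v is not None]:
        still_present = sum(v is not None for v in kept.values())
        # Never drop the last remaining modality.
        if still_present > 1 and rng.random() < p_drop:
            kept[name] = None
    return kept

# Example: with p_drop=1.0 every modality but one is hidden.
augmented = drop_modalities(
    {"image": [1.0], "text": [2.0], "tabular": None}, p_drop=1.0
)
```

Training on such augmented samples exposes the model to the same missingness patterns it will face at test time, which is one way to obtain the resilience described above.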
arXiv Detail & Related papers (2020-10-20T20:05:35Z)
- BiteNet: Bidirectional Temporal Encoder Network to Predict Medical Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z)
- Trajectories, bifurcations and pseudotime in large clinical datasets: applications to myocardial infarction and diabetes data [94.37521840642141]
We suggest a semi-supervised methodology for the analysis of large clinical datasets, characterized by mixed data types and missing values.
The methodology is based on application of elastic principal graphs which can address simultaneously the tasks of dimensionality reduction, data visualization, clustering, feature selection and quantifying the geodesic distances (pseudotime) in partially ordered sequences of observations.
arXiv Detail & Related papers (2020-07-07T21:04:55Z)
- Temporal Phenotyping using Deep Predictive Clustering of Disease Progression [97.88605060346455]
We develop a deep learning approach for clustering time-series data, where each cluster comprises patients who share similar future outcomes of interest.
Experiments on two real-world datasets show that our model achieves superior clustering performance over state-of-the-art benchmarks.
arXiv Detail & Related papers (2020-06-15T20:48:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.