Learning Inter-Modal Correspondence and Phenotypes from Multi-Modal
Electronic Health Records
- URL: http://arxiv.org/abs/2011.06301v1
- Date: Thu, 12 Nov 2020 10:30:29 GMT
- Title: Learning Inter-Modal Correspondence and Phenotypes from Multi-Modal
Electronic Health Records
- Authors: Kejing Yin, William K. Cheung, Benjamin C. M. Fung, Jonathan Poon
- Abstract summary: We propose cHITF to infer the correspondence between multiple modalities jointly with the phenotype discovery.
Experiments conducted on the real-world MIMIC-III dataset demonstrate that cHITF effectively infers clinically meaningful inter-modal correspondence.
- Score: 15.658012300789148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-negative tensor factorization has been shown to be a practical solution to
automatically discover phenotypes from electronic health records (EHR) with
minimal human supervision. Such methods generally require an input tensor
describing the inter-modal interactions to be pre-established; however, the
correspondence between different modalities (e.g., correspondence between
medications and diagnoses) can often be missing in practice. Although heuristic
methods can be applied to estimate them, they inevitably introduce errors and
lead to sub-optimal phenotype quality. This is particularly problematic for
patients with complex health conditions (e.g., in critical care) as multiple
diagnoses and medications are simultaneously present in the records. To
alleviate this problem and discover phenotypes from EHR with unobserved
inter-modal correspondence, we propose the collective hidden interaction tensor
factorization (cHITF) to infer the correspondence between multiple modalities
jointly with the phenotype discovery. We assume that the observed matrix for
each modality is a marginalization of the unobserved inter-modal correspondence,
which is reconstructed by maximizing the likelihood of the observed matrices.
Extensive experiments conducted on the real-world MIMIC-III dataset demonstrate
that cHITF effectively infers clinically meaningful inter-modal correspondence,
discovers phenotypes that are more clinically relevant and diverse, and
achieves better predictive performance compared with a number of
state-of-the-art computational phenotyping models.
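Below is a minimal, illustrative sketch of the core idea in the abstract, not the authors' released implementation: a hidden patient-by-diagnosis-by-medication interaction tensor is modelled with a rank-R non-negative CP decomposition, the observed diagnosis and medication matrices are treated as its marginalizations, and the factors are fitted by maximizing a Poisson likelihood of those observed matrices. All variable names, shapes, and the synthetic data are assumptions made purely for illustration.

```python
# Illustrative sketch only; hypothetical shapes and synthetic data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pat, n_dx, n_rx, rank = 40, 25, 15, 4

# Synthetic stand-ins for the observed per-modality EHR count matrices.
D_obs = rng.poisson(1.0, size=(n_pat, n_dx)).astype(float)  # diagnoses
M_obs = rng.poisson(1.0, size=(n_pat, n_rx)).astype(float)  # medications

def unpack(theta):
    """Non-negative CP factors via an exponential reparameterization."""
    u, v, w = np.split(theta, [n_pat * rank, (n_pat + n_dx) * rank])
    return (np.exp(u).reshape(n_pat, rank),   # patient factor
            np.exp(v).reshape(n_dx, rank),    # diagnosis factor
            np.exp(w).reshape(n_rx, rank))    # medication factor

def marginals(U, V, W):
    """Marginalize the hidden tensor X[p,d,m] = sum_r U[p,r]V[d,r]W[m,r]
    over each modality without materializing X."""
    D_hat = U @ (V * W.sum(axis=0)).T  # sum over medications
    M_hat = U @ (W * V.sum(axis=0)).T  # sum over diagnoses
    return D_hat, M_hat

def neg_log_lik(theta):
    """Poisson negative log-likelihood of both observed matrices
    (up to additive constants) under the marginalized hidden tensor."""
    U, V, W = unpack(theta)
    D_hat, M_hat = marginals(U, V, W)
    nll_d = np.sum(D_hat - D_obs * np.log(D_hat + 1e-10))
    nll_m = np.sum(M_hat - M_obs * np.log(M_hat + 1e-10))
    return nll_d + nll_m

theta0 = rng.normal(scale=0.1, size=(n_pat + n_dx + n_rx) * rank)
res = minimize(neg_log_lik, theta0, method="L-BFGS-B",
               options={"maxiter": 100})
U, V, W = unpack(res.x)

# Each rank-1 component couples a diagnosis profile V[:, r] with a
# medication profile W[:, r]: a candidate phenotype with inferred
# inter-modal links between the two modalities.
print("final negative log-likelihood:", round(float(res.fun), 2))
```

Each rank-1 component couples a diagnosis profile with a medication profile, which is the sense in which the factorization recovers inter-modal correspondence jointly with candidate phenotypes; the actual cHITF model adds further structure not shown here.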
Related papers
- CTPD: Cross-Modal Temporal Pattern Discovery for Enhanced Multimodal Electronic Health Records Analysis [46.56667527672019]
We introduce a Cross-Modal Temporal Pattern Discovery (CTPD) framework, designed to efficiently extract meaningful cross-modal temporal patterns from multimodal EHR data.
Our approach introduces shared initial temporal pattern representations which are refined using slot attention to generate temporal semantic embeddings.
arXiv Detail & Related papers (2024-11-01T15:54:07Z)
- Deep State-Space Generative Model For Correlated Time-to-Event Predictions [54.3637600983898]
We propose a deep latent state-space generative model to capture the interactions among different types of correlated clinical events.
Our method also uncovers meaningful insights about the latent correlations between mortality and different types of organ failures.
arXiv Detail & Related papers (2024-07-28T02:42:36Z)
- DrFuse: Learning Disentangled Representation for Clinical Multi-Modal Fusion with Missing Modality and Modal Inconsistency [18.291267748113142]
We propose DrFuse to achieve effective clinical multi-modal fusion.
We address the missing modality issue by disentangling the features shared across modalities and those unique within each modality.
We validate the proposed method using real-world large-scale datasets, MIMIC-IV and MIMIC-CXR.
arXiv Detail & Related papers (2024-03-10T12:41:34Z)
- Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition [69.32305810128994]
Incongruity between modalities poses a challenge for multimodal fusion, especially in affect recognition.
We propose the Hierarchical Crossmodal Transformer with Dynamic Modality Gating (HCT-DMG), a lightweight incongruity-aware model.
HCT-DMG: 1) outperforms previous multimodal models with a reduced size of approximately 0.8M parameters; 2) recognizes hard samples where incongruity makes affect recognition difficult; 3) mitigates the incongruity at the latent level in crossmodal attention.
arXiv Detail & Related papers (2023-05-23T01:24:15Z)
- PheME: A deep ensemble framework for improving phenotype prediction from multi-modal data [42.56953523499849]
We present PheME, an Ensemble framework using Multi-modality data of structured EHRs and unstructured clinical notes for accurate Phenotype prediction.
We leverage ensemble learning to combine outputs from single-modal models and multi-modal models to improve phenotype predictions.
arXiv Detail & Related papers (2023-03-19T23:41:04Z)
- T-Phenotype: Discovering Phenotypes of Predictive Temporal Patterns in Disease Progression [82.85825388788567]
We develop a novel temporal clustering method, T-Phenotype, to discover phenotypes of predictive temporal patterns from labeled time-series data.
We show that T-Phenotype achieves the best phenotype discovery performance over all the evaluated baselines.
arXiv Detail & Related papers (2023-02-24T13:30:35Z)
- Unsupervised EHR-based Phenotyping via Matrix and Tensor Decompositions [0.6875312133832078]
We provide a comprehensive review of low-rank approximation-based approaches for computational phenotyping.
Recent developments have adapted low-rank data approximation methods by incorporating different constraints and regularizations that further facilitate interpretability.
arXiv Detail & Related papers (2022-09-01T09:47:27Z)
- Multi-modal Graph Learning for Disease Prediction [35.4310911850558]
We propose an end-to-end Multimodal Graph Learning framework (MMGL) for disease prediction.
Instead of defining the adjacency matrix manually as in existing methods, the latent graph structure is captured through a novel adaptive graph learning approach.
arXiv Detail & Related papers (2021-07-01T03:59:22Z)
- Learning Multimodal VAEs through Mutual Supervision [72.77685889312889]
MEME combines information between modalities implicitly through mutual supervision.
We demonstrate that MEME outperforms baselines on standard metrics across both partial and complete observation schemes.
arXiv Detail & Related papers (2021-06-23T17:54:35Z)
- A Variational Information Bottleneck Approach to Multi-Omics Data Integration [98.6475134630792]
We propose a deep variational information bottleneck (IB) approach for incomplete multi-view observations.
Our method applies the IB framework on marginal and joint representations of the observed views to focus on intra-view and inter-view interactions that are relevant for the target.
Experiments on real-world datasets show that our method consistently achieves gain from data integration and outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2021-02-05T06:05:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.