ChronoFormer: Time-Aware Transformer Architectures for Structured Clinical Event Modeling
- URL: http://arxiv.org/abs/2504.07373v1
- Date: Thu, 10 Apr 2025 01:25:41 GMT
- Title: ChronoFormer: Time-Aware Transformer Architectures for Structured Clinical Event Modeling
- Authors: Yuanyun Zhang, Shi Li
- Abstract summary: This paper proposes ChronoFormer, an innovative transformer-based architecture specifically designed to encode and leverage temporal dependencies in longitudinal patient data. Extensive experiments conducted on three benchmark tasks (mortality prediction, readmission prediction, and long-term comorbidity onset) demonstrate substantial improvements over current state-of-the-art methods.
- Score: 3.663763802269743
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The temporal complexity of electronic health record (EHR) data presents significant challenges for predicting clinical outcomes using machine learning. This paper proposes ChronoFormer, an innovative transformer-based architecture specifically designed to encode and leverage temporal dependencies in longitudinal patient data. ChronoFormer integrates temporal embeddings, hierarchical attention mechanisms, and domain-specific masking techniques. Extensive experiments conducted on three benchmark tasks (mortality prediction, readmission prediction, and long-term comorbidity onset) demonstrate substantial improvements over current state-of-the-art methods. Furthermore, detailed analyses of attention patterns underscore ChronoFormer's capability to capture clinically meaningful long-range temporal relationships.
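As a rough illustration of the temporal-embedding component the abstract describes (a minimal sketch only; the sinusoidal form, dimensions, and function names below are assumptions, not ChronoFormer's published parameterization), continuous event timestamps can drive a positional-encoding-style map that is added to learned code embeddings before self-attention:

```python
import numpy as np

def sinusoidal_time_embedding(timestamps, dim):
    """Map continuous event times (e.g., days since admission) to
    sinusoidal embeddings, analogous to positional encodings but
    driven by real timestamps instead of sequence indices.
    `dim` must be even."""
    timestamps = np.asarray(timestamps, dtype=np.float64)[:, None]  # (T, 1)
    freqs = 1.0 / (10000.0 ** (np.arange(0, dim, 2) / dim))         # (dim/2,)
    angles = timestamps * freqs                                     # (T, dim/2)
    emb = np.zeros((timestamps.shape[0], dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

# Toy patient: four coded events at irregular times (in days).
event_times = [0.0, 1.5, 30.0, 180.0]
code_embeddings = np.random.randn(4, 16)   # stand-in for learned code embeddings
inputs = code_embeddings + sinusoidal_time_embedding(event_times, 16)
print(inputs.shape)  # (4, 16) -- sequence fed to the transformer encoder
```

In this form, two events thirty days apart receive systematically different embeddings even when they are adjacent in the sequence, which is the property time-aware temporal embeddings are after.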
Related papers
- Time-Aware Attention for Enhanced Electronic Health Records Modeling [8.4225455796455]
TALE-EHR is a Transformer-based framework featuring a novel time-aware attention mechanism that explicitly models continuous temporal gaps. Our approach outperforms state-of-the-art baselines on tasks such as disease progression forecasting.
arXiv Detail & Related papers (2025-07-20T07:32:41Z)
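TALE-EHR's precise gap parameterization is defined in the paper itself; as a minimal sketch of the general idea (the exponential-decay penalty and the `decay` constant below are assumptions standing in for learned parameters), attention logits can be biased by the elapsed time between events:

```python
import numpy as np

def time_aware_attention(q, k, v, times, decay=0.1):
    """Scaled dot-product attention with an additive penalty on the
    elapsed time between events, so attention fades with temporal gap.
    `decay` stands in for a learned per-head parameter."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                    # (T, T) content scores
    gaps = np.abs(times[:, None] - times[None, :])   # (T, T) |t_i - t_j|
    logits = logits - decay * gaps                   # penalize distant events
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v

T, d = 5, 8
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((T, d))
times = np.array([0.0, 2.0, 2.5, 40.0, 41.0])
print(time_aware_attention(q, k, v, times).shape)  # (5, 8)
```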
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as the architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling. We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
arXiv Detail & Related papers (2025-06-10T18:40:20Z)
- METHOD: Modular Efficient Transformer for Health Outcome Discovery [0.25112747242081457]
This paper introduces METHOD, a novel transformer architecture specifically designed to address the challenges of clinical sequence modelling in electronic health records. METHOD integrates three key innovations: (1) a patient-aware attention mechanism that prevents information leakage whilst enabling efficient batch processing; (2) an adaptive sliding window attention scheme that captures multi-scale temporal dependencies; and (3) a U-Net inspired architecture with dynamic skip connections for effective long sequence processing. Evaluations on the MIMIC-IV database demonstrate that METHOD consistently outperforms the state-of-the-art ETHOS model.
arXiv Detail & Related papers (2025-05-16T15:52:56Z)
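Of METHOD's three components, the patient-aware attention mask is the easiest to show compactly. A minimal sketch (function names are illustrative, and the real model folds this into batched multi-head attention): tokens may only attend to tokens belonging to the same patient, preventing leakage when several patients' sequences are packed together:

```python
import numpy as np

def patient_aware_mask(patient_ids):
    """Boolean attention mask that only allows tokens to attend within
    the same patient, preventing cross-patient information leakage when
    multiple patients are packed into one sequence."""
    ids = np.asarray(patient_ids)
    return ids[:, None] == ids[None, :]   # (T, T) True where attention is allowed

# Two patients packed back-to-back in a single sequence.
mask = patient_aware_mask([7, 7, 7, 42, 42])
print(mask.astype(int))   # block-diagonal pattern

# Applied by setting disallowed attention logits to -inf before softmax:
logits = np.random.randn(5, 5)
logits[~mask] = -np.inf
```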
- Explainable Spatio-Temporal GCNNs for Irregular Multivariate Time Series: Architecture and Application to ICU Patient Data [7.433698348783128]
We present XST-GCNN (eXplainable Spatio-Temporal Graph Convolutional Neural Network), a novel architecture for processing heterogeneous and irregular Multivariate Time Series (MTS) data.
Our approach captures temporal and feature dependencies within a unified spatio-temporal pipeline by leveraging a GCNN.
We evaluate XST-GCNN using real-world Electronic Health Record data to predict Multidrug Resistance (MDR) in ICU patients.
arXiv Detail & Related papers (2024-11-01T22:53:17Z)
- Temporal Cross-Attention for Dynamic Embedding and Tokenization of Multimodal Electronic Health Records [1.6609516435725236]
We introduce a dynamic embedding and tokenization framework for precise representation of multimodal clinical time series.
Our framework outperformed baseline approaches on the task of predicting the occurrence of nine postoperative complications.
arXiv Detail & Related papers (2024-03-06T19:46:44Z)
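The entry above centers on cross-attention between modalities of a clinical record. A minimal single-head sketch (random matrices stand in for learned projections, and the lab/medication framing is only an example, not the paper's setup):

```python
import numpy as np

def cross_attention(queries, keys_values, d):
    """One cross-attention head: one modality's embeddings (queries)
    attend over another modality's embeddings (keys/values)."""
    rng = np.random.default_rng(1)
    Wq = rng.standard_normal((queries.shape[-1], d))      # stand-ins for
    Wk = rng.standard_normal((keys_values.shape[-1], d))  # learned projection
    Wv = rng.standard_normal((keys_values.shape[-1], d))  # matrices
    q, k, v = queries @ Wq, keys_values @ Wk, keys_values @ Wv
    logits = q @ k.T / np.sqrt(d)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                    # row-wise softmax
    return w @ v

labs = np.random.randn(10, 12)   # 10 lab measurements, 12-dim embeddings
meds = np.random.randn(4, 20)    # 4 medication tokens, 20-dim embeddings
fused = cross_attention(labs, meds, d=16)   # labs attend over medications
print(fused.shape)  # (10, 16)
```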
- Knowledge Enhanced Conditional Imputation for Healthcare Time-series [9.937117045677923]
Conditional Self-Attention Imputation (CSAI) is a novel recurrent neural network architecture designed to address the challenges of complex missing data patterns.
CSAI extends the current state-of-the-art neural network-based imputation methods by introducing key modifications specifically adapted to EHR data characteristics.
This work significantly advances the state of neural network imputation applied to EHRs by more closely aligning algorithmic imputation with clinical realities.
arXiv Detail & Related papers (2023-12-27T20:42:40Z)
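CSAI's full architecture is recurrent and attention-based; the temporal-decay imputation it builds on can be sketched in a few lines (a simplification in the spirit of GRU-D-style decay, with `gamma` standing in for a learned rate):

```python
import numpy as np

def decay_impute(values, mask, times, mean, gamma=0.2):
    """Fill missing entries by decaying the last observed value toward
    the feature mean as the gap since the last observation grows."""
    imputed = values.copy()
    last_val = mean.copy()                       # no history yet: use means
    last_t = np.full(values.shape[1], times[0])  # per-feature last-seen time
    for i, t in enumerate(times):
        delta = t - last_t                               # gap per feature
        w = np.exp(-gamma * delta)                       # decay weight in (0, 1]
        row = w * last_val + (1 - w) * mean              # decayed estimate
        imputed[i] = np.where(mask[i], values[i], row)   # keep observed values
        last_val = np.where(mask[i], values[i], last_val)
        last_t = np.where(mask[i], t, last_t)
    return imputed

vals = np.array([[5.0, 0.0], [0.0, 3.0], [0.0, 0.0]])
obs = np.array([[True, False], [False, True], [False, False]])
print(decay_impute(vals, obs, times=np.array([0.0, 1.0, 5.0]),
                   mean=np.array([4.0, 2.0])))
```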
- On the Importance of Step-wise Embeddings for Heterogeneous Clinical Time-Series [1.3285222309805063]
Recent advances in deep learning for sequence modeling have not fully transferred to tasks handling time-series from electronic health records.
In particular, in problems related to the Intensive Care Unit (ICU), the state of the art remains tree-based methods that tackle sequence classification in a tabular manner.
arXiv Detail & Related papers (2023-11-15T12:18:15Z)
- ICU Mortality Prediction Using Long Short-Term Memory Networks [0.0]
We implement an automatic data-driven system, which analyzes large amounts of temporal data derived from Electronic Health Records (EHRs).
We extract high-level information so as to predict in-hospital mortality and Length of Stay (LOS) early.
Experiments highlight the efficiency of the LSTM model with rigorous time-series measurements for building real-world prediction engines.
arXiv Detail & Related papers (2023-08-18T09:44:47Z)
- T-Phenotype: Discovering Phenotypes of Predictive Temporal Patterns in Disease Progression [82.85825388788567]
We develop a novel temporal clustering method, T-Phenotype, to discover phenotypes of predictive temporal patterns from labeled time-series data.
We show that T-Phenotype achieves the best phenotype discovery performance over all the evaluated baselines.
arXiv Detail & Related papers (2023-02-24T13:30:35Z)
- Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE).
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
arXiv Detail & Related papers (2022-02-17T02:17:31Z)
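A minimal sketch of the continuous message-passing idea behind MTGODE (explicit Euler integration and the linear dynamics below are simplifications; the paper learns both the ODE function and the graph structure):

```python
import numpy as np

def graph_ode_step(h, adj, W, dt):
    """One explicit-Euler step of dh/dt = A_norm @ h @ W, a simple
    continuous-time message-passing dynamic standing in for the
    learned ODE function."""
    deg = adj.sum(axis=1, keepdims=True)
    a_norm = adj / np.maximum(deg, 1e-8)   # row-normalized adjacency
    return h + dt * (a_norm @ h @ W)

n_nodes, d = 4, 6
rng = np.random.default_rng(2)
h = rng.standard_normal((n_nodes, d))                        # node (variable) features
adj = (rng.random((n_nodes, n_nodes)) > 0.5).astype(float)   # stand-in inferred graph
np.fill_diagonal(adj, 1.0)                                   # keep self-loops
W = 0.1 * rng.standard_normal((d, d))
for _ in range(10):                  # integrate t in [0, 1] with dt = 0.1
    h = graph_ode_step(h, adj, W, dt=0.1)
print(h.shape)  # (4, 6) -- evolved node states
```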
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Model-Attentive Ensemble Learning for Sequence Modeling [86.4785354333566]
We present Model-Attentive Ensemble learning for Sequence modeling (MAES).
MAES is a mixture of time-series experts which leverages an attention-based gating mechanism to specialize the experts on different sequence dynamics and adaptively weight their predictions.
We demonstrate that MAES significantly outperforms popular sequence models on datasets subject to temporal shift.
arXiv Detail & Related papers (2021-02-23T05:23:35Z)
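The gating mechanism in MAES can be sketched compactly (the gate scores here are fixed numbers standing in for the attention-derived logits the paper computes per sequence):

```python
import numpy as np

def gated_ensemble(expert_preds, gate_scores):
    """Combine per-expert predictions with softmax gate weights, so each
    sequence leans on the experts best matched to its dynamics."""
    w = np.exp(gate_scores - gate_scores.max())
    w /= w.sum()                                   # softmax over experts
    return (w[:, None] * expert_preds).sum(axis=0), w

# Three experts each predict a 4-step horizon for one sequence.
preds = np.array([[1.0, 1.1, 1.2, 1.3],
                  [0.5, 0.4, 0.3, 0.2],
                  [1.0, 0.9, 1.0, 0.9]])
scores = np.array([2.0, -1.0, 0.5])   # stand-in for attention-derived gate logits
combined, weights = gated_ensemble(preds, scores)
print(weights.round(3), combined.round(3))
```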
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the properties of phase spaces.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
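A minimal sketch of a THP-style conditional intensity (the exact functional form in the paper differs; the linear-in-time term and parameter names below are illustrative): the transformer summarizes the event history into a vector, and a softplus keeps the resulting intensity positive:

```python
import numpy as np

def thp_intensity(h_t, w, b, alpha, t, t_last):
    """Conditional intensity in the Transformer Hawkes Process style:
    a softplus of a history embedding plus a term depending on the
    time elapsed since the last event (parameters are illustrative)."""
    x = h_t @ w + b + alpha * (t - t_last)
    return np.log1p(np.exp(x))   # softplus keeps the intensity positive

h_t = np.random.randn(8)   # transformer summary of the event history
w = np.random.randn(8)
lam = thp_intensity(h_t, w, b=0.1, alpha=-0.05, t=3.2, t_last=3.0)
print(lam)  # scalar intensity lambda(t) > 0
```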
This list is automatically generated from the titles and abstracts of the papers on this site.