ECG-Byte: A Tokenizer for End-to-End Generative Electrocardiogram Language Modeling
- URL: http://arxiv.org/abs/2412.14373v1
- Date: Wed, 18 Dec 2024 22:13:21 GMT
- Title: ECG-Byte: A Tokenizer for End-to-End Generative Electrocardiogram Language Modeling
- Authors: William Han, Chaojing Duan, Michael A. Rosenberg, Emerson Liu, Ding Zhao
- Abstract summary: ECG-Byte is a tokenizer pipeline for autoregressive language modeling of ECGs.
It compresses and encodes ECG signals into tokens, enabling end-to-end Large Language Model training.
It achieves competitive performance in NLG tasks in only half the time and 48% of the data required by two-stage approaches.
- Abstract: Large Language Models (LLMs) have shown remarkable adaptability across domains beyond text, specifically electrocardiograms (ECGs). More specifically, there is a growing body of work exploring the task of generating text from a multi-channeled ECG and a corresponding textual prompt. Current approaches typically involve pretraining an ECG-specific encoder with a self-supervised learning (SSL) objective and using the features output by the pretrained encoder to fine-tune an LLM for natural language generation (NLG). However, these methods are limited by 1) inefficiency from two-stage training and 2) interpretability challenges with encoder-generated features. To address these limitations, we introduce ECG-Byte, an adapted byte pair encoding (BPE) tokenizer pipeline for autoregressive language modeling of ECGs. This approach compresses and encodes ECG signals into tokens, enabling end-to-end LLM training by combining ECG and text tokens directly, while being much more interpretable since the ECG tokens can be directly mapped back to the original signal. Using ECG-Byte, we achieve competitive performance in NLG tasks in only half the time and ~48% of the data required by two-stage approaches.
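The pipeline described in the abstract can be sketched roughly as follows: quantize a continuous ECG signal into a small discrete alphabet, then run standard byte pair encoding over the symbol stream. This is a minimal illustration of the general BPE-over-signals idea, not ECG-Byte's actual implementation; the quantization scheme, alphabet size, and all names here are assumptions.

```python
import numpy as np
from collections import Counter

def quantize(signal, n_bins=16):
    # Map continuous amplitudes into a small discrete alphabet (illustrative;
    # the paper's actual compression/encoding scheme may differ).
    edges = np.linspace(signal.min(), signal.max(), n_bins + 1)[1:-1]
    return [chr(ord('a') + b) for b in np.digitize(signal, edges)]

def bpe_train(symbols, num_merges=10):
    # Standard BPE: repeatedly merge the most frequent adjacent pair
    # into a new, longer token.
    merges, seq = [], list(symbols)
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        merges.append((a, b))
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(a + b)  # new token is the concatenation
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return merges, seq

# Toy "ECG": each learned token spans a contiguous run of samples, so it can
# be mapped back to the original signal -- the interpretability argument
# made in the abstract.
sig = np.sin(np.linspace(0, 8 * np.pi, 256))
merges, tokens = bpe_train(quantize(sig), num_merges=20)
print(len(tokens))  # compressed sequence is shorter than the 256 raw samples
```

The resulting ECG tokens could then be concatenated with ordinary text tokens and fed to a single autoregressive model, which is what makes the training end-to-end.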
Related papers
- Reading Your Heart: Learning ECG Words and Sentences via Pre-training ECG Language Model [25.131870247201636]
We introduce a novel perspective on ECG signals, treating heartbeats as words and rhythms as sentences.
We then propose HeartLang, a novel self-supervised learning framework for ECG language processing.
We construct the largest heartbeat-based ECG vocabulary to date, which will further advance the development of ECG language processing.
arXiv Detail & Related papers (2025-02-15T07:40:57Z) - AnyECG: Foundational Models for Electrocardiogram Analysis [36.53693619144332]
Electrocardiogram (ECG) is highly sensitive in detecting acute heart attacks.
This paper introduces AnyECG, a foundational model designed to extract robust representations from any real-world ECG data.
Experimental results in anomaly detection, arrhythmia detection, corrupted lead generation, and ultra-long ECG signal analysis demonstrate that AnyECG learns common ECG knowledge from data and significantly outperforms cutting-edge methods in each respective task.
arXiv Detail & Related papers (2024-11-17T17:32:58Z) - Learning General Representation of 12-Lead Electrocardiogram with a Joint-Embedding Predictive Architecture [0.0]
We introduce ECG-JEPA, a self-supervised learning model for 12-lead ECG analysis.
It learns semantic representations of ECG data by predicting in the hidden latent space.
ECG-JEPA achieves state-of-the-art performance in various downstream tasks including ECG classification and feature prediction.
arXiv Detail & Related papers (2024-10-11T06:30:48Z) - ECG Semantic Integrator (ESI): A Foundation ECG Model Pretrained with LLM-Enhanced Cardiological Text [14.06147507373525]
This study introduces a new multimodal contrastive pretraining framework that aims to improve the quality and robustness of learned representations of 12-lead ECG signals.
Our framework comprises two key components: the Cardio Query Assistant (CQA) and the ECG Semantics Integrator (ESI).
arXiv Detail & Related papers (2024-05-26T06:45:39Z) - Enhancing EEG-to-Text Decoding through Transferable Representations from Pre-trained Contrastive EEG-Text Masked Autoencoder [69.7813498468116]
We propose Contrastive EEG-Text Masked Autoencoder (CET-MAE), a novel model that orchestrates compound self-supervised learning across and within EEG and text.
We also develop a framework called E2T-PTR (EEG-to-Text decoding using Pretrained Transferable Representations) to decode text from EEG sequences.
arXiv Detail & Related papers (2024-02-27T11:45:21Z) - ETP: Learning Transferable ECG Representations via ECG-Text Pre-training [10.856365645831728]
ECG-Text Pre-training (ETP) is an innovative framework designed to learn cross-modal representations that link ECG signals with textual reports.
ETP employs an ECG encoder along with a pre-trained language model to align ECG signals with their corresponding textual reports.
arXiv Detail & Related papers (2023-09-06T19:19:26Z) - Supervision-Guided Codebooks for Masked Prediction in Speech Pre-training [102.14558233502514]
Masked prediction pre-training has seen remarkable progress in self-supervised learning (SSL) for speech recognition.
We propose two supervision-guided codebook generation approaches to improve automatic speech recognition (ASR) performance.
arXiv Detail & Related papers (2022-06-21T06:08:30Z) - Generalizing electrocardiogram delineation: training convolutional neural networks with synthetic data augmentation [63.51064808536065]
Existing databases for ECG delineation are small, being insufficient in size and in the array of pathological conditions they represent.
This article has two main contributions. First, a pseudo-synthetic data generation algorithm was developed, based on probabilistically composing ECG traces from "pools" of fundamental segments, as cropped from the original databases, and a set of rules for their arrangement into coherent synthetic traces.
Second, two novel segmentation-based loss functions have been developed, which attempt to enforce the prediction of an exact number of independent structures and to produce tighter segmentation boundaries by focusing on a reduced number of samples.
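The pseudo-synthetic composition idea can be sketched as below, assuming hypothetical segment pools (P wave, QRS complex, T wave) and one fixed arrangement rule; the paper's actual pools, databases, and composition rules are not reproduced here.

```python
import numpy as np

# Hypothetical pools of cropped fundamental segments (stand-ins for real
# P/QRS/T crops taken from annotated recordings; shapes are illustrative).
POOLS = {
    "P":   [np.hanning(20) * 0.2,  np.hanning(24) * 0.25],
    "QRS": [np.hanning(12) * 1.0,  np.hanning(10) * 0.9],
    "T":   [np.hanning(30) * 0.35, np.hanning(36) * 0.3],
}

def pick(rng, name):
    # Probabilistically choose one cropped segment from the pool.
    pool = POOLS[name]
    return pool[rng.randint(len(pool))]

def synth_beat(rng):
    # One arrangement rule: P wave -> PR gap -> QRS complex -> ST gap -> T wave.
    return np.concatenate([
        pick(rng, "P"),
        np.zeros(rng.randint(5, 10)),    # PR segment (isoelectric gap)
        pick(rng, "QRS"),
        np.zeros(rng.randint(8, 15)),    # ST segment
        pick(rng, "T"),
    ])

def synth_trace(n_beats=5, seed=0):
    # Chain beats into a coherent trace; per-sample labels could be emitted
    # alongside each segment for delineation training.
    rng = np.random.RandomState(seed)
    return np.concatenate([synth_beat(rng) for _ in range(n_beats)])

trace = synth_trace()
```

Because each sample's provenance (which pool, which segment) is known at composition time, ground-truth delineation labels come for free with the synthetic trace.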
arXiv Detail & Related papers (2021-11-25T10:11:41Z) - Scheduled Sampling in Vision-Language Pretraining with Decoupled Encoder-Decoder Network [99.03895740754402]
We propose a two-stream decoupled design of the encoder-decoder structure, which involves a decoupled cross-modal encoder and decoder.
We further propose a primary scheduled sampling strategy that mitigates the train-test discrepancy by pretraining the encoder-decoder in a two-pass manner.
arXiv Detail & Related papers (2021-01-27T17:36:57Z) - Inductive Learning on Commonsense Knowledge Graph Completion [89.72388313527296]
A commonsense knowledge graph (CKG) is a special type of knowledge graph (KG) where entities are composed of free-form text.
We propose to study the inductive learning setting for CKG completion, where unseen entities may be present at test time.
InductivE significantly outperforms state-of-the-art baselines in both standard and inductive settings on ATOMIC and ConceptNet benchmarks.
arXiv Detail & Related papers (2020-09-19T16:10:26Z) - ECG-DelNet: Delineation of Ambulatory Electrocardiograms with Mixed Quality Labeling Using Neural Networks [69.25956542388653]
Deep learning (DL) algorithms are gaining traction in academic and industrial settings.
We demonstrate that DL can be successfully applied to low-interpretability tasks by embedding ECG detection and delineation into a segmentation framework.
The model was trained using PhysioNet's QT database, comprised of 105 ambulatory ECG recordings.
arXiv Detail & Related papers (2020-05-11T16:29:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.