ECHOPulse: ECG controlled echocardiogram video generation
- URL: http://arxiv.org/abs/2410.03143v2
- Date: Sat, 12 Oct 2024 01:22:27 GMT
- Title: ECHOPulse: ECG controlled echocardiogram video generation
- Authors: Yiwei Li, Sekeun Kim, Zihao Wu, Hanqi Jiang, Yi Pan, Pengfei Jin, Sifan Song, Yucheng Shi, Tianming Liu, Quanzheng Li, Xiang Li
- Abstract summary: Echocardiography (ECHO) is essential for cardiac assessments.
ECHO video generation offers a solution by improving automated monitoring.
ECHOPULSE is an ECG-conditioned ECHO video generation model.
- Score: 30.753399869167588
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Echocardiography (ECHO) is essential for cardiac assessments, but its video quality and interpretation heavily rely on manual expertise, leading to inconsistent results from clinical and portable devices. ECHO video generation offers a solution by improving automated monitoring through synthetic data and generating high-quality videos from routine health data. However, existing models often face high computational costs, slow inference, and rely on complex conditional prompts that require experts' annotations. To address these challenges, we propose ECHOPULSE, an ECG-conditioned ECHO video generation model. ECHOPULSE introduces two key advancements: (1) it accelerates ECHO video generation by leveraging VQ-VAE tokenization and masked visual token modeling for fast decoding, and (2) it conditions on readily accessible ECG signals, which are highly coherent with ECHO videos, bypassing complex conditional prompts. To the best of our knowledge, this is the first work to use time-series prompts like ECG signals for ECHO video generation. ECHOPULSE not only enables controllable synthetic ECHO data generation but also provides updated cardiac function information for disease monitoring and prediction beyond ECG alone. Evaluations on three public and private datasets demonstrate state-of-the-art performance in ECHO video generation across both qualitative and quantitative measures. Additionally, ECHOPULSE can be easily generalized to other modality generation tasks, such as cardiac MRI, fMRI, and 3D CT generation. A demo can be seen at \url{https://github.com/levyisthebest/ECHOPulse_Prelease}.
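The abstract's first advancement pairs VQ-VAE tokenization with masked visual token modeling for fast decoding. A minimal sketch of both steps follows; the codebook size, token-grid dimensions, predictor, and schedule are illustrative assumptions, not ECHOPULSE's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only; ECHOPULSE's real codebook
# and token-grid dimensions are not specified here.
CODEBOOK_SIZE, DIM, N_TOKENS = 16, 4, 32
codebook = rng.normal(size=(CODEBOOK_SIZE, DIM))

def vq_tokenize(latents):
    """VQ-VAE quantization step: map each continuous latent vector to
    the index of its nearest codebook entry."""
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def masked_decode(predict_fn, steps=4):
    """Masked visual token modeling: start from all-masked tokens and,
    at each step, commit the most confident predictions in parallel --
    the source of the speedup over one-token-at-a-time autoregression."""
    tokens = np.full(N_TOKENS, -1)            # -1 marks a masked position
    for s in range(steps):
        masked = np.flatnonzero(tokens == -1)
        if masked.size == 0:
            break
        probs = predict_fn(tokens)            # (N_TOKENS, CODEBOOK_SIZE)
        conf = probs.max(axis=1)
        pred = probs.argmax(axis=1)
        k = int(np.ceil(masked.size / (steps - s)))   # unmask k positions
        keep = masked[np.argsort(-conf[masked])[:k]]
        tokens[keep] = pred[keep]
    return tokens

def toy_predictor(tokens):
    """Stand-in for the real transformer, which would condition on an
    ECG-signal embedding; here it returns random token distributions."""
    probs = rng.random((N_TOKENS, CODEBOOK_SIZE))
    return probs / probs.sum(axis=1, keepdims=True)

tokens = masked_decode(toy_predictor)
```

With 4 decoding steps, all 32 token positions are filled in parallel batches rather than in 32 sequential autoregressive steps.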
Related papers
- Synthetic Time Series Data Generation for Healthcare Applications: A PCG Case Study [43.28613210217385]
We employ and compare three state-of-the-art generative models to generate PCG data.
Our results demonstrate that the generated PCG data closely resembles the original datasets.
In our future work, we plan to incorporate this method into a data augmentation pipeline to synthesize abnormal PCG signals with heart murmurs.
arXiv Detail & Related papers (2024-12-17T18:07:40Z)
- CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information [61.1904164368732]
We propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals.
Specifically, CognitionCapturer trains Modality Experts for each modality to extract cross-modal information from the EEG modality.
The framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities.
arXiv Detail & Related papers (2024-12-13T16:27:54Z)
- AnyECG: Foundational Models for Electrocardiogram Analysis [36.53693619144332]
The electrocardiogram (ECG) is highly sensitive for detecting acute heart attacks.
This paper introduces AnyECG, a foundational model designed to extract robust representations from any real-world ECG data.
Experimental results in anomaly detection, arrhythmia detection, corrupted lead generation, and ultra-long ECG signal analysis demonstrate that AnyECG learns common ECG knowledge from data and significantly outperforms cutting-edge methods in each respective task.
arXiv Detail & Related papers (2024-11-17T17:32:58Z)
- HeartBeat: Towards Controllable Echocardiography Video Synthesis with Multimodal Conditions-Guided Diffusion Models [14.280181445804226]
We propose a novel framework named HeartBeat towards controllable and high-fidelity ECHO video synthesis.
HeartBeat serves as a unified framework that enables perceiving multimodal conditions simultaneously to guide controllable generation.
In this way, users can synthesize ECHO videos that conform to their mental imagery by combining multimodal control signals.
arXiv Detail & Related papers (2024-06-20T08:24:28Z)
- CoReEcho: Continuous Representation Learning for 2D+time Echocardiography Analysis [42.810247034149214]
We propose CoReEcho, a novel training framework emphasizing continuous representations tailored for direct EF regression.
CoReEcho: 1) outperforms the current state-of-the-art (SOTA) on the largest echocardiography dataset (EchoNet-Dynamic) with MAE of 3.90 & R2 of 82.44, and 2) provides robust and generalizable features that transfer more effectively in related downstream tasks.
arXiv Detail & Related papers (2024-03-15T10:18:06Z)
- PulseNet: Deep Learning ECG-signal classification using random augmentation policy and continuous wavelet transform for canines [46.09869227806991]
Evaluating canine electrocardiograms (ECGs) requires skilled veterinarians.
Current availability of veterinary cardiologists for ECG interpretation and diagnostic support is limited.
We implement a deep convolutional neural network (CNN) approach for classifying canine electrocardiogram sequences as either normal or abnormal.
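PulseNet's pipeline uses a continuous wavelet transform to turn a 1-D ECG sequence into a 2-D time-scale image a CNN can classify. A minimal numpy sketch with a Ricker mother wavelet is below; the wavelet choice, widths, and the toy signal are assumptions, not PulseNet's actual settings.

```python
import numpy as np

def ricker(points, a):
    """Ricker ('Mexican hat') wavelet, a common CWT mother wavelet."""
    t = np.arange(points) - (points - 1) / 2
    c = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return c * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt(signal, widths, wavelet=ricker):
    """Continuous wavelet transform: correlate the signal with the
    wavelet at several widths, stacking the responses into a 2-D
    scalogram (rows = scales, columns = time)."""
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        n = min(10 * int(w), len(signal))
        out[i] = np.convolve(signal, wavelet(n, w), mode="same")
    return out

# Toy stand-in for an ECG trace sampled at 250 Hz for 2 seconds.
fs = 250
t = np.arange(2 * fs) / fs
ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
scalogram = cwt(ecg_like, widths=np.arange(1, 31))
```

The resulting `(scales, time)` array is what would be fed to the 2-D CNN in place of the raw 1-D signal.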
arXiv Detail & Related papers (2023-05-17T09:06:39Z)
- Text-to-ECG: 12-Lead Electrocardiogram Synthesis conditioned on Clinical Text Reports [6.659609788411503]
We present a text-to-ECG task, in which textual inputs are used to produce ECG outputs.
We propose Auto-TTE, an autoregressive generative model conditioned on clinical text reports to synthesize 12-lead ECGs.
arXiv Detail & Related papers (2023-03-09T11:58:38Z)
- Leveraging Statistical Shape Priors in GAN-based ECG Synthesis [3.3482093430607267]
We propose a novel approach for ECG signal generation using Generative Adversarial Networks (GANs) and statistical ECG data modeling.
Our approach leverages prior knowledge about ECG dynamics to synthesize realistic signals, addressing the complex dynamics of ECG signals.
Our results demonstrate that our approach, which models temporal and amplitude variations of ECG signals as 2-D shapes, generates more realistic signals compared to state-of-the-art GAN based generation baselines.
arXiv Detail & Related papers (2022-10-22T18:06:11Z)
- ME-GAN: Learning Panoptic Electrocardio Representations for Multi-view ECG Synthesis Conditioned on Heart Diseases [24.52989747071257]
We propose a disease-aware generative adversarial network for multi-view ECG synthesis called ME-GAN.
Since ECG manifestations of heart diseases are often localized in specific waveforms, we propose a new "mixup normalization" to inject disease information precisely into suitable locations.
Comprehensive experiments verify that ME-GAN performs well on multi-view ECG signal synthesis with faithful disease manifestations.
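The "mixup normalization" described above injects disease information into the generator's features. A hypothetical sketch of the conditional-normalization idea follows; the projection of the disease code and the mixing coefficient `alpha` are illustrative assumptions, and ME-GAN's exact formulation (and injection points) may differ.

```python
import numpy as np

def mixup_normalization(features, disease_emb, alpha, eps=1e-5):
    """Illustrative conditional normalization: normalize each channel's
    statistics, then apply a scale and shift derived from a disease
    embedding, with alpha controlling how strongly the disease code
    modulates the features."""
    mu = features.mean(axis=-1, keepdims=True)    # per-channel mean
    std = features.std(axis=-1, keepdims=True)    # per-channel std
    normed = (features - mu) / (std + eps)
    gamma = 1.0 + alpha * disease_emb[:, None]    # disease-dependent scale
    beta = alpha * disease_emb[:, None]           # disease-dependent shift
    return gamma * normed + beta

rng = np.random.default_rng(1)
feats = rng.normal(size=(8, 128))    # (channels, time) generator features
disease = rng.normal(size=8)         # hypothetical per-channel disease code
out = mixup_normalization(feats, disease, alpha=0.5)
```

Setting `alpha=0` recovers plain per-channel normalization, so the disease code's influence can be dialed in per location.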
arXiv Detail & Related papers (2022-07-21T14:14:02Z)
- Generalizing electrocardiogram delineation: training convolutional neural networks with synthetic data augmentation [63.51064808536065]
Existing databases for ECG delineation are small, being insufficient in size and in the array of pathological conditions they represent.
This article has two main contributions. First, a pseudo-synthetic data generation algorithm was developed, based on probabilistically composing ECG traces from "pools" of fundamental segments cropped from the original databases, together with a set of rules for arranging them into coherent synthetic traces.
Second, two novel segmentation-based loss functions have been developed, which attempt to enforce the prediction of an exact number of independent structures and to produce tighter segmentation boundaries by focusing on a reduced number of samples.
arXiv Detail & Related papers (2021-11-25T10:11:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.