CleanCTG: A Deep Learning Model for Multi-Artefact Detection and Reconstruction in Cardiotocography
- URL: http://arxiv.org/abs/2508.10928v1
- Date: Mon, 11 Aug 2025 11:24:45 GMT
- Title: CleanCTG: A Deep Learning Model for Multi-Artefact Detection and Reconstruction in Cardiotocography
- Authors: Sheng Wong, Beth Albert, Gabriel Davis Jones
- Abstract summary: We present CleanCTG, an end-to-end dual-stage model that first identifies multiple artefact types via multi-scale convolution and context-aware cross-attention. On synthetic data, CleanCTG achieved perfect artefact detection (AU-ROC = 1.00) and reduced mean squared error (MSE) on corrupted segments to 2.74 x 10^-4. When integrated with the Dawes-Redman system on 933 clinical CTG recordings, denoised traces increased specificity (from 80.70% to 82.70%) and shortened median time to decision by 33%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cardiotocography (CTG) is essential for fetal monitoring but is frequently compromised by diverse artefacts which obscure true fetal heart rate (FHR) patterns and can lead to misdiagnosis or delayed intervention. Current deep-learning approaches typically bypass comprehensive noise handling, applying minimal preprocessing or focusing solely on downstream classification, while traditional methods rely on simple interpolation or rule-based filtering that addresses only missing samples and fail to correct complex artefact types. We present CleanCTG, an end-to-end dual-stage model that first identifies multiple artefact types via multi-scale convolution and context-aware cross-attention, then reconstructs corrupted segments through artefact-specific correction branches. Training utilised over 800,000 minutes of physiologically realistic, synthetically corrupted CTGs derived from expert-verified "clean" recordings. On synthetic data, CleanCTG achieved perfect artefact detection (AU-ROC = 1.00) and reduced mean squared error (MSE) on corrupted segments to 2.74 x 10^-4 (clean-segment MSE = 2.40 x 10^-6), outperforming the next best method by more than 60%. External validation on 10,190 minutes of clinician-annotated segments yielded AU-ROC = 0.95 (sensitivity = 83.44%, specificity 94.22%), surpassing six comparator classifiers. Finally, when integrated with the Dawes-Redman system on 933 clinical CTG recordings, denoised traces increased specificity (from 80.70% to 82.70%) and shortened median time to decision by 33%. These findings suggest that explicit artefact removal and signal reconstruction can both maintain diagnostic accuracy and enable shorter monitoring sessions, offering a practical route to more reliable CTG interpretation.
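The dual-stage idea (first flag artefactual samples, then reconstruct only the flagged spans, and score corrupted and clean segments separately) can be sketched in miniature. The snippet below is an illustrative stand-in, not the authors' code: a simple out-of-range detector replaces CleanCTG's multi-scale convolutional detector with cross-attention, and linear interpolation replaces the artefact-specific correction branches; the `segment_mse` split mirrors the paper's corrupted- vs clean-segment MSE evaluation. All function names and thresholds here are assumptions for illustration.

```python
def detect_artefacts(fhr, lo=50.0, hi=200.0):
    """Stage 1 (stand-in): flag samples outside a plausible FHR range (bpm)."""
    return [not (lo <= x <= hi) for x in fhr]

def reconstruct(fhr, mask):
    """Stage 2 (stand-in): replace flagged runs by linear interpolation
    between the nearest clean neighbours."""
    out, n, i = list(fhr), len(fhr), 0
    while i < n:
        if mask[i]:
            j = i
            while j < n and mask[j]:          # find the end of the flagged run
                j += 1
            left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
            right = out[j] if j < n else left
            span = j - i + 1
            for k in range(i, j):             # interpolate across the run
                t = (k - i + 1) / span
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

def segment_mse(pred, target, mask, flagged=True):
    """MSE restricted to flagged (corrupted) or unflagged (clean) samples,
    mirroring the corrupted- vs clean-segment split in the evaluation."""
    pairs = [(p, t) for p, t, m in zip(pred, target, mask) if m == flagged]
    return sum((p - t) ** 2 for p, t in pairs) / len(pairs)

clean = [140.0, 142.0, 141.0, 139.0, 140.0, 143.0]
corrupted = [140.0, 142.0, 0.0, 0.0, 140.0, 143.0]  # simulated signal loss
mask = detect_artefacts(corrupted)
restored = reconstruct(corrupted, mask)
```

Restricting the MSE to flagged samples matters: a model can score a near-zero global MSE simply because most of the trace is untouched, so the corrupted-segment MSE is the figure that actually measures reconstruction quality.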
Related papers
- Deep Unsupervised Anomaly Detection in Brain Imaging: Large-Scale Benchmarking and Bias Analysis [42.60508892284938]
We present a large-scale, multi-center benchmark of deep unsupervised anomaly detection for brain imaging. We tested 2,221 T1w and 1,262 T2w scans spanning healthy datasets and diverse clinical cohorts. Our benchmark establishes a transparent foundation for future research and highlights priorities for clinical translation.
arXiv Detail & Related papers (2025-12-01T11:03:27Z) - Neural Discrete Representation Learning for Sparse-View CBCT Reconstruction: From Algorithm Design to Prospective Multicenter Clinical Evaluation [64.42236775544579]
Cone beam computed tomography (CBCT)-guided puncture has become an established approach for diagnosing and treating thoracic tumours. DeepPriorCBCT is a three-stage deep learning framework that achieves diagnostic-grade reconstruction using only one-sixth of the conventional radiation dose.
arXiv Detail & Related papers (2025-11-30T12:45:02Z) - Cancer-Net PCa-MultiSeg: Multimodal Enhancement of Prostate Cancer Lesion Segmentation Using Synthetic Correlated Diffusion Imaging [55.62977326180104]
Current deep learning approaches for prostate cancer lesion segmentation achieve limited performance. We investigate synthetic correlated diffusion imaging (CDI$s$) as an enhancement to standard diffusion-based protocols. Our results establish validated integration pathways for CDI$s$ as a practical drop-in enhancement for PCa lesion segmentation tasks.
arXiv Detail & Related papers (2025-11-11T04:16:12Z) - CASR-Net: An Image Processing-focused Deep Learning-based Coronary Artery Segmentation and Refinement Network for X-ray Coronary Angiogram [9.788176765955534]
Early detection of coronary artery disease (CAD) is critical for reducing mortality and improving patient treatment planning. We present the Coronary Artery Segmentation and Refinement Network (CASR-Net), a three-stage pipeline comprising image preprocessing, segmentation, and refinement.
arXiv Detail & Related papers (2025-10-31T09:40:29Z) - Lightweight Classifier for Detecting Intracranial Hemorrhage in Ultrasound Data [0.5461938536945722]
Intracranial hemorrhage (ICH) secondary to Traumatic Brain Injury (TBI) represents a critical diagnostic challenge. Current diagnostic modalities, including Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), have significant limitations. This study investigates machine learning approaches for automated ICH detection using Ultrasound Tissue Pulsatility Imaging (TPI).
arXiv Detail & Related papers (2025-10-22T09:04:42Z) - A Novel Attention-Augmented Wavelet YOLO System for Real-time Brain Vessel Segmentation on Transcranial Color-coded Doppler [49.03919553747297]
We propose an AI-powered, real-time CoW auto-segmentation system capable of efficiently capturing cerebral arteries. No prior studies have explored AI-driven cerebrovascular segmentation using Transcranial Color-coded Doppler (TCCD). The proposed AAW-YOLO demonstrated strong performance in segmenting both ipsilateral and contralateral CoW vessels.
arXiv Detail & Related papers (2025-08-19T14:41:22Z) - Deep Learning Enabled Segmentation, Classification and Risk Assessment of Cervical Cancer [0.0]
Cervical cancer, the fourth leading cause of cancer in women globally, requires early detection through Pap smear tests. In this study, we performed a focused analysis by segmenting the cellular boundaries and drawing bounding boxes to isolate the cancer cells. A novel deep learning architecture, the "Multi-Resolution Fusion Deep Convolutional Network", was proposed to effectively handle images with varying resolutions and aspect ratios.
arXiv Detail & Related papers (2025-05-21T13:25:27Z) - Fast-staged CNN Model for Accurate pulmonary diseases and Lung cancer detection [0.0]
This research evaluates a deep learning model designed to detect lung cancer, specifically pulmonary nodules, along with eight other lung pathologies, using chest radiographs. A two-stage classification system, utilizing ensemble methods and transfer learning, is employed to first triage images into Normal or Abnormal. The model achieves notable results in classification, with a top-performing accuracy of 77%, a sensitivity of 0.713, a specificity of 0.776 during external validation, and an AUC score of 0.888.
arXiv Detail & Related papers (2024-12-16T11:47:07Z) - SQUWA: Signal Quality Aware DNN Architecture for Enhanced Accuracy in Atrial Fibrillation Detection from Noisy PPG Signals [37.788535094404644]
Atrial fibrillation (AF) significantly increases the risk of stroke, heart disease, and mortality.
Photoplethysmography (PPG) signals are susceptible to corruption from motion artifacts and other factors often encountered in ambulatory settings.
We propose a novel deep learning model, designed to learn how to retain accurate predictions from partially corrupted PPG.
arXiv Detail & Related papers (2024-04-15T01:07:08Z) - A Two-Stage Generative Model with CycleGAN and Joint Diffusion for MRI-based Brain Tumor Detection [41.454028276986946]
We propose a novel framework Two-Stage Generative Model (TSGM) to improve brain tumor detection and segmentation.
CycleGAN is trained on unpaired data to generate abnormal images from healthy images as data prior.
VE-JP is implemented to reconstruct healthy images using synthetic paired abnormal images as a guide.
arXiv Detail & Related papers (2023-11-06T12:58:26Z) - Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification [42.75911994044675]
We present a novel approach for unpaired image-to-image translation of prostate MRIs and an uncertainty-aware training approach for classifying clinically significant PCa.
Our approach involves a novel pipeline for translating unpaired 3.0T multi-parametric prostate MRIs to 1.5T, thereby augmenting the available training data.
Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work.
arXiv Detail & Related papers (2023-07-02T05:26:54Z) - StRegA: Unsupervised Anomaly Detection in Brain MRIs using a Compact Context-encoding Variational Autoencoder [48.2010192865749]
Unsupervised anomaly detection (UAD) can learn a data distribution from an unlabelled dataset of healthy subjects and then be applied to detect out of distribution samples.
This research proposes a compact version of the "context-encoding" VAE (ceVAE) model, combined with pre- and post-processing steps, creating a UAD pipeline (StRegA).
The proposed pipeline achieved a Dice score of 0.642±0.101 while detecting tumours in T2w images of the BraTS dataset and 0.859±0.112 while detecting artificially induced anomalies.
arXiv Detail & Related papers (2022-01-31T14:27:35Z) - Learning to Automatically Diagnose Multiple Diseases in Pediatric Chest Radiographs Using Deep Convolutional Neural Networks [0.4697611383288171]
Deep convolutional neural networks (D-CNNs) have shown remarkable performance in interpreting chest radiograph (CXR) scans in adults.
In this paper, we retrospectively collect a large dataset of 5,017 pediatric CXR scans, for which each is manually labeled by an experienced radiologist.
A D-CNN model is then trained on 3,550 annotated scans to classify multiple pediatric lung pathologies automatically.
arXiv Detail & Related papers (2021-08-14T08:14:52Z) - Automatic Breast Lesion Detection in Ultrafast DCE-MRI Using Deep Learning [0.0]
We propose a deep learning-based computer-aided detection (CADe) method to detect breast lesions in ultrafast DCE-MRI sequences.
This method uses both the three-dimensional spatial information and temporal information obtained from the early-phase of the dynamic acquisition.
arXiv Detail & Related papers (2021-02-07T22:03:39Z) - Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on 2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total number of 320 exams (with a mean number of 6 slices per exam) were used for training and 28 exams used for testing.
The performance analysis of the proposed ensemble model in the basal and middle slices was similar as compared to intra-observer study and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z) - Lung Infection Quantification of COVID-19 in CT Images with Deep Learning [41.35413216175024]
Deep learning system developed to automatically quantify infection regions of interest.
Human-in-the-loop strategy adopted to assist radiologists for infection region segmentation.
arXiv Detail & Related papers (2020-03-10T11:58:40Z)