Reliable Multi-View Learning with Conformal Prediction for Aortic Stenosis Classification in Echocardiography
- URL: http://arxiv.org/abs/2409.09680v1
- Date: Sun, 15 Sep 2024 10:06:06 GMT
- Title: Reliable Multi-View Learning with Conformal Prediction for Aortic Stenosis Classification in Echocardiography
- Authors: Ang Nan Gu, Michael Tsang, Hooman Vaseli, Teresa Tsang, Purang Abolmaesumi
- Abstract summary: The acquired images are often 2-D cross-sections of a 3-D anatomy, potentially missing important anatomical details.
We propose Re-Training for Uncertainty (RT4U), a data-centric method to introduce uncertainty to weakly informative inputs in the training set.
When combined with conformal prediction techniques, RT4U can yield adaptively sized prediction sets that are guaranteed to contain the ground-truth class with high probability.
- Score: 6.540741143328299
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fundamental problem with ultrasound-guided diagnosis is that the acquired images are often 2-D cross-sections of a 3-D anatomy, potentially missing important anatomical details. This limitation leads to challenges in ultrasound echocardiography, such as poor visualization of heart valves or foreshortening of ventricles. Clinicians must interpret these images with inherent uncertainty, a nuance absent in machine learning's one-hot labels. We propose Re-Training for Uncertainty (RT4U), a data-centric method to introduce uncertainty to weakly informative inputs in the training set. This simple approach can be incorporated into existing state-of-the-art aortic stenosis classification methods to further improve their accuracy. When combined with conformal prediction techniques, RT4U can yield adaptively sized prediction sets that are guaranteed to contain the ground-truth class with high probability. We validate the effectiveness of RT4U on three diverse datasets: a public AS dataset (TMED-2), a private AS dataset, and a CIFAR-10-derived toy dataset. Results show improvements on all three datasets.
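To make the conformal prediction step concrete, below is a minimal sketch of split conformal classification with adaptive prediction sets. It is a generic illustration rather than the authors' implementation: the function name, the `alpha` miscoverage level, and the use of held-out calibration softmax outputs are assumptions for the example.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Generic split conformal classification with adaptive prediction sets.

    cal_probs:  (n, K) softmax outputs on a held-out calibration split
    cal_labels: (n,)   integer ground-truth labels for the calibration split
    test_probs: (m, K) softmax outputs for the inputs to be classified
    alpha:      target miscoverage rate; sets cover the truth with prob. >= 1 - alpha
    """
    n = len(cal_labels)

    # Nonconformity score: cumulative probability mass needed to reach the true class.
    order = np.argsort(-cal_probs, axis=1)                       # classes sorted by descending probability
    cum_probs = np.cumsum(np.take_along_axis(cal_probs, order, axis=1), axis=1)
    true_rank = np.argmax(order == cal_labels[:, None], axis=1)  # position of the true class
    cal_scores = cum_probs[np.arange(n), true_rank]

    # Conformal quantile with the finite-sample (n + 1) correction.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(cal_scores, q_level, method="higher")    # NumPy >= 1.22

    # Prediction sets: add classes in order of probability until the mass crosses q_hat.
    order_t = np.argsort(-test_probs, axis=1)
    cum_t = np.cumsum(np.take_along_axis(test_probs, order_t, axis=1), axis=1)
    set_sizes = (cum_t < q_hat).sum(axis=1) + 1                  # +1 keeps sets non-empty
    return [set(order_t[i, :k]) for i, k in enumerate(set_sizes)]
```

On inputs the classifier finds easy, the cumulative mass crosses the threshold after one class and the set is a singleton; on ambiguous views more classes are needed, which is the adaptive set-size behaviour the abstract describes.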
Related papers
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Extraction of volumetric indices from echocardiography: which deep learning solution for clinical use? [6.144041824426555]
We show that the proposed 3D nnU-Net outperforms alternative 2D and recurrent segmentation methods.
Overall, the experimental results suggest that with sufficient training data, 3D nnU-Net could become the first automated tool to meet the standards of an everyday clinical device.
arXiv Detail & Related papers (2023-05-03T09:38:52Z) - Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z) - Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and in contrast to CNNs they encode no prior knowledge of local connectivity.
Our results show that ViTs and CNNs perform on par, with a small benefit for ViTs, while DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly aids the estimation of intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Medical Instrument Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning [62.13520959168732]
We propose a semi-supervised learning framework for instrument segmentation in 3D US.
To achieve semi-supervised learning (SSL), a Dual-UNet is proposed to segment the instrument.
Our proposed method achieves a Dice score of about 68.6%-69.1% and an inference time of about 1 second per volume.
arXiv Detail & Related papers (2021-07-30T07:59:45Z) - Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - COVID-19 identification from volumetric chest CT scans using a progressively resized 3D-CNN incorporating segmentation, augmentation, and class-rebalancing [4.446085353384894]
COVID-19 is a global pandemic disease spreading rapidly worldwide.
Computer-aided screening tools with greater sensitivity are imperative for disease diagnosis and prognosis.
This article proposes a 3D Convolutional Neural Network (CNN)-based classification approach.
arXiv Detail & Related papers (2021-02-11T18:16:18Z) - Segmentation-free Estimation of Aortic Diameters from MRI Using Deep Learning [2.231365407061881]
We propose a supervised deep learning method for the direct estimation of aortic diameters.
Our approach makes use of a 3D+2D convolutional neural network (CNN) that takes as input a 3D scan and outputs the aortic diameter at a given location.
Overall, the 3D+2D CNN achieved a mean absolute error between 2.2 and 2.4 mm, depending on the aortic location considered.
arXiv Detail & Related papers (2020-09-09T18:28:00Z) - Uncertainty Estimation in Deep 2D Echocardiography Segmentation [0.2062593640149623]
Uncertainty estimates can be important when testing on data coming from a distribution further away from that of the training data.
We show how uncertainty estimation can be used to automatically reject poor quality images and improve state-of-the-art segmentation results.
arXiv Detail & Related papers (2020-05-19T10:19:23Z) - How well do U-Net-based segmentation trained on adult cardiac magnetic resonance imaging data generalise to rare congenital heart diseases for surgical planning? [2.330464988780586]
Planning the optimal time of intervention for pulmonary valve replacement surgery in patients with the congenital heart disease Tetralogy of Fallot (TOF) is mainly based on ventricular volume and function according to current guidelines.
In several grand challenges in the last years, U-Net architectures have shown impressive results on the provided data.
However, in clinical practice, data sets are more diverse considering individual pathologies and image properties derived from different scanner properties.
arXiv Detail & Related papers (2020-02-10T08:50:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.