Unlocking the Heart Using Adaptive Locked Agnostic Networks
- URL: http://arxiv.org/abs/2309.11899v1
- Date: Thu, 21 Sep 2023 09:06:36 GMT
- Title: Unlocking the Heart Using Adaptive Locked Agnostic Networks
- Authors: Sylwia Majchrowska, Anders Hildeman, Philip Teare, Tom Diethe
- Abstract summary: Supervised training of deep learning models for medical imaging applications requires a significant amount of labeled data.
To address this limitation, we introduce the Adaptive Locked Agnostic Network (ALAN).
ALAN involves self-supervised visual feature extraction using a large backbone model to produce robust semantic self-segmentation.
Our findings demonstrate that the self-supervised backbone model robustly identifies anatomical subregions of the heart in an apical four-chamber view.
- Score: 4.613517417540153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised training of deep learning models for medical imaging applications
requires a significant amount of labeled data. This poses a challenge, as the
images must be annotated by medical professionals. To address
this limitation, we introduce the Adaptive Locked Agnostic Network (ALAN), a
concept involving self-supervised visual feature extraction using a large
backbone model to produce anatomically robust semantic self-segmentation. In
the ALAN methodology, this self-supervised training occurs only once on a large
and diverse dataset. Due to the intuitive interpretability of the segmentation,
downstream models tailored for specific tasks can be easily designed using
white-box models with few parameters. This, in turn, opens up the possibility
of communicating the inner workings of a model with domain experts and
introducing prior knowledge into it. It also means that the downstream models
become less data-hungry compared to fully supervised approaches. These
characteristics make ALAN particularly well-suited for resource-scarce
scenarios, such as costly clinical trials and rare diseases. In this paper, we
apply the ALAN approach to three publicly available echocardiography datasets:
EchoNet-Dynamic, CAMUS, and TMED-2. Our findings demonstrate that the
self-supervised backbone model robustly identifies anatomical subregions of the
heart in an apical four-chamber view. Building upon this, we design two
downstream models, one for segmenting a target anatomical region, and a second
for echocardiogram view classification.
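The pipeline described in the abstract (a frozen self-supervised backbone whose patch features are grouped into a semantic self-segmentation, on top of which small white-box downstream models are built) can be sketched roughly as follows. This is a minimal illustration only: random vectors stand in for the backbone's patch features, k-means is used as a stand-in clustering step, and all function names are assumptions rather than the paper's exact method.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means over patch feature vectors (stand-in clustering step)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # squared Euclidean distance of every patch to every center
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):  # skip empty clusters
                centers[j] = members.mean(axis=0)
    return labels

def self_segment(patch_features, grid_hw, k=8):
    """Turn a grid of frozen-backbone patch features into k pseudo-semantic
    regions, i.e. a coarse self-segmentation of the input frame."""
    labels = kmeans(patch_features, k)
    return labels.reshape(grid_hw)

# Toy example: a 14x14 grid of 64-d patch features standing in for the
# frozen self-supervised backbone's output on one echocardiogram frame.
rng = np.random.default_rng(1)
feats = rng.normal(size=(14 * 14, 64))
seg = self_segment(feats, (14, 14), k=8)
```

In the ALAN setting, the cluster map would then be the input to a downstream model with few parameters (e.g. selecting the cluster corresponding to a target anatomical region, or classifying the view from cluster statistics), which is what keeps the downstream stage interpretable and data-efficient.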
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Architecture Analysis and Benchmarking of 3D U-shaped Deep Learning Models for Thoracic Anatomical Segmentation [0.8897689150430447]
We conduct the first systematic benchmark study for variants of 3D U-shaped models.
Our study examines the impact of different attention mechanisms, the number of resolution stages, and network configurations on segmentation accuracy and computational complexity.
arXiv Detail & Related papers (2024-02-05T17:43:02Z)
- AttResDU-Net: Medical Image Segmentation Using Attention-based Residual Double U-Net [0.0]
This paper proposes an attention-based residual Double U-Net architecture (AttResDU-Net) that improves on the existing medical image segmentation networks.
We conducted experiments on three datasets: CVC Clinic-DB, ISIC 2018, and the 2018 Data Science Bowl datasets and achieved Dice Coefficient scores of 94.35%, 91.68%, and 92.45% respectively.
arXiv Detail & Related papers (2023-06-25T14:28:08Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- IterMiUnet: A lightweight architecture for automatic blood vessel segmentation [10.538564380139483]
This paper proposes IterMiUnet, a new lightweight convolution-based segmentation model.
It avoids a heavily parametrized design by incorporating the encoder-decoder structure of the MiUnet model within it.
The proposed model has strong potential as a tool for the early diagnosis of many diseases.
arXiv Detail & Related papers (2022-08-02T14:33:14Z)
- Few-shot image segmentation for cross-institution male pelvic organs using registration-assisted prototypical learning [13.567073992605797]
This work presents the first 3D few-shot interclass segmentation network for medical images.
It uses a labelled multi-institution dataset from prostate cancer patients with eight regions of interest.
A built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects.
arXiv Detail & Related papers (2022-01-17T11:44:10Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Progressive Adversarial Semantic Segmentation [11.323677925193438]
Deep convolutional neural networks can perform exceedingly well given full supervision.
The success of such fully-supervised models for various image analysis tasks is limited to the availability of massive amounts of labeled data.
We propose a novel end-to-end medical image segmentation model, namely Progressive Adversarial Semantic Segmentation (PASS).
arXiv Detail & Related papers (2020-05-08T22:48:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.