Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound
- URL: http://arxiv.org/abs/2208.10642v1
- Date: Mon, 22 Aug 2022 22:49:26 GMT
- Title: Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound
- Authors: Zeyu Fu, Jianbo Jiao, Robail Yasrab, Lior Drukker, Aris T.
Papageorghiou and J. Alison Noble
- Abstract summary: We propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL)
AWCL incorporates anatomy information to guide positive/negative pair sampling during contrastive learning.
Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks.
- Score: 17.91546880972773
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-supervised contrastive representation learning offers the advantage of
learning meaningful visual representations from unlabeled medical datasets for
transfer learning. However, applying current contrastive learning approaches to
medical data without considering its domain-specific anatomical characteristics
may lead to visual representations that are inconsistent in appearance and
semantics. In this paper, we propose to improve visual representations of
medical images via anatomy-aware contrastive learning (AWCL), which
incorporates anatomy information to augment the positive/negative pair sampling
in a contrastive learning manner. The proposed approach is demonstrated on
automated fetal ultrasound imaging tasks: anatomically similar positive pairs,
whether drawn from the same scan or from different scans, are pulled together,
thereby improving the learned representations. We empirically investigate the
effect of including anatomy information at coarse and fine granularity for
contrastive learning, and find that learning with fine-grained anatomy
information, which preserves intra-class differences, is more effective than
its coarse-grained counterpart. We also analyze the impact of the anatomy ratio
on our AWCL framework and find that composing positive pairs from more
distinct but anatomically similar samples yields better-quality representations.
Experiments on a large-scale fetal ultrasound dataset demonstrate that our
approach is effective for learning representations that transfer well to three
clinical downstream tasks, outperforming both ImageNet-supervised pre-training
and current state-of-the-art contrastive learning methods. In particular, AWCL
outperforms the ImageNet-supervised method by 13.8% and the state-of-the-art
contrastive method by 7.1% on a cross-domain segmentation task.
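The anatomy-aware positive-pair sampling described in the abstract can be sketched as a supervised-contrastive-style loss in which all in-batch samples sharing an anatomy label act as positives for one another, regardless of which scan they come from. This is a minimal illustration under assumptions, not the authors' released implementation: the function name `awcl_loss`, the batch layout, and the use of per-image anatomy class labels as the grouping signal are assumptions for this sketch.

```python
import numpy as np

def awcl_loss(embeddings, anatomy_labels, temperature=0.1):
    """Sketch of an anatomy-aware contrastive (AWCL-style) loss.

    Positives for anchor i are all other batch samples with the same
    anatomy label, whether or not they come from the same scan.
    `embeddings` is (N, D); `anatomy_labels` is a length-N sequence.
    """
    # L2-normalise so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(anatomy_labels)
    mask_self = np.eye(n, dtype=bool)
    # exclude self-similarity from the softmax denominator
    logits = np.where(mask_self, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    labels = np.asarray(anatomy_labels)
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    # mean log-likelihood over each anchor's positives, then negate
    pos_log_prob = np.where(pos, log_prob, 0.0).sum(axis=1)
    per_anchor = -pos_log_prob / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

As expected of a contrastive objective, the loss decreases when same-anatomy samples are embedded close together and different-anatomy samples are pushed apart.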
Related papers
- Multi-organ Self-supervised Contrastive Learning for Breast Lesion Segmentation [0.0]
This paper employs multi-organ datasets for pre-training models tailored to specific organ-related target tasks.
Our target task is breast tumour segmentation in ultrasound images.
Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
arXiv Detail & Related papers (2024-02-21T20:29:21Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data including skull or any other artifacts without preprocessing the images or a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z)
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation [8.403653472706822]
We develop a method to select positive pairs coming from views of possibly different images through the use of patient metadata.
We compare strategies for selecting positive pairs for chest X-ray interpretation including requiring them to be from the same patient, imaging study or laterality.
Our best performing positive pair selection strategy, which involves using images from the same patient from the same study across all lateralities, achieves a performance increase of 3.4% and 14.4% in mean AUC.
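The metadata-driven pair selection that MedAug describes can be sketched as a simple filter over image records: candidate positives must share a patient (and, in the best-performing strategy, a study), while laterality is left unconstrained. This is a hypothetical illustration; the function name `select_positive_pairs` and the record keys `image_id`, `patient_id`, `study_id`, and `laterality` are assumptions, not the paper's actual data schema.

```python
from itertools import combinations

def select_positive_pairs(records, require_same_study=True):
    """Sketch of metadata-based positive-pair selection.

    `records` is a list of dicts with hypothetical keys 'image_id',
    'patient_id', 'study_id', and 'laterality'. Pairs must share a
    patient (and optionally a study); laterality may differ, mirroring
    the "same patient, same study, all lateralities" strategy.
    """
    pairs = []
    for a, b in combinations(records, 2):
        if a['patient_id'] != b['patient_id']:
            continue  # different patients are never positives
        if require_same_study and a['study_id'] != b['study_id']:
            continue  # restrict positives to the same imaging study
        pairs.append((a['image_id'], b['image_id']))
    return pairs
```

The selected pairs would then feed a standard contrastive objective in place of (or alongside) augmentation-generated views of a single image.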
arXiv Detail & Related papers (2021-02-21T18:39:04Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Context Matters: Graph-based Self-supervised Representation Learning for Medical Images [21.23065972218941]
We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
Our model can identify clinically relevant regions in the images.
arXiv Detail & Related papers (2020-12-11T16:26:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.