Generating and Weighting Semantically Consistent Sample Pairs for
Ultrasound Contrastive Learning
- URL: http://arxiv.org/abs/2212.04097v1
- Date: Thu, 8 Dec 2022 06:24:08 GMT
- Title: Generating and Weighting Semantically Consistent Sample Pairs for
Ultrasound Contrastive Learning
- Authors: Yixiong Chen, Chunhui Zhang, Chris H. Q. Ding, Li Liu
- Abstract summary: Well-annotated medical datasets enable deep neural networks (DNNs) to gain strong power in extracting lesion-related features.
Model pre-training based on ImageNet is a common practice to gain better generalization when the data amount is limited.
In this work, we pre-train on ultrasound (US) domains instead of ImageNet to reduce the domain gap in medical US applications.
- Score: 10.631361618707214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Well-annotated medical datasets enable deep neural networks (DNNs) to gain
strong power in extracting lesion-related features. Building such large and
well-designed medical datasets is costly due to the need for high-level
expertise. Model pre-training based on ImageNet is a common practice to gain
better generalization when the data amount is limited. However, it suffers from
the domain gap between natural and medical images. In this work, we pre-train
DNNs on ultrasound (US) domains instead of ImageNet to reduce the domain gap in
medical US applications. To learn US image representations based on unlabeled
US videos, we propose a novel meta-learning-based contrastive learning method,
namely Meta Ultrasound Contrastive Learning (Meta-USCL). To tackle the key
challenge of obtaining semantically consistent sample pairs for contrastive
learning, we present a positive pair generation module along with an automatic
sample weighting module based on meta-learning. Experimental results on
multiple computer-aided diagnosis (CAD) problems, including pneumonia
detection, breast cancer classification, and breast tumor segmentation, show
that the proposed self-supervised method achieves state-of-the-art (SOTA) performance. The
codes are available at https://github.com/Schuture/Meta-USCL.
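The core idea pairs a standard contrastive (InfoNCE-style) objective with per-pair weights, so that more semantically consistent positive pairs contribute more to the loss. A minimal numpy sketch of such a weighted contrastive loss follows; this is an illustration of the general technique, not the authors' Meta-USCL implementation, and the weights here are simply supplied rather than produced by a meta-learned weighting module:

```python
import numpy as np

def weighted_info_nce(z1, z2, weights, temperature=0.5):
    """Weighted InfoNCE loss over a batch of positive pairs.

    z1, z2: (N, D) embeddings of two views; row i of z1 and row i of z2
    form a positive pair. weights: (N,) per-pair weights (in Meta-USCL
    these would come from a learned weighting module). Returns the
    weighted mean contrastive loss.
    """
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature              # (N, N) similarity matrix
    # row-wise log-softmax; the diagonal entries are the positives
    sim = sim - sim.max(axis=1, keepdims=True)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    per_pair = -np.diag(log_prob)              # cross-entropy per pair
    return float((weights * per_pair).sum() / weights.sum())
```

Because the weighted sum is normalised by the total weight, only the relative weighting of pairs matters, which matches the role of a sample-weighting module.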
Related papers
- Masked LoGoNet: Fast and Accurate 3D Image Analysis for Medical Domain [48.440691680864745]
We introduce a new neural network architecture, termed LoGoNet, with a tailored self-supervised learning (SSL) method.
LoGoNet integrates a novel feature extractor within a U-shaped architecture, leveraging Large Kernel Attention (LKA) and a dual encoding strategy.
We propose a novel SSL method tailored for 3D images to compensate for the lack of large labeled datasets.
arXiv Detail & Related papers (2024-02-09T05:06:58Z)
- Enhancing Prostate Cancer Diagnosis with Deep Learning: A Study using mpMRI Segmentation and Classification [0.0]
Prostate cancer (PCa) is a severe disease among men globally. It is important to identify PCa early and make a precise diagnosis for effective treatment.
Deep learning (DL) models can enhance existing clinical systems and improve patient care by locating regions of interest for physicians.
This work uses well-known DL models for the classification and segmentation of mpMRI images to detect PCa.
arXiv Detail & Related papers (2023-10-09T03:00:15Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
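The disruption idea, local masking combined with low-level perturbation, with reconstruction of the original image as the pre-training target, can be sketched in a few lines. The following is an illustrative toy corruption on a 3D volume, not the paper's actual scheme:

```python
import numpy as np

def disrupt(volume, patch=4, mask_ratio=0.5, noise_std=0.1, seed=0):
    """Toy 'disruption' of a 3D volume: add low-level Gaussian noise,
    then zero out random local patches. A reconstruction network would
    be trained to recover the original volume from this disrupted copy.
    (Illustrative only; not the paper's exact corruption scheme.)"""
    rng = np.random.default_rng(seed)
    out = volume + rng.normal(0.0, noise_std, volume.shape)  # low-level perturbation
    d, h, w = volume.shape
    for z in range(0, d, patch):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                if rng.random() < mask_ratio:                # local masking
                    out[z:z+patch, y:y+patch, x:x+patch] = 0.0
    return out
```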
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method introduces the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model using real images with classic data augmentation methods, and another using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Medical Image Harmonization Using Deep Learning Based Canonical Mapping: Toward Robust and Generalizable Learning in Imaging [4.396671464565882]
We propose a new paradigm in which data from a diverse range of acquisition conditions are "harmonized" to a common reference domain.
We test this approach on two example problems, namely MRI-based brain age prediction and classification of schizophrenia.
arXiv Detail & Related papers (2020-10-11T22:01:37Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
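The last entry's mechanism, scoring each MRI slice by a dissimilarity computed in a latent space, reduces to a simple pattern. As a rough sketch (Euclidean distance stands in for the paper's learned dissimilarity function, and the threshold is a hypothetical parameter):

```python
import numpy as np

def slice_anomaly_scores(latents, reference, threshold=1.0):
    """Toy slice-wise anomaly scoring: each slice's latent code is
    compared to a reference latent (e.g. from a 'healthy'
    reconstruction); slices whose dissimilarity exceeds the threshold
    are flagged as tumour candidates.

    latents, reference: (num_slices, latent_dim) arrays.
    Returns (scores, flags)."""
    scores = np.linalg.norm(latents - reference, axis=1)  # per-slice dissimilarity
    flags = scores > threshold                            # candidate tumour slices
    return scores, flags
```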
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.