Statistical Dependency Guided Contrastive Learning for Multiple Labeling
in Prenatal Ultrasound
- URL: http://arxiv.org/abs/2108.05055v1
- Date: Wed, 11 Aug 2021 06:39:26 GMT
- Title: Statistical Dependency Guided Contrastive Learning for Multiple Labeling
in Prenatal Ultrasound
- Authors: Shuangchi He, Zehui Lin, Xin Yang, Chaoyu Chen, Jian Wang, Xue Shuang,
Ziwei Deng, Qin Liu, Yan Cao, Xiduo Lu, Ruobing Huang, Nishant Ravikumar,
Alejandro Frangi, Yuanji Zhang, Yi Xiong, Dong Ni
- Abstract summary: Standard plane recognition plays an important role in prenatal ultrasound (US) screening.
We build a novel multi-label learning scheme to identify multiple standard planes and corresponding anatomical structures simultaneously.
- Score: 56.631021151764955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard plane recognition plays an important role in prenatal ultrasound
(US) screening. Automatically recognizing the standard plane along with the
corresponding anatomical structures in US images can not only facilitate US
image interpretation but also improve diagnostic efficiency. In this study, we
build a novel multi-label learning (MLL) scheme to identify multiple standard
planes and the corresponding anatomical structures of the fetus simultaneously.
Our contribution is three-fold. First, we represent the class correlation by
word embeddings to capture fine-grained semantics and latent statistical
co-occurrence. Second, we equip the MLL scheme with a graph convolutional
network to explore the inner and outer relationships among categories. Third,
we propose a novel cluster relabel-based contrastive learning algorithm to
encourage divergence among ambiguous classes. Extensive validation was
performed on our large in-house dataset. Our approach achieves an accuracy of
90.25% for standard plane labeling and 85.59% for joint plane and structure
labeling, with an mAP of 94.63%. The proposed MLL scheme provides a novel
perspective for standard plane recognition and can be easily extended to other
medical image classification tasks.
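The first two contributions above (label word embeddings plus a graph convolutional network over the label graph) follow the general recipe of GCN-based multi-label recognition. The sketch below is a minimal reconstruction of that recipe, assuming a PyTorch implementation; the names `LabelGCNHead` and `normalized_adjacency`, the two-layer design, and the use of a label co-occurrence count matrix as the graph are illustrative assumptions, not the authors' exact architecture.
```python
# Minimal illustrative sketch (not the authors' released code), assuming a
# PyTorch implementation. It shows the general pattern of a GCN over label
# word embeddings whose output serves as per-label classifiers for an image
# feature vector; the co-occurrence adjacency stands in for the statistical
# dependency between labels.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adjacency(cooccurrence: torch.Tensor) -> torch.Tensor:
    """Convert raw label co-occurrence counts into a symmetrically
    normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
    a = cooccurrence.float()
    a = a / a.sum(dim=1, keepdim=True).clamp(min=1e-6)            # row-normalize counts
    a = a + torch.eye(a.size(0), device=a.device)                 # add self-loops
    d = a.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)


class LabelGCNHead(nn.Module):
    """Two-layer GCN over label word embeddings; its output acts as a bank
    of per-label classifiers applied to the image feature by a dot product."""

    def __init__(self, label_embeddings: torch.Tensor, cooccurrence: torch.Tensor,
                 feat_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.register_buffer("emb", label_embeddings)                    # (C, E) word vectors
        self.register_buffer("adj", normalized_adjacency(cooccurrence))  # (C, C) label graph
        self.gc1 = nn.Linear(label_embeddings.size(1), hidden_dim)
        self.gc2 = nn.Linear(hidden_dim, feat_dim)

    def forward(self, image_feat: torch.Tensor) -> torch.Tensor:
        h = F.leaky_relu(self.adj @ self.gc1(self.emb))   # propagate along the label graph
        w = self.adj @ self.gc2(h)                        # (C, feat_dim) label classifiers
        return image_feat @ w.t()                         # (B, C) multi-label logits
```
Training such a head would typically use a multi-label loss such as `BCEWithLogitsLoss` on these logits; the paper's cluster relabel-based contrastive objective would act on the feature space in addition to that classification loss.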
Related papers
- LoGra-Med: Long Context Multi-Graph Alignment for Medical Vision-Language Model [55.80651780294357]
State-of-the-art medical multi-modal large language models (med-MLLM) leverage instruction-following data in pre-training.
LoGra-Med is a new multi-graph alignment algorithm that enforces triplet correlations across image modalities, conversation-based descriptions, and extended captions.
Our results show LoGra-Med matches LLaVA-Med performance on 600K image-text pairs for Medical VQA and significantly outperforms it when trained on 10% of the data.
arXiv Detail & Related papers (2024-10-03T15:52:03Z) - FedMLP: Federated Multi-Label Medical Image Classification under Task Heterogeneity [30.49607763632271]
Cross-silo federated learning (FL) enables decentralized organizations to collaboratively train models while preserving data privacy.
We propose a two-stage method FedMLP to combat class missing from two aspects: pseudo label tagging and global knowledge learning.
Experiments on two publicly available medical datasets validate the superiority of FedMLP over state-of-the-art federated semi-supervised and noisy-label learning approaches.
arXiv Detail & Related papers (2024-06-27T08:36:43Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this assumption alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Learning Underrepresented Classes from Decentralized Partially Labeled
Medical Images [11.500033811355062]
Using decentralized data for federated training is one promising emerging research direction for alleviating data scarcity in the medical domain.
In this paper, we consider a practical yet under-explored problem, where underrepresented classes only have few labeled instances available.
We show that standard federated learning approaches fail to learn robust multi-label classifiers with extreme class imbalance.
arXiv Detail & Related papers (2022-06-30T15:28:18Z) - Few-shot image segmentation for cross-institution male pelvic organs
using registration-assisted prototypical learning [13.567073992605797]
This work presents the first 3D few-shot interclass segmentation network for medical images.
It uses a labelled multi-institution dataset from prostate cancer patients with eight regions of interest.
A built-in registration mechanism can effectively utilise the prior knowledge of consistent anatomy between subjects.
arXiv Detail & Related papers (2022-01-17T11:44:10Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Learning Image Labels On-the-fly for Training Robust Classification
Models [13.669654965671604]
We show how noisy annotations (e.g., from different algorithm-based labelers) can be utilized together and mutually benefit the learning of classification tasks.
A meta-training based label-sampling module is designed to attend to the labels that benefit model learning the most through additional back-propagation processes.
arXiv Detail & Related papers (2020-09-22T05:38:44Z) - Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond the cross-entropy loss to support the cross-attention process and to overcome both the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.