Self Supervised Lesion Recognition For Breast Ultrasound Diagnosis
- URL: http://arxiv.org/abs/2204.08477v1
- Date: Mon, 18 Apr 2022 16:00:33 GMT
- Title: Self Supervised Lesion Recognition For Breast Ultrasound Diagnosis
- Authors: Yuanfan Guo, Canqian Yang, Tiancheng Lin, Chunxiao Li, Rui Zhang, Yi Xu
- Abstract summary: We propose a multi-task framework that complements the Benign/Malignant classification task with a lesion recognition (LR) task.
To be specific, the LR task employs contrastive learning to encourage representations that pull multiple views of the same lesion together and repel those of different lesions.
Experiments show that the proposed multi-task framework boosts the performance of Benign/Malignant classification.
- Score: 14.961717874372567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous deep learning based Computer Aided Diagnosis (CAD) systems treat
multiple views of the same lesion as independent images. Since an ultrasound
image describes only a partial 2D projection of a 3D lesion, such a paradigm
ignores the semantic relationship between different views of a lesion, which is
inconsistent with traditional diagnosis, where sonographers analyze a lesion
from at least two views. In this paper, we propose a multi-task framework that
complements the Benign/Malignant classification task with a lesion recognition (LR)
task, which leverages the relationship among multiple views of a single lesion to
learn a complete representation of the lesion. To be specific, the LR task employs
contrastive learning to encourage representations that pull multiple views of
the same lesion together and repel those of different lesions. The task therefore
facilitates a representation that is not only invariant to view changes of
the lesion but also captures fine-grained features that distinguish between
different lesions. Experiments show that the proposed multi-task framework
boosts the performance of Benign/Malignant classification, as the two sub-tasks
complement each other and enhance the learned representation of ultrasound
images.
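The LR objective described above can be illustrated with a minimal InfoNCE-style sketch: views that share a lesion identity are treated as positives, all other views as negatives. This is a simplified illustration under assumed details (the function name, cosine similarity, and temperature value are illustrative, not the paper's exact formulation).

```python
import numpy as np

def lesion_contrastive_loss(embeddings, lesion_ids, temperature=0.1):
    """InfoNCE-style sketch of a lesion recognition (LR) loss.

    Views of the same lesion are positives; views of different
    lesions are negatives. Illustrative only, not the paper's
    exact objective.

    embeddings: (N, D) array, one row per ultrasound view.
    lesion_ids: length-N sequence; equal ids mark views of one lesion.
    """
    # L2-normalize so the dot product is cosine similarity.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # (N, N) similarity logits
    np.fill_diagonal(sim, -np.inf)         # exclude self-comparisons

    # Row-wise log-softmax over all other views.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))

    ids = np.asarray(lesion_ids)
    pos_mask = (ids[:, None] == ids[None, :]) & ~np.eye(len(ids), dtype=bool)

    # Average negative log-likelihood over the positive pairs:
    # pulls same-lesion views together, repels different lesions.
    return -log_prob[pos_mask].mean()
```

With correctly grouped views the loss is near zero, while mismatched lesion ids (positives pointing at different lesions) drive it up, which is the behavior the LR task relies on.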
Related papers
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis [61.089776864520594]
We propose eye-tracking as an alternative to text reports for medical images.
By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning.
We introduce the Medical contrastive Gaze Image Pre-training (McGIP) as a plug-and-play module for contrastive learning frameworks.
arXiv Detail & Related papers (2023-12-11T02:27:45Z)
- Unified Medical Image Pre-training in Language-Guided Common Semantic Space [39.61770813855078]
We propose an Unified Medical Image Pre-training framework, namely UniMedI.
UniMedI uses diagnostic reports as common semantic space to create unified representations for diverse modalities of medical images.
We evaluate its performance on both 2D and 3D images across 10 different datasets.
arXiv Detail & Related papers (2023-11-24T22:01:12Z)
- GEMTrans: A General, Echocardiography-based, Multi-Level Transformer Framework for Cardiovascular Diagnosis [14.737295160286939]
Vision-based machine learning (ML) methods have gained popularity to act as secondary layers of verification.
We propose a General, Echo-based, Multi-Level Transformer (GEMTrans) framework that provides explainability.
We show the flexibility of our framework by considering two critical tasks including ejection fraction (EF) and aortic stenosis (AS) severity detection.
arXiv Detail & Related papers (2023-08-25T07:30:18Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Anatomy-Aware Contrastive Representation Learning for Fetal Ultrasound [17.91546880972773]
We propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL).
AWCL incorporates anatomy information to augment the positive/negative pair sampling in a contrastive learning manner.
Experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks.
arXiv Detail & Related papers (2022-08-22T22:49:26Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathological images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
Our method first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy, and, through aggregation, generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
- Cross Chest Graph for Disease Diagnosis with Structural Relational Reasoning [2.7148274921314615]
Locating lesions is important in the computer-aided diagnosis of X-ray images.
General weakly-supervised methods have failed to consider the characteristics of X-ray images.
We propose the Cross-chest Graph (CCG), which improves the performance of automatic lesion detection.
arXiv Detail & Related papers (2021-01-22T08:24:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.