Learning Anatomically Consistent Embedding for Chest Radiography
- URL: http://arxiv.org/abs/2312.00335v2
- Date: Tue, 11 Jun 2024 09:17:59 GMT
- Title: Learning Anatomically Consistent Embedding for Chest Radiography
- Authors: Ziyu Zhou, Haozhe Luo, Jiaxuan Pang, Xiaowei Ding, Michael Gotway, Jianming Liang
- Abstract summary: This paper introduces a novel SSL approach, called PEAC (patch embedding of anatomical consistency), for medical image analysis.
Specifically, we propose to learn global and local consistencies via stable grid-based matching and to transfer pre-trained PEAC models to diverse downstream tasks.
We extensively demonstrate that PEAC achieves significantly better performance than the existing state-of-the-art fully/self-supervised methods.
- Score: 4.990778682575127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) approaches have recently shown substantial success in learning visual representations from unannotated images. Compared with photographic images, medical images acquired with the same imaging protocol exhibit high consistency in anatomy. To exploit this anatomical consistency, this paper introduces a novel SSL approach, called PEAC (patch embedding of anatomical consistency), for medical image analysis. Specifically, we propose to learn global and local consistencies via stable grid-based matching, transfer pre-trained PEAC models to diverse downstream tasks, and extensively demonstrate that (1) PEAC achieves significantly better performance than the existing state-of-the-art fully/self-supervised methods, and (2) PEAC captures the anatomical structure consistency across views of the same patient and across patients of different genders, weights, and health statuses, which enhances the interpretability of our method for medical image analysis.
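To make the global and local consistency objectives more concrete, here is a minimal, hypothetical PyTorch-style sketch of how pooled image embeddings and grid-matched patch embeddings from two augmented views might be pulled together. The function names, tensor shapes, cosine-similarity loss form, and the `matches` tensor produced by a grid-based matcher are all illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): global + local anatomical
# consistency losses between two augmented views of the same chest radiograph.
import torch
import torch.nn.functional as F


def global_consistency_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Image-level consistency between pooled embeddings of shape (B, D)."""
    return 1.0 - F.cosine_similarity(z1, z2.detach(), dim=-1).mean()


def local_consistency_loss(p1: torch.Tensor, p2: torch.Tensor,
                           matches: torch.Tensor) -> torch.Tensor:
    """Patch-level consistency over grid-matched patches.

    p1, p2:  (B, N, D) patch embeddings of the two views.
    matches: (B, M, 2) integer pairs (i, j) meaning patch i of view 1 and
             patch j of view 2 cover the same anatomical grid cell in the
             overlapping region (assumed to come from a stable grid-based
             matcher, as described in the abstract).
    """
    b = torch.arange(matches.size(0), device=matches.device).unsqueeze(1)  # (B, 1)
    s = p1[b, matches[..., 0]]           # (B, M, D) matched patches, view 1
    t = p2[b, matches[..., 1]].detach()  # (B, M, D) matched patches, view 2
    return 1.0 - F.cosine_similarity(s, t, dim=-1).mean()
```

In a PEAC-like training loop, the two terms would typically be combined with a weighting hyperparameter, e.g. `total = global_consistency_loss(g1, g2) + lam * local_consistency_loss(p1, p2, matches)`, with the second branch updated as a momentum teacher; both of these choices are assumptions borrowed from common student-teacher SSL setups rather than details stated in the abstract.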
Related papers
- Efficient Few-Shot Medical Image Analysis via Hierarchical Contrastive Vision-Language Learning [44.99833362998488]
We propose Adaptive Vision-Language Fine-tuning with Hierarchical Contrastive Alignment (HiCA) for medical image analysis.
HiCA combines domain-specific pretraining and hierarchical contrastive learning to align visual and textual representations at multiple levels.
We evaluate our approach on two benchmark datasets, Chest X-ray and Breast Ultrasound.
arXiv Detail & Related papers (2025-01-16T05:01:30Z)
- Continual Self-supervised Learning Considering Medical Domain Knowledge in Chest CT Images [36.88692059388115]
We propose a novel continual self-supervised learning method (CSSL) considering medical domain knowledge in chest CT images.
Our approach addresses the challenge of sequential learning by effectively capturing the relationship between previously learned knowledge and new information at different stages.
We validate our method using chest CT images obtained under two different imaging conditions, demonstrating superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-01-08T01:27:35Z)
- CoBooM: Codebook Guided Bootstrapping for Medical Image Representation Learning [6.838695126692698]
Self-supervised learning has emerged as a promising paradigm for medical image analysis by harnessing unannotated data.
Existing SSL approaches overlook the high anatomical similarity inherent in medical images.
We propose CoBooM, a novel framework for self-supervised medical image learning by integrating continuous and discrete representations.
arXiv Detail & Related papers (2024-08-08T06:59:32Z)
- Towards Foundation Models Learned from Anatomy in Medical Imaging via Self-Supervision [8.84494874768244]
We envision a foundation model for medical imaging that is consciously and purposefully developed upon human anatomy.
We devise a novel self-supervised learning (SSL) strategy that exploits the hierarchical nature of human anatomy.
arXiv Detail & Related papers (2023-09-27T01:53:45Z)
- Anatomical Invariance Modeling and Semantic Alignment for Self-supervised Learning in 3D Medical Image Analysis [6.87667643104543]
Self-supervised learning (SSL) has recently achieved promising performance for 3D medical image analysis tasks.
Most current methods follow existing SSL paradigms originally designed for photographic or natural images.
We propose a new self-supervised learning framework, namely Alice, that explicitly fulfills Anatomical invariance modeling and semantic alignment.
arXiv Detail & Related papers (2023-02-11T06:36:20Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathological images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- Incremental Cross-view Mutual Distillation for Self-supervised Medical CT Synthesis [88.39466012709205]
This paper builds a novel medical slice synthesis method to increase the between-slice resolution of CT scans.
Considering that the ground-truth intermediate medical slices are always absent in clinical practice, we introduce the incremental cross-view mutual distillation strategy.
Our method outperforms state-of-the-art algorithms by clear margins.
arXiv Detail & Related papers (2021-12-20T03:38:37Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.