Towards Foundation Models Learned from Anatomy in Medical Imaging via
Self-Supervision
- URL: http://arxiv.org/abs/2309.15358v1
- Date: Wed, 27 Sep 2023 01:53:45 GMT
- Title: Towards Foundation Models Learned from Anatomy in Medical Imaging via
Self-Supervision
- Authors: Mohammad Reza Hosseinzadeh Taher, Michael B. Gotway, Jianming Liang
- Abstract summary: We envision a foundation model for medical imaging that is consciously and purposefully developed upon human anatomy.
We devise a novel self-supervised learning (SSL) strategy that exploits the hierarchical nature of human anatomy.
- Score: 8.84494874768244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human anatomy is the foundation of medical imaging and boasts one striking
characteristic: its hierarchy in nature, exhibiting two intrinsic properties:
(1) locality: each anatomical structure is morphologically distinct from the
others; and (2) compositionality: each anatomical structure is an integrated
part of a larger whole. We envision a foundation model for medical imaging that
is consciously and purposefully developed upon this foundation to gain the
capability of "understanding" human anatomy and to possess the fundamental
properties of medical imaging. As our first step in realizing this vision
towards foundation models in medical imaging, we devise a novel self-supervised
learning (SSL) strategy that exploits the hierarchical nature of human anatomy.
Our extensive experiments demonstrate that the SSL pretrained model, derived
from our training strategy, not only outperforms state-of-the-art (SOTA)
fully/self-supervised baselines but also enhances annotation efficiency,
offering potential few-shot segmentation capabilities with performance
improvements ranging from 9% to 30% for segmentation tasks compared to SSL
baselines. This performance is attributed to the significance of anatomy
comprehension via our learning strategy, which encapsulates the intrinsic
attributes of anatomical structures (locality and compositionality) within the
embedding space, attributes that existing SSL methods overlook. All code and
pretrained models are available at https://github.com/JLiangLab/Eden.
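The two intrinsic properties above can be made concrete as toy embedding-space objectives. The sketch below is purely illustrative and is not the authors' training code: it assumes part and whole embeddings are plain lists of floats, penalizes similarity between distinct parts (locality), and penalizes misalignment between a whole and the mean of its parts (compositionality).

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def locality_loss(parts):
    """Locality: distinct anatomical structures should have
    dissimilar embeddings, so penalize positive similarity."""
    loss, pairs = 0.0, 0
    for i in range(len(parts)):
        for j in range(i + 1, len(parts)):
            loss += max(0.0, cosine(parts[i], parts[j]))
            pairs += 1
    return loss / pairs

def compositionality_loss(whole, parts):
    """Compositionality: a whole structure's embedding should
    align with an aggregate (here, the mean) of its part embeddings."""
    mean = [sum(dim) / len(parts) for dim in zip(*parts)]
    return 1.0 - cosine(whole, mean)
```

With orthogonal part embeddings the locality loss vanishes, and when the whole equals the mean of its parts the compositionality loss vanishes; an actual SSL objective would combine such terms over many sampled anatomical crops.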
Related papers
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z)
- HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis [19.04633470168871]
Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy.
In this paper, we propose a novel Hierarchical Adaptive Taxonomy (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights.
Our approach entails (1) the innovative HATs technique which translates spatial relationships among 15 distinct object classes into a versatile "plug-and-play" loss function that spans across regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, and (3) the
arXiv Detail & Related papers (2024-06-30T05:35:26Z)
- Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision [7.869873154804936]
We introduce Adam-v2, a new self-supervised learning framework extending Adam [79].
Adam-v2 explicitly incorporates part-whole hierarchies into its learning objectives through three key branches.
Experimental results across 10 tasks, compared to 11 baselines in zero-shot, few-shot transfer, and full fine-tuning settings, showcase Adam-v2's superior performance.
arXiv Detail & Related papers (2024-04-24T06:02:59Z)
- Learning Anatomically Consistent Embedding for Chest Radiography [4.990778682575127]
This paper introduces a novel SSL approach, called PEAC (patch embedding of anatomical consistency), for medical image analysis.
Specifically, we propose to learn global and local consistencies via stable grid-based matching and to transfer pre-trained PEAC models to diverse downstream tasks.
We extensively demonstrate that PEAC achieves significantly better performance than the existing state-of-the-art fully/self-supervised methods.
arXiv Detail & Related papers (2023-12-01T04:07:12Z)
- Region-based Contrastive Pretraining for Medical Image Retrieval with Anatomic Query [56.54255735943497]
We introduce a novel Region-based contrastive pretraining for Medical Image Retrieval (RegionMIR)
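Region-based contrastive pretraining is typically built on an InfoNCE-style loss that pulls an anchor region toward a matching region of the same anatomy and pushes it away from other regions. The sketch below is a generic illustration under that assumption, not RegionMIR's actual implementation:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss over region embeddings: low when the anchor is
    most similar to the positive (same anatomy), high otherwise."""
    def sim(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # First logit is the anchor-positive similarity; the rest are negatives.
    logits = [sim(anchor, positive)] + [sim(anchor, n) for n in negatives]
    exps = [math.exp(l / temperature) for l in logits]
    return -math.log(exps[0] / sum(exps))
```

A matched anchor/positive pair with dissimilar negatives yields a near-zero loss, while a mismatched pair yields a large one, which is what drives region embeddings of the same anatomy together during pretraining.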
arXiv Detail & Related papers (2023-05-09T16:46:33Z)
- S3M: Scalable Statistical Shape Modeling through Unsupervised Correspondences [91.48841778012782]
We propose an unsupervised method to simultaneously learn local and global shape structures across population anatomies.
Our pipeline significantly improves unsupervised correspondence estimation for SSMs compared to baseline methods.
Our method is robust enough to learn from noisy neural network predictions, potentially enabling scaling SSMs to larger patient populations.
arXiv Detail & Related papers (2023-04-15T09:39:52Z)
- Anatomical Invariance Modeling and Semantic Alignment for Self-supervised Learning in 3D Medical Image Analysis [6.87667643104543]
Self-supervised learning (SSL) has recently achieved promising performance for 3D medical image analysis tasks.
Most current methods follow existing SSL paradigms originally designed for natural images.
We propose a new self-supervised learning framework, namely Alice, that explicitly fulfills Anatomical invariance modeling and semantic alignment.
arXiv Detail & Related papers (2023-02-11T06:36:20Z)
- Seeking Common Ground While Reserving Differences: Multiple Anatomy Collaborative Framework for Undersampled MRI Reconstruction [49.16058553281751]
We present a novel deep MRI reconstruction framework with both anatomy-shared and anatomy-specific parameterized learners.
Experiments on brain, knee and cardiac MRI datasets demonstrate that three of these learners are able to enhance reconstruction performance via multiple anatomy collaborative learning.
arXiv Detail & Related papers (2022-06-15T08:19:07Z)
- PGL: Prior-Guided Local Self-supervised Learning for 3D Medical Image Segmentation [87.50205728818601]
We propose a PriorGuided Local (PGL) self-supervised model that learns the region-wise local consistency in the latent feature space.
Our PGL model learns the distinctive representations of local regions, and hence is able to retain structural information.
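The region-wise local consistency idea can be illustrated with a minimal loss that compares aligned local-region features from two augmented views of the same volume. This is a hypothetical sketch, not the PGL code:

```python
def local_consistency_loss(regions_a, regions_b):
    """Mean squared distance between aligned region features of two
    augmented views; zero when corresponding regions agree exactly."""
    total = 0.0
    for ra, rb in zip(regions_a, regions_b):
        total += sum((x - y) ** 2 for x, y in zip(ra, rb))
    return total / len(regions_a)
```

Minimizing such a term encourages the encoder to produce the same representation for a local region regardless of augmentation, which is how structural information is retained.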
arXiv Detail & Related papers (2020-11-25T11:03:11Z)
- Learning to Segment Anatomical Structures Accurately from One Exemplar [34.287877547953194]
Methods that can produce accurate anatomical structure segmentation without a large amount of fully annotated training images are highly desirable.
We propose Contour Transformer Network (CTN), a one-shot anatomy segmentor including a naturally built-in human-in-the-loop mechanism.
We demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively to the state-of-the-art fully supervised deep learning approaches.
arXiv Detail & Related papers (2020-07-06T20:27:38Z)
- DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning [125.99533416395765]
We developed a deep learning model capable of exploiting the structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data.
Our model was able to predict the functional organization of human visual cortex from anatomical properties alone, and it was also able to predict nuanced variations across individuals.
arXiv Detail & Related papers (2020-05-26T04:54:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.