Towards Foundation Models Learned from Anatomy in Medical Imaging via
Self-Supervision
- URL: http://arxiv.org/abs/2309.15358v1
- Date: Wed, 27 Sep 2023 01:53:45 GMT
- Authors: Mohammad Reza Hosseinzadeh Taher, Michael B. Gotway, Jianming Liang
- Abstract summary: We envision a foundation model for medical imaging that is consciously and purposefully developed upon human anatomy.
We devise a novel self-supervised learning (SSL) strategy that exploits the hierarchical nature of human anatomy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human anatomy is the foundation of medical imaging and boasts one striking
characteristic: its hierarchical nature, exhibiting two intrinsic properties:
(1) locality: each anatomical structure is morphologically distinct from the
others; and (2) compositionality: each anatomical structure is an integrated
part of a larger whole. We envision a foundation model for medical imaging that
is consciously and purposefully developed upon this foundation to gain the
capability of "understanding" human anatomy and to possess the fundamental
properties of medical imaging. As our first step in realizing this vision
towards foundation models in medical imaging, we devise a novel self-supervised
learning (SSL) strategy that exploits the hierarchical nature of human anatomy.
Our extensive experiments demonstrate that the SSL pretrained model, derived
from our training strategy, not only outperforms state-of-the-art (SOTA)
fully/self-supervised baselines but also enhances annotation efficiency,
offering potential few-shot segmentation capabilities with performance
improvements ranging from 9% to 30% for segmentation tasks compared to SSL
baselines. This performance is attributed to the significance of anatomy
comprehension via our learning strategy, which encapsulates the intrinsic
attributes of anatomical structures (locality and compositionality) within the
embedding space, properties that existing SSL methods overlook. All code and
pretrained models are available at https://github.com/JLiangLab/Eden.
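The abstract's two properties suggest a hierarchy-aware training objective: embeddings of parts should compose into the embedding of their whole (compositionality) while distinct parts stay separable (locality). A minimal, purely illustrative sketch of such an objective, not the authors' released code from the Eden repository, using a toy linear encoder and 1-D "images" as stand-ins:

```python
# Illustrative sketch only: a toy hierarchy-aware SSL objective on 1-D
# "images". The encoder, loss form, and data are assumptions for
# demonstration, not the method released at github.com/JLiangLab/Eden.
import numpy as np

rng = np.random.default_rng(0)

def encode(patch, W):
    """Toy encoder: per-pixel features averaged, then L2-normalized."""
    feats = W @ np.vstack([patch, patch ** 2])  # shape (k, len(patch))
    v = feats.mean(axis=1)
    return v / np.linalg.norm(v)

def hierarchical_ssl_loss(image, W, n_parts=2):
    """Compositionality term pulls the whole-image embedding toward the
    normalized mean of its parts' embeddings; locality term pushes
    distinct parts apart via a hinge on their cosine similarity."""
    whole = encode(image, W)
    parts = np.array_split(image, n_parts)
    embs = [encode(p, W) for p in parts]
    mean_part = np.mean(embs, axis=0)
    mean_part /= np.linalg.norm(mean_part)
    comp = float(np.sum((whole - mean_part) ** 2))   # compositionality
    loc = sum(max(0.0, float(embs[i] @ embs[j]))     # locality
              for i in range(n_parts) for j in range(i + 1, n_parts))
    return comp + loc

image = rng.normal(size=16)
W = rng.normal(size=(8, 2))
print(hierarchical_ssl_loss(image, W))  # non-negative scalar loss
```

In a real pipeline the toy linear encoder would be a deep network and the loss would be minimized over W by gradient descent across many anatomical crops; the sketch only shows how the two intrinsic properties can be turned into separate loss terms.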
Related papers
- ACE: Anatomically Consistent Embeddings in Composition and Decomposition [5.939793479232325]
This paper introduces a novel self-supervised learning (SSL) approach called ACE to learn anatomically consistent embedding via composition and decomposition.
Experimental results across 6 datasets and 2 backbones, evaluated in few-shot learning, fine-tuning, and property analysis, show ACE's superior robustness, transferability, and clinical potential.
arXiv Detail & Related papers (2025-01-17T11:39:47Z)
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z)
- HATs: Hierarchical Adaptive Taxonomy Segmentation for Panoramic Pathology Image Analysis [19.04633470168871]
Panoramic image segmentation in computational pathology presents a remarkable challenge due to the morphologically complex and variably scaled anatomy.
In this paper, we propose a novel Hierarchical Adaptive Taxonomy (HATs) method, which is designed to thoroughly segment panoramic views of kidney structures by leveraging detailed anatomical insights.
Our approach entails (1) the innovative HATs technique which translates spatial relationships among 15 distinct object classes into a versatile "plug-and-play" loss function that spans across regions, functional units, and cells, (2) the incorporation of anatomical hierarchies and scale considerations into a unified simple matrix representation for all panoramic entities, and (3) the
arXiv Detail & Related papers (2024-06-30T05:35:26Z)
- Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision [7.869873154804936]
We introduce Adam-v2, a new self-supervised learning framework extending Adam [79].
Adam-v2 explicitly incorporates part-whole hierarchies into its learning objectives through three key branches.
Experimental results across 10 tasks, compared to 11 baselines in zero-shot, few-shot transfer, and full fine-tuning settings, showcase Adam-v2's superior performance.
arXiv Detail & Related papers (2024-04-24T06:02:59Z)
- Knowledge-enhanced Visual-Language Pretraining for Computational Pathology [68.6831438330526]
We consider the problem of visual representation learning for computational pathology, by exploiting large-scale image-text pairs gathered from public resources.
We curate a pathology knowledge tree that consists of 50,470 informative attributes for 4,718 diseases requiring pathology diagnosis from 32 human tissues.
arXiv Detail & Related papers (2024-04-15T17:11:25Z)
- Learning Anatomically Consistent Embedding for Chest Radiography [4.990778682575127]
This paper introduces a novel SSL approach, called PEAC (patch embedding of anatomical consistency), for medical image analysis.
Specifically, we propose to learn global and local consistencies via stable grid-based matching, and to transfer pre-trained PEAC models to diverse downstream tasks.
We extensively demonstrate that PEAC achieves significantly better performance than the existing state-of-the-art fully/self-supervised methods.
arXiv Detail & Related papers (2023-12-01T04:07:12Z)
- S3M: Scalable Statistical Shape Modeling through Unsupervised Correspondences [91.48841778012782]
We propose an unsupervised method to simultaneously learn local and global shape structures across population anatomies.
Our pipeline significantly improves unsupervised correspondence estimation for SSMs compared to baseline methods.
Our method is robust enough to learn from noisy neural network predictions, potentially enabling scaling SSMs to larger patient populations.
arXiv Detail & Related papers (2023-04-15T09:39:52Z)
- Anatomical Invariance Modeling and Semantic Alignment for Self-supervised Learning in 3D Medical Image Analysis [6.87667643104543]
Self-supervised learning (SSL) has recently achieved promising performance for 3D medical image analysis tasks.
Most current methods follow existing SSL paradigms originally designed for photographic (natural) images.
We propose a new self-supervised learning framework, namely Alice, that explicitly fulfills Anatomical invariance modeling and semantic alignment.
arXiv Detail & Related papers (2023-02-11T06:36:20Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Seeking Common Ground While Reserving Differences: Multiple Anatomy Collaborative Framework for Undersampled MRI Reconstruction [49.16058553281751]
We present a novel deep MRI reconstruction framework with both anatomy-shared and anatomy-specific parameterized learners.
Experiments on brain, knee and cardiac MRI datasets demonstrate that three of these learners are able to enhance reconstruction performance via multiple anatomy collaborative learning.
arXiv Detail & Related papers (2022-06-15T08:19:07Z)
- DeepRetinotopy: Predicting the Functional Organization of Human Visual Cortex from Structural MRI Data using Geometric Deep Learning [125.99533416395765]
We developed a deep learning model capable of exploiting the structure of the cortex to learn the complex relationship between brain function and anatomy from structural and functional MRI data.
Our model was able to predict the functional organization of human visual cortex from anatomical properties alone, and it was also able to predict nuanced variations across individuals.
arXiv Detail & Related papers (2020-05-26T04:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences.