Context-aware Self-supervised Learning for Medical Images Using Graph
Neural Network
- URL: http://arxiv.org/abs/2207.02957v1
- Date: Wed, 6 Jul 2022 20:30:12 GMT
- Title: Context-aware Self-supervised Learning for Medical Images Using Graph
Neural Network
- Authors: Li Sun, Ke Yu, Kayhan Batmanghelich
- Abstract summary: We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
The structure of the graph is informed by anatomical correspondences between each patient and an anatomical atlas.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although self-supervised learning enables us to bootstrap the training by
exploiting unlabeled data, the generic self-supervised methods for natural
images do not sufficiently incorporate the context. For medical images, a
desirable method should be sensitive enough to detect deviation from
normal-appearing tissue of each anatomical region; here, anatomy is the
context. We introduce a novel approach with two levels of self-supervised
representation learning objectives: one on the regional anatomical level and
another on the patient-level. We use graph neural networks to incorporate the
relationship between different anatomical regions. The structure of the graph
is informed by anatomical correspondences between each patient and an
anatomical atlas. In addition, the graph representation has the advantage of
handling any arbitrarily sized image in full resolution. Experiments on
large-scale Computed Tomography (CT) datasets of lung images show that our
approach compares favorably to baseline methods that do not account for the
context. We use the learned embedding for staging lung tissue abnormalities
related to COVID-19.
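The core mechanism in the abstract can be illustrated with a minimal sketch: per-region features sit on the nodes of a graph whose edges come from atlas-based anatomical correspondences, one graph-convolution step propagates context between regions (the regional level), and mean pooling aggregates the result into a single patient-level embedding. This is an illustrative toy in plain NumPy, not the authors' implementation; all shapes, variable names, and the choice of a single GCN-style layer with mean pooling are assumptions.

```python
# Toy sketch (assumed, not the paper's code): region-level graph convolution
# followed by patient-level mean pooling.
import numpy as np

def graph_conv(X, A, W):
    """One GCN-style propagation step: symmetrically normalized adjacency
    times node features times a weight matrix, followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))        # degrees >= 1, so safe
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

rng = np.random.default_rng(0)
n_regions, d_in, d_out = 5, 8, 4
X = rng.normal(size=(n_regions, d_in))              # per-region features
A = (rng.random((n_regions, n_regions)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                                         # symmetric region adjacency
W = rng.normal(size=(d_in, d_out))

H = graph_conv(X, A, W)                             # regional-level representations
patient_embedding = H.mean(axis=0)                  # patient-level representation
print(patient_embedding.shape)                      # prints (4,)
```

In the paper the two levels each carry their own self-supervised objective; here the pooling step simply shows how a graph representation yields a single fixed-size patient embedding regardless of how many regions (and hence what image size) a patient contributes.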
Related papers
- Connecting the Dots: Graph Neural Network Powered Ensemble and
Classification of Medical Images
Deep learning for medical imaging is limited due to the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling the proficient extraction of features and modeling of relationships.
arXiv Detail & Related papers (2023-11-13T13:20:54Z)
- Multimodal brain age estimation using interpretable adaptive
population-graph learning
We propose a framework that learns a population graph structure optimized for the downstream task.
An attention mechanism assigns weights to a set of imaging and non-imaging features.
By visualizing the attention weights that were the most important for the graph construction, we increase the interpretability of the graph.
arXiv Detail & Related papers (2023-07-10T15:35:31Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical
Imaging via Second-order Graph Matching
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Region-based Contrastive Pretraining for Medical Image Retrieval with
Anatomic Query
We introduce RegionMIR, a novel region-based contrastive pretraining method for medical image retrieval.
arXiv Detail & Related papers (2023-05-09T16:46:33Z)
- Stain based contrastive co-training for histopathological image analysis
We propose a novel semi-supervised learning approach for classification of histopathology images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- ContIG: Self-supervised Multimodal Contrastive Learning for Medical
Imaging with Genetics
We propose ContIG, a self-supervised method that can learn from large datasets of unlabeled medical images and genetic data.
Our approach aligns images and several genetic modalities in the feature space using a contrastive loss.
We also perform genome-wide association studies on the features learned by our models, uncovering interesting relationships between images and genetic data.
arXiv Detail & Related papers (2021-11-26T11:06:12Z)
- GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease
Localization in X-ray images
Cross-region and cross-image relationships, as contextual and compensating information, are vital to obtaining more consistent and integral regions.
We propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.
In this way, our approach achieves state-of-the-art results on the NIH chest X-ray dataset for weakly-supervised disease localization.
arXiv Detail & Related papers (2021-07-14T01:27:07Z)
- A Survey on Graph-Based Deep Learning for Computational Histopathology
We have witnessed a rapid expansion of the use of machine learning and deep learning for the analysis of digital pathology and biopsy image patches.
Traditional learning over patch-wise features using convolutional neural networks limits the model when attempting to capture global contextual information.
We provide a conceptual grounding of graph-based deep learning and discuss its current success for tumor localization and classification, tumor invasion and staging, image retrieval, and survival prediction.
arXiv Detail & Related papers (2021-07-01T07:50:35Z)
- Context Matters: Graph-based Self-supervised Representation Learning for
Medical Images
We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
Our model can identify clinically relevant regions in the images.
arXiv Detail & Related papers (2020-12-11T16:26:07Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Structured Landmark Detection via Topology-Adapting Deep Graph Learning
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.