Context Matters: Graph-based Self-supervised Representation Learning for Medical Images
- URL: http://arxiv.org/abs/2012.06457v1
- Date: Fri, 11 Dec 2020 16:26:07 GMT
- Title: Context Matters: Graph-based Self-supervised Representation Learning for Medical Images
- Authors: Li Sun, Ke Yu, Kayhan Batmanghelich
- Abstract summary: We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
Our model can identify clinically relevant regions in the images.
- Score: 21.23065972218941
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Supervised learning methods require a large volume of annotated data.
Collecting such datasets is time-consuming and expensive. To date, very few
annotated COVID-19 imaging datasets are available. Although self-supervised
learning enables us to bootstrap the training by exploiting unlabeled data, the
generic self-supervised methods for natural images do not sufficiently
incorporate the context. For medical images, a desirable method should be
sensitive enough to detect deviation from normal-appearing tissue of each
anatomical region; here, anatomy is the context. We introduce a novel approach
with two levels of self-supervised representation learning objectives: one on
the regional anatomical level and another on the patient-level. We use graph
neural networks to incorporate the relationship between different anatomical
regions. The structure of the graph is informed by anatomical correspondences
between each patient and an anatomical atlas. In addition, the graph
representation has the advantage of handling any arbitrarily sized image in
full resolution. Experiments on large-scale Computed Tomography (CT) datasets
of lung images show that our approach compares favorably to baseline methods
that do not account for the context. We use the learned embedding to quantify
the clinical progression of COVID-19 and show that our method generalizes well
to COVID-19 patients from different hospitals. Qualitative results suggest that
our model can identify clinically relevant regions in the images.
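As a rough illustration of the two-level objective described in the abstract, the sketch below pairs a region-level encoder with one step of graph message passing over the anatomical-region graph and pools the result into a patient-level embedding. This is a minimal sketch under stated assumptions: the feature dimensions, the single-layer GNN, and the InfoNCE pairing are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of two-level graph-based self-supervised learning.
# Assumptions: region features are pre-extracted via atlas registration;
# the loss pairing below is illustrative, not the paper's exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """Contrastive loss where matching rows of z1 and z2 are positive pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # (N, N) cosine similarities
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

class GraphSSL(nn.Module):
    """One message-passing step over the anatomical-region graph."""
    def __init__(self, d_in=512, d=128):
        super().__init__()
        self.region_enc = nn.Sequential(nn.Linear(d_in, d), nn.ReLU(), nn.Linear(d, d))
        self.gnn = nn.Linear(2 * d, d)

    def forward(self, feats, adj):
        # feats: (R, d_in) pooled features, one row per anatomical region;
        # adj:   (R, R) float adjacency from patient-to-atlas correspondences.
        h = self.region_enc(feats)
        msg = adj @ h / adj.sum(1, keepdim=True).clamp(min=1)  # mean over neighbours
        h = F.relu(self.gnn(torch.cat([h, msg], dim=1)))
        return h, h.mean(dim=0)  # region-level embeddings, patient-level embedding

# Region-level loss: info_nce over two augmented views of the same regions.
# Patient-level loss: info_nce over pooled embeddings across a batch of scans.
```

Because the graph has one node per anatomical region rather than a fixed pixel grid, the same model handles arbitrarily sized images at full resolution, as the abstract notes.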
Related papers
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
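Read as next-token prediction over visual tokens, the pre-training step might look like the sketch below; the tokenizer, vocabulary size, and causal Transformer depth are illustrative assumptions, not the paper's architecture.

```python
# Illustrative autoregressive pre-training over discretized visual tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_model, seq_len = 1024, 256, 64                # assumed sizes
embed = nn.Embedding(vocab, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
decoder = nn.TransformerEncoder(layer, num_layers=4)   # causal mask => AR decoder
head = nn.Linear(d_model, vocab)

tokens = torch.randint(0, vocab, (2, seq_len))         # (batch, visual-token sequence)
mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
h = decoder(embed(tokens), mask=mask)
loss = F.cross_entropy(head(h[:, :-1]).reshape(-1, vocab),  # predict token t+1
                       tokens[:, 1:].reshape(-1))
```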
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- Connecting the Dots: Graph Neural Network Powered Ensemble and Classification of Medical Images [0.0]
Deep learning for medical imaging is limited due to the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling effective feature extraction and relationship modeling.
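A minimal sketch of the superpixel-to-graph step is shown below; SLIC stands in for the Image Foresting Transform purely for illustration, and the node/edge definitions (mean color, spatial adjacency) are assumptions.

```python
# Sketch: superpixels -> graph. SLIC is a stand-in for the Image Foresting
# Transform; node and edge definitions are assumptions for illustration.
import numpy as np
from skimage.segmentation import slic

def image_to_graph(img, n_segments=100):
    """img: (H, W, 3) RGB array. Nodes: mean color per superpixel.
    Edges: pairs of superpixels that share a boundary."""
    labels = slic(img, n_segments=n_segments, start_label=0)
    n = labels.max() + 1
    nodes = np.stack([img[labels == i].mean(axis=0) for i in range(n)])
    adj = np.zeros((n, n), dtype=bool)
    for a, b in ((labels[:, :-1], labels[:, 1:]),    # horizontal neighbours
                 (labels[:-1, :], labels[1:, :])):   # vertical neighbours
        touch = a != b
        adj[a[touch], b[touch]] = adj[b[touch], a[touch]] = True
    return nodes, adj
```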
arXiv Detail & Related papers (2023-11-13T13:20:54Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Graph Self-Supervised Learning for Endoscopic Image Matching [1.8275108630751844]
We propose a novel self-supervised approach that combines Convolutional Neural Networks for capturing local visual appearance and attention-based Graph Neural Networks for modeling spatial relationships between key-points.
Our approach is trained in a fully self-supervised scheme without the need for labeled data.
Our approach outperforms state-of-the-art handcrafted and deep learning-based methods, demonstrating exceptional performance in terms of precision rate (1.0) and matching score (99.3%).
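A toy version of such a matcher, with cross-attention over keypoint descriptors followed by mutual-nearest-neighbour filtering, could look like the sketch below; the descriptor dimension, single attention layer, and matching rule are assumptions rather than the paper's design.

```python
# Toy matcher: cross-attention over CNN keypoint descriptors, then mutual
# nearest-neighbour filtering. Sizes and the matching rule are assumptions.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)

def match(desc_a, desc_b):
    # desc_a: (1, K, 128) and desc_b: (1, L, 128) keypoint descriptors
    ctx_a, _ = attn(desc_a, desc_b, desc_b)   # a's keypoints attend to b's
    ctx_b, _ = attn(desc_b, desc_a, desc_a)
    sim = ctx_a[0] @ ctx_b[0].t()             # (K, L) similarity matrix
    fwd, bwd = sim.argmax(dim=1), sim.argmax(dim=0)
    k = torch.arange(fwd.numel())
    mutual = bwd[fwd] == k                    # keep mutual nearest neighbours
    return k[mutual], fwd[mutual]             # matched (a-index, b-index) pairs
```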
arXiv Detail & Related papers (2023-06-19T19:53:41Z)
- Region-based Contrastive Pretraining for Medical Image Retrieval with Anatomic Query [56.54255735943497]
We introduce a novel Region-based contrastive pretraining for Medical Image Retrieval (RegionMIR).
arXiv Detail & Related papers (2023-05-09T16:46:33Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Context-aware Self-supervised Learning for Medical Images Using Graph Neural Network [24.890564475121238]
We introduce a novel approach with two levels of self-supervised representation learning objectives.
We use graph neural networks to incorporate the relationship between different anatomical regions.
The structure of the graph is informed by anatomical correspondences between each patient and an anatomical atlas.
arXiv Detail & Related papers (2022-07-06T20:30:12Z)
- ContIG: Self-supervised Multimodal Contrastive Learning for Medical Imaging with Genetics [4.907551775445731]
We propose ContIG, a self-supervised method that can learn from large datasets of unlabeled medical images and genetic data.
Our approach aligns images and several genetic modalities in the feature space using a contrastive loss.
We also perform genome-wide association studies on the features learned by our models, uncovering interesting relationships between images and genetic data.
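The image-genetics alignment can be pictured as a symmetric cross-modal InfoNCE loss, sketched below; the symmetric formulation and temperature are assumptions following the summary above, not ContIG's verbatim objective.

```python
# Sketch of cross-modal contrastive alignment (CLIP-style); an assumption
# about the loss, following the summary above.
import torch
import torch.nn.functional as F

def align_loss(img_z, gen_z, tau=0.07):
    """img_z: (N, d) image embeddings; gen_z: (N, d) genetic embeddings for
    the same N individuals. Row i of each matrix forms a positive pair."""
    img_z, gen_z = F.normalize(img_z, dim=1), F.normalize(gen_z, dim=1)
    logits = img_z @ gen_z.t() / tau
    targets = torch.arange(img_z.size(0))
    # Symmetric: match images to genetics and genetics back to images.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```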
arXiv Detail & Related papers (2021-11-26T11:06:12Z)
- GREN: Graph-Regularized Embedding Network for Weakly-Supervised Disease Localization in X-ray images [35.18562405272593]
Cross-region and cross-image relationships, as contextual and compensating information, are vital for obtaining more consistent and complete regions.
We propose the Graph Regularized Embedding Network (GREN), which leverages the intra-image and inter-image information to locate diseases on chest X-ray images.
By means of this, our approach achieves the state-of-the-art result on NIH chest X-ray dataset for weakly-supervised disease localization.
arXiv Detail & Related papers (2021-07-14T01:27:07Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
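A dynamic adjacency matrix in this sense can be sketched as edge weights computed from the features themselves; the attention-style formulation below is an assumption about the mechanism, for illustration only.

```python
# Sketch of a GCN layer whose adjacency is computed dynamically from the
# node features (an assumed reading of "dynamic adjacency matrix").
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGCNLayer(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, x):
        # x: (B, N, d) with one node per disease label / feature-map location
        adj = F.softmax(x @ x.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        return F.relu(adj @ self.proj(x))  # propagate along learned correlations
```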
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.