How GNNs Facilitate CNNs in Mining Geometric Information from
Large-Scale Medical Images
- URL: http://arxiv.org/abs/2206.07599v1
- Date: Wed, 15 Jun 2022 15:27:48 GMT
- Title: How GNNs Facilitate CNNs in Mining Geometric Information from
Large-Scale Medical Images
- Authors: Yiqing Shen, Bingxin Zhou, Xinye Xiong, Ruitian Gao, Yu Guang Wang
- Abstract summary: We propose a fusion framework for enhancing the global image-level representation captured by convolutional neural networks (CNNs)
We evaluate our fusion strategies on histology datasets curated from large patient cohorts of colorectal and gastric cancers.
- Score: 2.2699159408903484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gigapixel medical images provide massive data, both morphological textures
and spatial information, to be mined. Due to the large data scale in histology,
deep learning methods play an increasingly significant role as feature
extractors. Existing solutions heavily rely on convolutional neural networks
(CNNs) for global pixel-level analysis, leaving the underlying local geometric
structure such as the interaction between cells in the tumor microenvironment
unexplored. The topological structure in medical images, as proven to be
closely related to tumor evolution, can be well characterized by graphs. To
obtain a more comprehensive representation for downstream oncology tasks, we
propose a fusion framework for enhancing the global image-level representation
captured by CNNs with the geometry of cell-level spatial information learned by
graph neural networks (GNN). The fusion layer optimizes an integration between
collaborative features of global images and cell graphs. Two fusion strategies
have been developed: a simple MLP-based fusion, which proves efficient after
fine-tuning, and a Transformer-based fusion, which performs best when fusing
multiple networks. We evaluate our fusion strategies on histology datasets
multiple networks. We evaluate our fusion strategies on histology datasets
curated from large patient cohorts of colorectal and gastric cancers for three
biomarker prediction tasks. Both models outperform plain CNNs or GNNs,
reaching a consistent AUC improvement of more than 5% on various network
backbones. The experimental results yield the necessity for combining
image-level morphological features with cell spatial relations in medical image
analysis. Codes are available at https://github.com/yiqings/HEGnnEnhanceCnn.
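The abstract's MLP fusion can be sketched in a minimal form: concatenate a global CNN image embedding with a GNN cell-graph embedding and pass the result through a small MLP. This is an illustrative sketch only, not the authors' implementation; the feature dimensions (512 for the CNN, 128 for the GNN) and the two-layer architecture are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings: a 512-d CNN image feature and a 128-d GNN cell-graph feature.
cnn_feat = rng.standard_normal(512)
gnn_feat = rng.standard_normal(128)

def mlp_fuse(img_vec, graph_vec, w1, b1, w2, b2):
    """Concatenate the two embeddings and map them through a two-layer MLP."""
    x = np.concatenate([img_vec, graph_vec])  # (640,)
    h = np.maximum(w1 @ x + b1, 0.0)          # ReLU hidden layer
    return w2 @ h + b2                        # fused representation

hidden_dim, out_dim = 256, 64
w1 = rng.standard_normal((hidden_dim, 640)) * 0.01
b1 = np.zeros(hidden_dim)
w2 = rng.standard_normal((out_dim, hidden_dim)) * 0.01
b2 = np.zeros(out_dim)

fused = mlp_fuse(cnn_feat, gnn_feat, w1, b1, w2, b2)
print(fused.shape)  # (64,)
```

In practice the fused vector would feed a task head (e.g. a biomarker classifier) and the whole stack would be trained end to end; the Transformer variant would replace the concatenation-plus-MLP with attention over the two feature streams.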
Related papers
- Mew: Multiplexed Immunofluorescence Image Analysis through an Efficient Multiplex Network [84.88767228835928]
We introduce Mew, a novel framework designed to efficiently process mIF images through the lens of a multiplex network.
Mew innovatively constructs a multiplex network comprising two distinct layers: a Voronoi network for geometric information and a Cell-type network for capturing cell-wise homogeneity.
This framework equips a scalable and efficient Graph Neural Network (GNN), capable of processing the entire graph during training.
arXiv Detail & Related papers (2024-07-25T08:22:30Z) - Transformer-CNN Fused Architecture for Enhanced Skin Lesion Segmentation [0.0]
Convolutional neural networks (CNNs) have greatly advanced medical image segmentation.
CNNs have been found to struggle with learning long-range dependencies and capturing global context.
We propose a hybrid architecture that combines the ability of transformers to capture global dependencies with the ability of CNNs to capture low-level spatial details.
arXiv Detail & Related papers (2024-01-10T18:36:14Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Robust Tumor Segmentation with Hyperspectral Imaging and Graph Neural
Networks [31.87960207119459]
We propose an improved methodology that leverages the spatial context of tiles for more robust and smoother segmentation.
To address the irregular shapes of tiles, we utilize Graph Neural Networks (GNNs) to propagate context information across neighboring regions.
Our findings demonstrate that context-aware GNN algorithms can robustly find tumor demarcations on HSI images.
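The context propagation over neighboring tiles described above can be sketched as one round of mean-neighbor message passing, the basic operation underlying most GNN layers. This is a hypothetical toy example (five tiles, 4-d features, a hand-written adjacency list), not the paper's method.

```python
import numpy as np

# Hypothetical tile graph: 5 tiles with 4-d features and an adjacency list of neighbors.
feats = np.arange(20, dtype=float).reshape(5, 4)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}

def propagate(feats, adj):
    """One round of message passing: each tile averages its own feature
    with the mean of its neighbors' features."""
    out = np.empty_like(feats)
    for i, nbrs in adj.items():
        neighbor_mean = feats[nbrs].mean(axis=0)
        out[i] = 0.5 * (feats[i] + neighbor_mean)
    return out

smoothed = propagate(feats, adj)
print(smoothed[0])  # [2. 3. 4. 5.]
```

Stacking several such rounds lets information flow across larger neighborhoods, which is how context-aware GNNs smooth predictions over irregularly shaped tiles.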
arXiv Detail & Related papers (2023-11-20T14:07:38Z) - Asymmetric Co-Training with Explainable Cell Graph Ensembling for
Histopathological Image Classification [28.949527817202984]
We propose an asymmetric co-training framework combining a deep graph convolutional network and a convolutional neural network.
We build a 14-layer deep graph convolutional network to handle cell graph data.
We evaluate our approach on the private LUAD7C and public colorectal cancer datasets.
arXiv Detail & Related papers (2023-08-24T12:27:03Z) - Breast Ultrasound Tumor Classification Using a Hybrid Multitask
CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z) - NexToU: Efficient Topology-Aware U-Net for Medical Image Segmentation [3.8336080345323227]
CNN and Transformer variants have emerged as the leading medical image segmentation backbones.
We propose NexToU, a novel hybrid architecture for medical image segmentation.
Our method consistently outperforms other state-of-the-art (SOTA) architectures.
arXiv Detail & Related papers (2023-05-25T10:18:57Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - Multi-Scale Relational Graph Convolutional Network for Multiple Instance
Learning in Histopathology Images [2.6663738081163726]
We introduce the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method.
We model histopathology image patches and their relation with neighboring patches and patches at other scales as a graph.
We experiment on prostate cancer histopathology images to predict magnification groups based on the extracted features from patches.
arXiv Detail & Related papers (2022-12-17T02:26:42Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.