Robust Tumor Segmentation with Hyperspectral Imaging and Graph Neural
Networks
- URL: http://arxiv.org/abs/2311.11782v1
- Date: Mon, 20 Nov 2023 14:07:38 GMT
- Title: Robust Tumor Segmentation with Hyperspectral Imaging and Graph Neural
Networks
- Authors: Mayar Lotfy, Anna Alperovich, Tommaso Giannantonio, Björn Barz,
Xiaohan Zhang, Felix Holm, Nassir Navab, Felix Boehm, Carolin Schwamborn,
Thomas K. Hoffmann, and Patrick J. Schuler
- Abstract summary: We propose an improved methodology that leverages the spatial context of tiles for more robust and smoother segmentation.
To address the irregular shapes of tiles, we utilize Graph Neural Networks (GNNs) to propagate context information across neighboring regions.
Our findings demonstrate that context-aware GNN algorithms can robustly find tumor demarcations on HSI images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Segmenting the boundary between tumor and healthy tissue during surgical
cancer resection poses a significant challenge. In recent years, Hyperspectral
Imaging (HSI) combined with Machine Learning (ML) has emerged as a promising
solution. However, due to the extensive information contained within the
spectral domain, most ML approaches primarily classify individual HSI
(super-)pixels, or tiles, without taking into account their spatial context. In
this paper, we propose an improved methodology that leverages the spatial
context of tiles for more robust and smoother segmentation. To address the
irregular shapes of tiles, we utilize Graph Neural Networks (GNNs) to propagate
context information across neighboring regions. The features for each tile
within the graph are extracted using a Convolutional Neural Network (CNN),
which is trained simultaneously with the subsequent GNN. Moreover, we
incorporate local image quality metrics into the loss function to enhance the
training procedure's robustness against low-quality regions in the training
images. We demonstrate the superiority of our proposed method using a clinical
ex vivo dataset consisting of 51 HSI images from 30 patients. Despite the
limited dataset, the GNN-based model significantly outperforms context-agnostic
approaches, accurately distinguishing between healthy and tumor tissues, even
in images from previously unseen patients. Furthermore, we show that our
carefully designed loss function, accounting for local image quality, results
in additional improvements. Our findings demonstrate that context-aware GNN
algorithms can robustly find tumor demarcations on HSI images, ultimately
contributing to better surgery success and patient outcome.
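The two ideas in the abstract — propagating tile features over a tile-adjacency graph and weighting the loss by local image quality — can be sketched as a NumPy toy. Everything below is an illustrative assumption: the paper's actual CNN feature extractor, graph construction, image-quality metric, and training setup are not specified here. The sketch uses a GCN-style mean-aggregation layer and a quality-weighted binary cross-entropy.

```python
import numpy as np

def gnn_layer(features, adjacency, weight, relu=True):
    """One mean-aggregation message-passing layer over the tile graph.

    features : (n_tiles, d_in) per-tile vectors (stand-in for CNN features).
    adjacency: (n_tiles, n_tiles) binary tile-adjacency matrix.
    weight   : (d_in, d_out) learnable projection.
    """
    a = adjacency + np.eye(adjacency.shape[0])   # self-loops keep own features
    a = a / a.sum(axis=1, keepdims=True)         # average over the neighborhood
    h = a @ features @ weight                    # propagate, then project
    return np.maximum(h, 0.0) if relu else h

def quality_weighted_bce(probs, labels, quality):
    """Binary cross-entropy where each tile is weighted by a quality score in [0, 1],
    so low-quality regions contribute less to the training signal."""
    eps = 1e-12
    ce = -(labels * np.log(probs + eps) + (1 - labels) * np.log(1 - probs + eps))
    return np.sum(quality * ce) / np.sum(quality)

# Toy example: 4 tiles in a chain, classified healthy (0) vs tumor (1).
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
w1 = rng.normal(size=(3, 2))
w2 = rng.normal(size=(2, 1))

hidden = gnn_layer(feats, adj, w1)               # hidden layer with ReLU
logits = gnn_layer(hidden, adj, w2, relu=False)  # linear output layer
probs = 1.0 / (1.0 + np.exp(-logits.ravel()))    # sigmoid to probabilities

labels = np.array([0.0, 0.0, 1.0, 1.0])
quality = np.array([1.0, 0.3, 1.0, 0.8])         # hypothetical per-tile quality
loss = quality_weighted_bce(probs, labels, quality)
```

In a real pipeline both the CNN producing `feats` and the GNN weights would be trained jointly by backpropagating through this loss; the fixed random weights here only demonstrate the data flow.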
Related papers
- Applying Conditional Generative Adversarial Networks for Imaging Diagnosis [3.881664394416534]
This study introduces an innovative application of Conditional Generative Adversarial Networks (C-GAN) integrated with Stacked Hourglass Networks (SHGN).
We address the problem of overfitting, common in deep learning models applied to complex imaging datasets, by augmenting data through rotation and scaling.
A hybrid loss function combining L1 and L2 reconstruction losses, enriched with adversarial training, is introduced to refine segmentation processes in intravascular ultrasound (IVUS) imaging.
arXiv Detail & Related papers (2024-07-17T23:23:09Z) - Connecting the Dots: Graph Neural Network Powered Ensemble and
Classification of Medical Images [0.0]
Deep learning for medical imaging is limited due to the requirement for large amounts of training data.
We employ the Image Foresting Transform to optimally segment images into superpixels.
These superpixels are subsequently transformed into graph-structured data, enabling the proficient extraction of features and modeling of relationships.
arXiv Detail & Related papers (2023-11-13T13:20:54Z) - Deepfake Image Generation for Improved Brain Tumor Segmentation [0.0]
This work investigates the feasibility of employing deep-fake image generation for effective brain tumor segmentation.
A Generative Adversarial Network was used for image-to-image translation and image segmentation using a U-Net-based convolutional neural network trained with deepfake images.
Results show improved performance in terms of image segmentation quality metrics, and could potentially assist when training with limited data.
arXiv Detail & Related papers (2023-07-26T16:11:51Z) - NexToU: Efficient Topology-Aware U-Net for Medical Image Segmentation [3.8336080345323227]
CNN and Transformer variants have emerged as the leading medical image segmentation backbones.
We propose NexToU, a novel hybrid architecture for medical image segmentation.
Our method consistently outperforms other state-of-the-art (SOTA) architectures.
arXiv Detail & Related papers (2023-05-25T10:18:57Z) - AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context
Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z) - PCRLv2: A Unified Visual Information Preservation Framework for
Self-supervised Pre-training in Medical Image Analysis [56.63327669853693]
We propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics.
We also address the preservation of scale information, a powerful tool in aiding image understanding.
The proposed unified SSL framework surpasses its self-supervised counterparts on various tasks.
arXiv Detail & Related papers (2023-01-02T17:47:27Z) - How GNNs Facilitate CNNs in Mining Geometric Information from
Large-Scale Medical Images [2.2699159408903484]
We propose a fusion framework for enhancing the global image-level representation captured by convolutional neural networks (CNNs).
We evaluate our fusion strategies on histology datasets curated from large patient cohorts of colorectal and gastric cancers.
arXiv Detail & Related papers (2022-06-15T15:27:48Z) - PSGR: Pixel-wise Sparse Graph Reasoning for COVID-19 Pneumonia
Segmentation in CT Images [83.26057031236965]
We propose a pixel-wise sparse graph reasoning (PSGR) module to enhance the modeling of long-range dependencies for COVID-19 infected region segmentation in CT images.
The PSGR module avoids imprecise pixel-to-node projections and preserves the inherent information of each pixel for global reasoning.
The solution has been evaluated against four widely-used segmentation models on three public datasets.
arXiv Detail & Related papers (2021-08-09T04:58:23Z) - Global Guidance Network for Breast Lesion Segmentation in Ultrasound
Images [84.03487786163781]
We develop a deep convolutional neural network equipped with a global guidance block (GGB) and breast lesion boundary detection modules.
Our network outperforms other medical image segmentation methods and the recent semantic segmentation methods on breast ultrasound lesion segmentation.
arXiv Detail & Related papers (2021-04-05T13:15:22Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Inf-Net: Automatic COVID-19 Lung Infection Segmentation from CT Images [152.34988415258988]
Automated detection of lung infections from computed tomography (CT) images offers a great potential to augment the traditional healthcare strategy for tackling COVID-19.
However, segmenting infected regions from CT slices faces several challenges, including high variation in infection characteristics and low intensity contrast between infections and normal tissues.
To address these challenges, a novel COVID-19 Deep Lung Infection Network (Inf-Net) is proposed to automatically identify infected regions from chest CT slices.
arXiv Detail & Related papers (2020-04-22T07:30:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.