A Joint Graph and Image Convolution Network for Automatic Brain Tumor
Segmentation
- URL: http://arxiv.org/abs/2109.05580v1
- Date: Sun, 12 Sep 2021 18:16:59 GMT
- Title: A Joint Graph and Image Convolution Network for Automatic Brain Tumor
Segmentation
- Authors: Camillo Saueressig, Adam Berkley, Reshma Munbodh, Ritambhara Singh
- Abstract summary: We present a joint graph convolution-image convolution neural network as our submission to the Brain Tumor Segmentation (BraTS) 2021 challenge.
We model each brain as a graph composed of distinct image regions, which is first segmented by a graph neural network (GNN).
The tumorous volume identified by the GNN is further refined by a simple (voxel) convolutional neural network (CNN), which produces the final segmentation.
- Score: 1.3381749415517017
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a joint graph convolution-image convolution neural network as our
submission to the Brain Tumor Segmentation (BraTS) 2021 challenge. We model
each brain as a graph composed of distinct image regions, which is initially
segmented by a graph neural network (GNN). Subsequently, the tumorous volume
identified by the GNN is further refined by a simple (voxel) convolutional
neural network (CNN), which produces the final segmentation. This approach
captures both global brain feature interactions via the graphical
representation and local image details through the use of convolutional
filters. We find that the GNN component by itself can effectively identify and
segment the brain tumors. The addition of the CNN further improves the median
performance of the model by 2 percent across all metrics evaluated. On the
validation set, our joint GNN-CNN model achieves mean Dice scores of 0.89,
0.81, 0.73 and mean Hausdorff distances (95th percentile) of 6.8, 12.6, 28.2mm
on the whole tumor, core tumor, and enhancing tumor, respectively.
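The two-stage design described in the abstract lends itself to a compact sketch. Below is a minimal, illustrative PyTorch / PyTorch Geometric outline of a region-graph GNN followed by a voxel-level refinement CNN; the class names (RegionGNN, RefinementCNN), layer sizes, and the omitted supervoxel extraction and tumor-cropping steps are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the two-stage GNN -> CNN idea described above, assuming
# supervoxel regions and their adjacency have already been extracted from the MRI.
# Class names, layer sizes, and the omitted crop-to-tumor step are illustrative
# assumptions, not the authors' released code.
import torch
from torch import nn
from torch_geometric.nn import GraphConv


class RegionGNN(nn.Module):
    """Stage 1: label each image region (graph node) with a tumor sub-class."""

    def __init__(self, in_feats: int, hidden: int, num_classes: int):
        super().__init__()
        self.conv1 = GraphConv(in_feats, hidden)
        self.conv2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h)  # [num_regions, num_classes] region-level logits


class RefinementCNN(nn.Module):
    """Stage 2: refine voxel labels inside the tumor volume found by the GNN."""

    def __init__(self, in_ch: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, num_classes, kernel_size=1),
        )

    def forward(self, patch):
        return self.net(patch)  # [B, num_classes, D, H, W] voxel-level logits


def segment(region_feats, edge_index, tumor_patch, num_classes=4):
    """region_feats: [R, F] node features; edge_index: [2, E] region adjacency;
    tumor_patch: [B, C, D, H, W] MRI crop around the GNN-predicted tumor
    (the cropping itself is omitted here for brevity)."""
    gnn = RegionGNN(region_feats.size(1), 64, num_classes)
    cnn = RefinementCNN(tumor_patch.size(1), num_classes)
    region_logits = gnn(region_feats, edge_index)  # coarse, graph-level pass
    voxel_logits = cnn(tumor_patch)                # fine, voxel-level refinement
    return region_logits, voxel_logits
```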
Related papers
- Breast Ultrasound Tumor Classification Using a Hybrid Multitask
CNN-Transformer Network [63.845552349914186]
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification.
Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations.
In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation.
arXiv Detail & Related papers (2023-08-04T01:19:32Z)
- Multi-class Brain Tumor Segmentation using Graph Attention Network [3.3635982995145994]
This work introduces an efficient brain tumor segmentation model by exploiting advances in MRI and graph neural networks (GNNs).
The model represents the volumetric MRI as a region adjacency graph (RAG) and learns to identify tumor types through a graph attention network (GAT); a minimal GAT-over-RAG sketch appears after this list.
arXiv Detail & Related papers (2023-02-11T04:30:40Z)
- 'A net for everyone': fully personalized and unsupervised neural
networks trained with longitudinal data from a single patient [0.5576716560981031]
We train personalized neural networks to detect tumor progression in longitudinal datasets.
For each patient, we trained their own neural network using just two images from different timepoints.
We show that data from just one patient can be used to train deep neural networks to monitor tumor change.
arXiv Detail & Related papers (2022-10-25T11:07:24Z)
- How GNNs Facilitate CNNs in Mining Geometric Information from
Large-Scale Medical Images [2.2699159408903484]
We propose a fusion framework for enhancing the global image-level representation captured by convolutional neural networks (CNNs).
We evaluate our fusion strategies on histology datasets curated from large patient cohorts of colorectal and gastric cancers.
arXiv Detail & Related papers (2022-06-15T15:27:48Z)
- A Graphical Approach For Brain Haemorrhage Segmentation [0.0]
Haemorrhaging of the brain is the leading cause of death in people between the ages of 15 and 24.
Recent advances in Deep Learning and Image Processing have utilised different modalities like CT scans to help automate the detection and segmentation of brain haemorrhage occurrences.
arXiv Detail & Related papers (2022-02-14T17:06:32Z)
- HNF-Netv2 for Brain Tumor Segmentation using multi-modal MR Imaging [86.52489226518955]
We extend our HNF-Net to HNF-Netv2 by adding inter-scale and intra-scale semantic discrimination enhancing blocks.
Our method won the RSNA 2021 Brain Tumor AI Challenge Prize (Segmentation Task).
arXiv Detail & Related papers (2022-02-10T06:34:32Z)
- BiTr-Unet: a CNN-Transformer Combined Network for MRI Brain Tumor
Segmentation [2.741266294612776]
We present a CNN-Transformer combined model called BiTr-Unet for brain tumor segmentation on multi-modal MRI scans.
The proposed BiTr-Unet achieves good performance on the BraTS 2021 validation dataset with mean Dice score 0.9076, 0.8392 and 0.8231, and mean Hausdorff distance 4.5322, 13.4592 and 14.9963 for the whole tumor, tumor core, and enhancing tumor, respectively.
arXiv Detail & Related papers (2021-09-25T04:18:34Z)
- PSGR: Pixel-wise Sparse Graph Reasoning for COVID-19 Pneumonia
Segmentation in CT Images [83.26057031236965]
We propose a pixel-wise sparse graph reasoning (PSGR) module to enhance the modeling of long-range dependencies for COVID-19 infected region segmentation in CT images.
The PSGR module avoids imprecise pixel-to-node projections and preserves the inherent information of each pixel for global reasoning.
The solution has been evaluated against four widely-used segmentation models on three public datasets.
arXiv Detail & Related papers (2021-08-09T04:58:23Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd
Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Brain tumor segmentation with self-ensembled, deeply-supervised 3D U-net
neural networks: a BraTS 2020 challenge solution [56.17099252139182]
We automate and standardize the task of brain tumor segmentation with U-net-like neural networks.
Two independent ensembles of models were trained, and each produced a brain tumor segmentation map.
Our solution achieved Dice scores of 0.79, 0.89, and 0.84, and Hausdorff distances (95th percentile) of 20.4, 6.7, and 19.5 mm on the final test dataset.
arXiv Detail & Related papers (2020-10-30T14:36:10Z)
- Understanding Graph Isomorphism Network for rs-fMRI Functional
Connectivity Analysis [49.05541693243502]
We develop a framework for analyzing fMRI data using the Graph Isomorphism Network (GIN).
One of the important contributions of this paper is the observation that the GIN is a dual representation of a convolutional neural network (CNN) in the graph space.
We exploit CNN-based saliency map techniques for the GNN, which we tailor to the proposed GIN with one-hot encoding.
arXiv Detail & Related papers (2020-01-10T23:40:09Z)
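For the region adjacency graph (RAG) plus graph attention network (GAT) formulation mentioned in the 'Multi-class Brain Tumor Segmentation using Graph Attention Network' entry above, a minimal sketch might look like the following; the node features, graph, and class count are placeholder assumptions, not that paper's code.

```python
# Illustrative GAT over a precomputed region adjacency graph (RAG):
# each node is an image region, each edge links spatially adjacent regions.
# Shapes and hyperparameters are placeholders, not the cited paper's settings.
import torch
from torch import nn
from torch_geometric.nn import GATConv


class RegionGAT(nn.Module):
    def __init__(self, in_feats: int, hidden: int, num_classes: int, heads: int = 4):
        super().__init__()
        self.gat1 = GATConv(in_feats, hidden, heads=heads)        # multi-head attention
        self.gat2 = GATConv(hidden * heads, num_classes, heads=1)

    def forward(self, x, edge_index):
        h = torch.relu(self.gat1(x, edge_index))  # attend over neighboring regions
        return self.gat2(h, edge_index)           # per-region tumor-type logits


# Toy usage: 100 regions with 20 features each and a random adjacency structure.
x = torch.randn(100, 20)
edge_index = torch.randint(0, 100, (2, 400))
logits = RegionGAT(in_feats=20, hidden=32, num_classes=4)(x, edge_index)  # [100, 4]
```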
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.