Moving from 2D to 3D: volumetric medical image classification for rectal
cancer staging
- URL: http://arxiv.org/abs/2209.05771v1
- Date: Tue, 13 Sep 2022 07:10:14 GMT
- Title: Moving from 2D to 3D: volumetric medical image classification for rectal
cancer staging
- Authors: Joohyung Lee, Jieun Oh, Inkyu Shin, You-sung Kim, Dae Kyung Sohn,
Tae-sung Kim, In So Kweon
- Abstract summary: Preoperative discrimination between T2 and T3 stages is arguably both the most challenging and clinically significant task for rectal cancer treatment.
We present a volumetric convolutional neural network to accurately discriminate T2 from T3 stage rectal cancer with rectal MR volumes.
- Score: 62.346649719614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Volumetric images from Magnetic Resonance Imaging (MRI) provide invaluable
information in preoperative staging of rectal cancer. Above all, accurate
preoperative discrimination between T2 and T3 stages is arguably both the most
challenging and clinically significant task for rectal cancer treatment, as
chemo-radiotherapy is usually recommended to patients with T3 (or greater)
stage cancer. In this study, we present a volumetric convolutional neural
network to accurately discriminate T2 from T3 stage rectal cancer with rectal
MR volumes. Specifically, we propose 1) a custom ResNet-based volume encoder
that models the inter-slice relationship with late fusion (i.e., 3D convolution
at the last layer), 2) a bilinear computation that aggregates the resulting
features from the encoder to create a volume-wise feature, and 3) a joint
minimization of triplet loss and focal loss. With MR volumes of pathologically
confirmed T2/T3 rectal cancer, we perform extensive experiments to compare
various designs within the framework of residual learning. As a result, our
network achieves an AUC of 0.831, which is higher than the reported accuracy of
the professional radiologist groups. We believe this method can be extended to
other volume analysis tasks.
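
For readers who want a concrete picture of the three components listed above (the late-fusion volume encoder, the bilinear aggregation, and the joint triplet/focal objective), the following PyTorch sketch shows one plausible way they fit together. It is an illustration only: the ResNet-18 backbone, feature sizes, margin, and focal-loss gamma are assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class LateFusionVolumeEncoder(nn.Module):
    """Per-slice 2D ResNet features fused across slices by a single 3D
    convolution at the last stage (late fusion). Sizes are illustrative."""
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        # keep the convolutional trunk, drop average pooling and the FC head
        self.slice_encoder = nn.Sequential(*list(backbone.children())[:-2])
        # late fusion: one 3D convolution over the stacked slice feature maps
        self.fuse3d = nn.Conv3d(512, feat_dim, kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, volume):                     # volume: (B, D, 1, H, W), D = slices
        b, d = volume.shape[:2]
        x = volume.reshape(b * d, 1, *volume.shape[3:]).repeat(1, 3, 1, 1)
        f = self.slice_encoder(x)                  # (B*D, 512, h, w)
        f = f.reshape(b, d, *f.shape[1:]).permute(0, 2, 1, 3, 4)   # (B, 512, D, h, w)
        f = self.fuse3d(f)                         # (B, feat_dim, D, h, w)
        return f.mean(dim=2)                       # pool over slices -> (B, feat_dim, h, w)

def bilinear_pool(fmap):
    """Aggregate a (B, C, h, w) feature map into a volume-wise bilinear feature."""
    b, c, h, w = fmap.shape
    x = fmap.reshape(b, c, h * w)
    gram = torch.bmm(x, x.transpose(1, 2)) / (h * w)              # (B, C, C)
    feat = gram.reshape(b, -1)
    feat = torch.sign(feat) * torch.sqrt(feat.abs() + 1e-8)       # signed square root
    return F.normalize(feat, dim=1)

def focal_loss(logits, targets, gamma=2.0):
    """Binary focal loss; gamma=2.0 is an assumed, not reported, value."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)
    return ((1.0 - pt) ** gamma * ce).mean()

# Joint objective: triplet loss on bilinear features pulls same-stage volumes
# together, while focal loss on the classifier logits handles class imbalance.
encoder, classifier = LateFusionVolumeEncoder(), nn.Linear(512 * 512, 2)
triplet = nn.TripletMarginLoss(margin=1.0)

def joint_loss(anchor, positive, negative, labels):
    za, zp, zn = (bilinear_pool(encoder(v)) for v in (anchor, positive, negative))
    return triplet(za, zp, zn) + focal_loss(classifier(za), labels)
```

In practice the anchor/positive/negative volumes for the triplet term would be mined within each mini-batch; how positives are sampled is likewise not specified here and is left as an assumption.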
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Slice-Consistent 3D Volumetric Brain CT-to-MRI Translation with 2D Brownian Bridge Diffusion Model [3.4248731707266264]
In neuroimaging, generally, brain CT is more cost-effective and accessible than MRI.
Medical image-to-image translation (I2I) serves as a promising solution.
This study is the first to achieve high-quality 3D medical I2I based only on a 2D diffusion model (DM), with no extra architectural models.
arXiv Detail & Related papers (2024-07-06T12:13:36Z) - Self-calibrated convolution towards glioma segmentation [45.74830585715129]
We evaluate self-calibrated convolutions in different parts of the nnU-Net network to demonstrate that self-calibrated modules in skip connections can significantly improve the enhanced-tumor and tumor-core segmentation accuracy.
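
For context, a self-calibrated convolution (in the spirit of the SCNet module this line of work builds on) splits the channels into two groups and gates one group with a calibration signal computed from a down-sampled view of itself. The sketch below is a simplified 2D version with an assumed channel split and pooling rate, not the exact module or placement used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv2d(nn.Module):
    """Simplified self-calibrated convolution: half the channels pass through
    an ordinary 3x3 convolution; the other half are gated by an attention map
    derived from a down-sampled (coarse) view of themselves."""
    def __init__(self, channels, pool_rate=4):
        super().__init__()
        c = channels // 2
        self.conv_plain = nn.Conv2d(c, c, 3, padding=1)    # ordinary branch
        self.conv_coarse = nn.Conv2d(c, c, 3, padding=1)   # applied to pooled features
        self.conv_main = nn.Conv2d(c, c, 3, padding=1)     # calibrated branch
        self.conv_out = nn.Conv2d(c, c, 3, padding=1)
        self.pool = nn.AvgPool2d(pool_rate, stride=pool_rate)

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        # calibration signal from a coarse view of x1, upsampled back to full size
        coarse = F.interpolate(self.conv_coarse(self.pool(x1)), size=x1.shape[2:],
                               mode="bilinear", align_corners=False)
        y1 = self.conv_out(self.conv_main(x1) * torch.sigmoid(x1 + coarse))
        y2 = self.conv_plain(x2)
        return torch.cat([y1, y2], dim=1)
```

Dropped into a skip connection, such a block lets the network re-weight encoder features spatially before they reach the decoder.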
arXiv Detail & Related papers (2024-02-07T19:51:13Z) - Image Synthesis-based Late Stage Cancer Augmentation and Semi-Supervised
Segmentation for MRI Rectal Cancer Staging [9.992841347751332]
The aim of this study is to segment the mesorectum, rectum, and rectal cancer region so that the system can predict T-stage from segmentation results.
In the ablation studies, our semi-supervised learning approach with the T-staging loss improved specificity by 0.13.
arXiv Detail & Related papers (2023-12-08T01:36:24Z) - Glioblastoma Tumor Segmentation using an Ensemble of Vision Transformers [0.0]
Glioblastoma is one of the most aggressive and deadliest types of brain cancer.
Brain Radiology Aided by Intelligent Neural NETworks (BRAINNET) generates robust tumor segmentation masks.
arXiv Detail & Related papers (2023-11-09T18:55:27Z) - Learned Local Attention Maps for Synthesising Vessel Segmentations [43.314353195417326]
We present an encoder-decoder model for synthesising segmentations of the main cerebral arteries in the circle of Willis (CoW) from only T2 MRI.
It uses learned local attention maps generated by dilating the segmentation labels, which forces the network to only extract information from the T2 MRI relevant to synthesising the CoW.
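
The label-dilation mechanism can be pictured with a short sketch: a binary vessel segmentation is morphologically dilated and used as a spatial mask on the T2 volume, so the network only sees the neighbourhood of the vessels. The dilation radius and the use of SciPy here are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def local_attention_mask(vessel_label: np.ndarray, radius: int = 5) -> np.ndarray:
    """Dilate a binary vessel segmentation into a local attention map.
    `radius` (in voxels) is an illustrative choice, not the paper's value."""
    structure = np.ones((2 * radius + 1,) * vessel_label.ndim, dtype=bool)
    return binary_dilation(vessel_label.astype(bool), structure=structure)

def masked_t2(t2_volume: np.ndarray, vessel_label: np.ndarray, radius: int = 5) -> np.ndarray:
    """Zero out everything in the T2 volume outside the dilated vessel region."""
    return t2_volume * local_attention_mask(vessel_label, radius)
```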
arXiv Detail & Related papers (2023-08-24T15:32:27Z) - 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks (by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, respectively), and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z) - View-Disentangled Transformer for Brain Lesion Detection [50.4918615815066]
We propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection.
First, the proposed transformer harvests long-range correlation among different positions in a 3D brain scan.
Second, the transformer models a stack of slice features as multiple 2D views and enhances these features view by view.
Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions.
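
As a very rough illustration of the "multiple 2D views" idea, the sketch below enhances each slice of a 3D feature volume with self-attention over its own spatial positions. The actual view-disentangled transformer is more elaborate; every shape and hyper-parameter here is an assumption.

```python
import torch
import torch.nn as nn

class ViewWiseAttention(nn.Module):
    """Rough sketch: treat each slice of a 3D feature volume as a 2D view
    and enhance it with self-attention over its own spatial positions."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats):                  # feats: (B, D, C, H, W)
        b, d, c, h, w = feats.shape
        # flatten each view's spatial grid into a token sequence
        tokens = feats.reshape(b * d, c, h * w).transpose(1, 2)   # (B*D, H*W, C)
        enhanced, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + enhanced)                     # residual + norm
        return tokens.transpose(1, 2).reshape(b, d, c, h, w)
```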
arXiv Detail & Related papers (2022-09-20T11:58:23Z) - Perfusion imaging in deep prostate cancer detection from mp-MRI: can we
take advantage of it? [0.0]
We evaluate strategies to integrate information from perfusion imaging in deep neural architectures.
Perfusion maps from dynamic contrast enhanced MR exams are shown to positively impact segmentation and grading performance of PCa lesions.
arXiv Detail & Related papers (2022-07-06T07:55:46Z) - Contrast-enhanced MRI Synthesis Using 3D High-Resolution ConvNets [7.892005877717236]
Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI).
For brain tumor patients, standard-of-care includes repeated MRI with gadolinium-based contrast for disease monitoring, increasing the risk of gadolinium deposition.
We present a deep learning based approach for contrast-enhanced T1 synthesis on brain tumor patients.
arXiv Detail & Related papers (2021-04-04T11:54:15Z)