A Novel Confidence Induced Class Activation Mapping for MRI Brain Tumor
Segmentation
- URL: http://arxiv.org/abs/2306.05476v3
- Date: Mon, 30 Oct 2023 06:45:01 GMT
- Title: A Novel Confidence Induced Class Activation Mapping for MRI Brain Tumor
Segmentation
- Authors: Yu-Jen Chen, Yiyu Shi, Tsung-Yi Ho
- Abstract summary: We propose the confidence-induced CAM (Cfd-CAM) for weakly-supervised semantic segmentation.
Cfd-CAM calculates the weight of each feature map by using the confidence of the target class.
Our experiments on two brain tumor datasets show that Cfd-CAM outperforms existing state-of-the-art methods.
- Score: 19.52081109414247
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic resonance imaging (MRI) is a commonly used technique for brain tumor
segmentation, which is critical for evaluating patients and planning treatment.
To make the labeling process less laborious and dependent on expertise,
weakly-supervised semantic segmentation (WSSS) methods using class activation
mapping (CAM) have been proposed. However, current CAM-based WSSS methods
generate the object localization map from internal neural network information,
such as gradients or trainable parameters, which can lead to suboptimal
solutions. To address this issue, we propose the confidence-induced CAM
(Cfd-CAM), which calculates the weight of each feature map by using the
confidence of the target class. Our experiments on two brain tumor datasets
show that Cfd-CAM outperforms existing state-of-the-art methods under the same
level of supervision. Overall, our proposed Cfd-CAM approach improves the
accuracy of brain tumor segmentation and may provide valuable insights for
developing better WSSS methods for other medical imaging tasks.
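The core idea in the abstract, weighting each feature map by the classifier's confidence in the target class rather than by gradients or trainable parameters, can be illustrated with a minimal NumPy sketch. This is a hypothetical reimplementation in the style of masking-based CAMs, not the authors' code; the function names and the toy classifier are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cfd_cam_sketch(feature_maps, model_fn, image, target_class):
    """Confidence-weighted CAM sketch.

    feature_maps: (K, h, w) activations from the final conv layer.
    model_fn:     callable(image) -> class logits (stand-in classifier).
    Each feature map is normalized, upsampled to the image size, used to
    mask the input, and weighted by the resulting target-class confidence.
    """
    K, h, w = feature_maps.shape
    weights = np.zeros(K)
    for k in range(K):
        act = feature_maps[k]
        rng = act.max() - act.min()
        norm = (act - act.min()) / rng if rng > 0 else np.zeros_like(act)
        # Nearest-neighbour upsampling of the activation map to image size.
        mask = np.kron(norm, np.ones((image.shape[0] // h,
                                      image.shape[1] // w)))
        logits = model_fn(image * mask)
        # The softmax confidence of the target class becomes the weight.
        weights[k] = softmax(logits)[target_class]
    # ReLU over the weighted sum, then normalize to [0, 1].
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam
```

The resulting map can be thresholded to produce a pseudo segmentation mask for weak supervision; in practice the activations would come from a trained classifier via forward hooks rather than being passed in directly.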
Related papers
- Unifying Subsampling Pattern Variations for Compressed Sensing MRI with Neural Operators [72.79532467687427]
Compressed Sensing MRI reconstructs images of the body's internal anatomy from undersampled and compressed measurements.
Deep neural networks have shown great potential for reconstructing high-quality images from highly undersampled measurements.
We propose a unified model that is robust to different subsampling patterns and image resolutions in CS-MRI.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- Lumbar Spine Tumor Segmentation and Localization in T2 MRI Images Using AI [2.9746083684997418]
This study introduces a novel data augmentation technique, aimed at automating spine tumor segmentation and localization through AI approaches.
A Convolutional Neural Network (CNN) architecture is employed for tumor classification. 3D vertebral segmentation and labeling techniques are used to help pinpoint the exact location of the tumors in the lumbar spine.
Results indicate a remarkable performance, with 99% accuracy for tumor segmentation, 98% accuracy for tumor classification, and 99% accuracy for tumor localization achieved with the proposed approach.
arXiv Detail & Related papers (2024-05-07T05:55:50Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- AME-CAM: Attentive Multiple-Exit CAM for Weakly Supervised Segmentation on MRI Brain Tumor [20.70840352243769]
We propose a novel CAM method, Attentive Multiple-Exit CAM (AME-CAM), that extracts activation maps from multiple resolutions to hierarchically aggregate and improve prediction accuracy.
We evaluate our method on the BraTS 2021 dataset and show that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-06-26T08:24:37Z)
- Live image-based neurosurgical guidance and roadmap generation using unsupervised embedding [53.992124594124896]
We present a method for live image-only guidance leveraging a large data set of annotated neurosurgical videos.
A generated roadmap encodes the common anatomical paths taken in surgeries in the training set.
We trained and evaluated the proposed method with a data set of 166 transsphenoidal adenomectomy procedures.
arXiv Detail & Related papers (2023-03-31T12:52:24Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstruct the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- Category Guided Attention Network for Brain Tumor Segmentation in MRI [6.685945448824158]
We propose a novel segmentation network named Category Guided Attention U-Net (CGA U-Net)
In this model, we design a Supervised Attention Module (SAM) based on the attention mechanism, which can capture more accurate and stable long-range dependency in feature maps without introducing much computational cost.
Experimental results on the BraTS 2019 dataset show that the proposed method outperforms the state-of-the-art algorithms in both segmentation performance and computational complexity.
arXiv Detail & Related papers (2022-03-29T09:22:29Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Computational Intelligence Approach to Improve the Classification Accuracy of Brain Neoplasm in MRI Data [8.980876474818153]
This report presents two improvements for brain neoplasm detection in MRI data.
An advanced preprocessing technique is proposed to improve the area of interest in MRI data.
A hybrid technique using CNN for feature extraction followed by Support Vector Machine (SVM) for classification is also proposed.
arXiv Detail & Related papers (2021-01-24T06:45:26Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation which generates high-quality visualizations of classifier decisions even in smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Region of Interest Identification for Brain Tumors in Magnetic Resonance Images [8.75217589103206]
We propose a fast, automated method, with light computational complexity, to find the smallest bounding box around the tumor region.
This region-of-interest can be used as a preprocessing step in training networks for subregion tumor segmentation.
The proposed method is evaluated on the BraTS 2015 dataset, and the average gained DICE score is 0.73, which is an acceptable result for this application.
arXiv Detail & Related papers (2020-02-26T14:10:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.