MAProtoNet: A Multi-scale Attentive Interpretable Prototypical Part Network for 3D Magnetic Resonance Imaging Brain Tumor Classification
- URL: http://arxiv.org/abs/2404.08917v1
- Date: Sat, 13 Apr 2024 07:30:17 GMT
- Title: MAProtoNet: A Multi-scale Attentive Interpretable Prototypical Part Network for 3D Magnetic Resonance Imaging Brain Tumor Classification
- Authors: Binghua Li, Jie Mao, Zhe Sun, Chao Li, Qibin Zhao, Toshihisa Tanaka
- Abstract summary: We propose a Multi-scale Attentive Prototypical part Network, termed MAProtoNet, to provide more precise attribution maps.
Specifically, we introduce a concise multi-scale module that merges attentive features from quadruplet attention layers and produces attribution maps.
Compared to existing interpretable prototypical part networks in medical imaging, MAProtoNet achieves state-of-the-art localization performance.
- Score: 25.056170817680403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated diagnosis with artificial intelligence has emerged as a promising area in medical imaging, yet the interpretability of the deep neural networks involved remains an urgent concern. Although contemporary works such as XProtoNet and MProtoNet have sought to design interpretable prediction models for this issue, the localization precision of their resulting attribution maps can be further improved. To this end, we propose a Multi-scale Attentive Prototypical part Network, termed MAProtoNet, to provide more precise attribution maps. Specifically, we introduce a concise multi-scale module that merges attentive features from quadruplet attention layers and produces attribution maps. The proposed quadruplet attention layers enhance the existing online class activation mapping loss by capturing interactions between the spatial and channel dimensions, while the multi-scale module fuses fine-grained and coarse-grained information to generate precise maps. We also apply a novel multi-scale mapping loss to supervise the proposed multi-scale module. Compared to existing interpretable prototypical part networks in medical imaging, MAProtoNet achieves state-of-the-art localization performance on the brain tumor segmentation (BraTS) datasets, with an overall improvement of approximately 4% in activation precision score (best score of 85.8%), without using additional annotated segmentation labels. Our code will be released at https://github.com/TUAT-Novice/maprotonet.
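To make the abstract's two key components more concrete, the following is a minimal, hedged sketch in PyTorch of (a) a quadruplet-attention-style layer for 3D feature maps, in the spirit of triplet attention extended with an extra branch for the third spatial axis, and (b) a multi-scale module that fuses attentive features from a fine and a coarse scale into per-prototype attribution maps. The branch layout, channel counts, fusion rule, and all names here (QuadrupletAttention, MultiScaleAttributionHead, n_prototypes) are illustrative assumptions, not the authors' exact design; the repository linked above is authoritative.

    # Hedged sketch, not the official MAProtoNet implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ZPool(nn.Module):
        """Concatenate max- and mean-pooling along the channel axis (as in triplet attention)."""
        def forward(self, x):
            return torch.cat([x.amax(dim=1, keepdim=True), x.mean(dim=1, keepdim=True)], dim=1)

    class AttentionGate3D(nn.Module):
        """One branch: pool over channels, apply a 3D convolution, gate with a sigmoid."""
        def __init__(self, kernel_size=7):
            super().__init__()
            self.pool = ZPool()
            self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

        def forward(self, x):
            return x * torch.sigmoid(self.conv(self.pool(x)))

    class QuadrupletAttention(nn.Module):
        """Four branches: plain spatial gating plus channel interaction with each of D, H, W,
        obtained by permuting that axis into the channel position (assumed layout)."""
        def __init__(self):
            super().__init__()
            self.gates = nn.ModuleList([AttentionGate3D() for _ in range(4)])

        def forward(self, x):  # x: (B, C, D, H, W)
            perms = [None, (0, 2, 1, 3, 4), (0, 3, 2, 1, 4), (0, 4, 2, 3, 1)]
            outs = []
            for gate, p in zip(self.gates, perms):
                y = x if p is None else x.permute(*p)
                y = gate(y)
                outs.append(y if p is None else y.permute(*p))  # each perm is its own inverse
            return sum(outs) / len(outs)

    class MultiScaleAttributionHead(nn.Module):
        """Fuse a fine and a coarse attentive feature map, then emit one attribution
        (occurrence) map per prototype via a 1x1x1 convolution and sigmoid."""
        def __init__(self, fine_ch, coarse_ch, n_prototypes):
            super().__init__()
            self.attn_fine = QuadrupletAttention()
            self.attn_coarse = QuadrupletAttention()
            self.proj = nn.Conv3d(fine_ch + coarse_ch, n_prototypes, kernel_size=1)

        def forward(self, feat_fine, feat_coarse):
            f = self.attn_fine(feat_fine)
            c = self.attn_coarse(feat_coarse)
            c = F.interpolate(c, size=f.shape[2:], mode="trilinear", align_corners=False)
            return torch.sigmoid(self.proj(torch.cat([f, c], dim=1)))  # (B, P, D, H, W)

As a usage sketch, MultiScaleAttributionHead(64, 128, n_prototypes=30)(torch.randn(1, 64, 16, 16, 16), torch.randn(1, 128, 8, 8, 8)) returns a (1, 30, 16, 16, 16) tensor of per-prototype attribution maps, the kind of output that the multi-scale mapping loss described in the abstract would then supervise.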
Related papers
- Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of convolution neural network (CNN) and transformer layers.
Experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - MProtoNet: A Case-Based Interpretable Model for Brain Tumor
Classification with 3D Multi-parametric Magnetic Resonance Imaging [0.6445605125467573]
We propose the first medical prototype network (MProtoNet) to extend ProtoPNet to brain tumor classification with 3D multi-parametric magnetic resonance imaging (mpMRI) data.
MProtoNet achieves statistically significant improvements in interpretability metrics of both correctness and localization coherence.
arXiv Detail & Related papers (2023-04-13T04:39:21Z) - M$^{2}$SNet: Multi-scale in Multi-scale Subtraction Network for Medical
Image Segmentation [73.10707675345253]
We propose a general multi-scale in multi-scale subtraction network (M$^{2}$SNet) to perform diverse segmentation tasks on medical images.
Our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks.
arXiv Detail & Related papers (2023-03-20T06:26:49Z) - DoubleU-NetPlus: A Novel Attention and Context Guided Dual U-Net with
Multi-Scale Residual Feature Fusion Network for Semantic Segmentation of
Medical Images [2.20200533591633]
We present a novel dual U-Net-based architecture named DoubleU-NetPlus.
We exploit multi-contextual features and several attention strategies to increase the network's ability to model discriminative feature representations.
To mitigate the gradient vanishing issue and incorporate high-resolution features with deeper spatial details, the standard convolution operation is replaced with the attention-guided residual convolution operations.
arXiv Detail & Related papers (2022-11-25T16:56:26Z) - UNet#: A UNet-like Redesigning Skip Connections for Medical Image
Segmentation [13.767615201220138]
We propose a novel network structure combining dense skip connections and full-scale skip connections, named UNet-sharp (UNet#) for its shape similar to symbol #.
The proposed UNet# can aggregate feature maps of different scales in the decoder sub-network and capture fine-grained details and coarse-grained semantics from the full scale.
arXiv Detail & Related papers (2022-05-24T03:40:48Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - DT-Net: A novel network based on multi-directional integrated
convolution and threshold convolution [7.427799203626843]
We propose a novel end-to-end semantic segmentation algorithm, DT-Net.
We also use two new convolution strategies to better achieve end-to-end semantic segmentation of medical images.
arXiv Detail & Related papers (2020-09-26T11:12:06Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z) - Boundary-aware Context Neural Network for Medical Image Segmentation [15.585851505721433]
Medical image segmentation can provide reliable basis for further clinical analysis and disease diagnosis.
Most existing CNN-based methods produce unsatisfactory segmentation masks without accurate object boundaries.
In this paper, we formulate a boundary-aware context neural network (BA-Net) for 2D medical image segmentation.
arXiv Detail & Related papers (2020-05-03T02:35:49Z) - Deep Attentive Features for Prostate Segmentation in 3D Transrectal
Ultrasound [59.105304755899034]
This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in transrectal ultrasound (TRUS) images.
Our attention module utilizes the attention mechanism to selectively leverage the multilevel features integrated from different layers.
Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance.
arXiv Detail & Related papers (2019-07-03T05:21:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.