ParseCaps: An Interpretable Parsing Capsule Network for Medical Image Diagnosis
- URL: http://arxiv.org/abs/2411.01564v1
- Date: Sun, 03 Nov 2024 13:34:31 GMT
- Title: ParseCaps: An Interpretable Parsing Capsule Network for Medical Image Diagnosis
- Authors: Xinyu Geng, Jiaming Wang, Jun Xu
- Abstract summary: This paper introduces a novel capsule network, ParseCaps, which uses sparse axial attention routing and a parse convolutional capsule layer to form a parse-tree-like structure.
Experimental results on the CE-MRI, PH$^2$, and Derm7pt datasets show that ParseCaps not only outperforms other capsule network variants in classification accuracy, redundancy reduction, and robustness, but also provides interpretable explanations.
- Score: 6.273401483558281
- Abstract: Deep learning has excelled in medical image classification, but its clinical application is limited by poor interpretability. Capsule networks, known for encoding hierarchical relationships and spatial features, show potential in addressing this issue. Nevertheless, traditional capsule networks often underperform due to their shallow structures, and deeper variants lack hierarchical architectures, thereby compromising interpretability. This paper introduces a novel capsule network, ParseCaps, which uses sparse axial attention routing and a parse convolutional capsule layer to form a parse-tree-like structure, enhancing both depth and interpretability. Firstly, sparse axial attention routing optimizes connections between child and parent capsules and emphasizes the weight distribution across the instantiation parameters of parent capsules. Secondly, the parse convolutional capsule layer generates capsule predictions aligned with the parse tree. Finally, through a loss design that is effective whether or not concept ground truth is available, ParseCaps advances interpretability by associating each dimension of the global capsule with a comprehensible concept, thereby facilitating clinicians' trust in and understanding of the model's classification results. Experimental results on the CE-MRI, PH$^2$, and Derm7pt datasets show that ParseCaps not only outperforms other capsule network variants in classification accuracy, redundancy reduction, and robustness, but also provides interpretable explanations, regardless of the availability of concept labels.
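The abstract describes sparse axial attention routing only at a high level, so the exact formulation is not reproduced here. As a rough illustration of the general idea, attention-weighted routing from child capsules to parent capsules with an explicit sparsification step, the following PyTorch sketch is a simplified, hypothetical rendering and not the authors' implementation; the class name, tensor shapes, and the top-k masking are all assumptions made for illustration.

```python
# Hypothetical sketch of attention-based child-to-parent capsule routing
# with a crude sparsification step. NOT the ParseCaps implementation;
# names, shapes, and the top-k masking are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAttentionRoutingSketch(nn.Module):
    def __init__(self, num_child, num_parent, child_dim, parent_dim, topk=2):
        super().__init__()
        # One learned vote transform per (child, parent) capsule pair.
        self.vote = nn.Parameter(
            torch.randn(num_child, num_parent, child_dim, parent_dim) * 0.05
        )
        self.topk = topk

    def forward(self, child_poses):
        # child_poses: (batch, num_child, child_dim)
        # Votes from each child capsule to each parent capsule:
        # (batch, num_child, num_parent, parent_dim)
        votes = torch.einsum("bnd,npde->bnpe", child_poses, self.vote)

        # Attention logits: agreement of each vote with the mean vote.
        mean_vote = votes.mean(dim=1, keepdim=True)
        logits = (votes * mean_vote).sum(dim=-1)   # (batch, num_child, num_parent)

        # Crude sparsification: each child routes to at most `topk` parents
        # (a stand-in for the sparse axial attention described in the paper).
        topk_vals, _ = logits.topk(self.topk, dim=-1)
        mask = logits >= topk_vals[..., -1:]
        weights = F.softmax(logits.masked_fill(~mask, float("-inf")), dim=-1)

        # Parent capsules are attention-weighted sums of the votes.
        parents = torch.einsum("bnp,bnpe->bpe", weights, votes)
        return parents, weights


# Usage: 16 child capsules of dim 8 routed to 8 parent capsules of dim 16.
routing = SparseAttentionRoutingSketch(num_child=16, num_parent=8,
                                       child_dim=8, parent_dim=16)
parents, weights = routing(torch.randn(2, 16, 8))
print(parents.shape, weights.shape)  # torch.Size([2, 8, 16]) torch.Size([2, 16, 8])
```

In the paper the sparsity presumably comes from the axial attention design itself rather than a hard top-k mask; the mask above only mimics the effect of concentrating each child capsule's routing weight on a few parent capsules.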
Related papers
- Hierarchical Object-Centric Learning with Capsule Networks [0.0]
Capsule networks (CapsNets) were introduced to address the limitations of convolutional neural networks.
This thesis investigates the intriguing aspects of CapsNets and focuses on three key questions to unlock their full potential.
arXiv Detail & Related papers (2024-05-30T09:10:33Z) - Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models; a minimal sketch of this loss appears after the list below.
arXiv Detail & Related papers (2024-05-12T22:18:25Z) - Why Capsule Neural Networks Do Not Scale: Challenging the Dynamic Parse-Tree Assumption [16.223322939363033]
Capsule neural networks replace simple, scalar-valued neurons with vector-valued capsules.
CapsNet is the first actual implementation of the conceptual idea of capsule neural networks.
No work has been able to scale the CapsNet architecture to more reasonably sized datasets.
arXiv Detail & Related papers (2023-01-04T12:59:51Z) - Learning with Capsules: A Survey [73.31150426300198]
Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations.
Unlike CNNs, capsule networks are designed to explicitly model part-whole hierarchical relationships.
arXiv Detail & Related papers (2022-06-06T15:05:36Z) - HP-Capsule: Unsupervised Face Part Discovery by Hierarchical Parsing Capsule Network [76.92310948325847]
We propose a Hierarchical Parsing Capsule Network (HP-Capsule) for unsupervised face subpart-part discovery.
HP-Capsule extends the application of capsule networks from digits to human faces and takes a step forward in showing how neural networks understand objects without human intervention.
arXiv Detail & Related papers (2022-03-21T01:39:41Z) - ASPCNet: A Deep Adaptive Spatial Pattern Capsule Network for
Hyperspectral Image Classification [47.541691093680406]
This paper proposes an adaptive spatial pattern capsule network (ASPCNet) architecture.
It can rotate the sampling location of convolutional kernels on the basis of an enlarged receptive field.
Experiments on three public datasets demonstrate that ASPCNet yields competitive performance, with higher accuracies than state-of-the-art methods.
arXiv Detail & Related papers (2021-04-25T07:10:55Z) - iCaps: An Interpretable Classifier via Disentangled Capsule Networks [25.052072276266912]
We propose an interpretable Capsule Network, iCaps, for image classification.
iCaps provides a prediction along with clear rationales behind it with no performance degradation.
arXiv Detail & Related papers (2020-08-20T03:44:26Z) - Linguistically Driven Graph Capsule Network for Visual Question Reasoning [153.76012414126643]
We propose a hierarchical compositional reasoning model called the "Linguistically driven Graph Capsule Network".
The compositional process is guided by the linguistic parse tree. Specifically, we bind each capsule in the lowest layer to bridge the linguistic embedding of a single word in the original question with visual evidence.
Experiments on the CLEVR dataset, CLEVR compositional generation test, and FigureQA dataset demonstrate the effectiveness and composition generalization ability of our end-to-end model.
arXiv Detail & Related papers (2020-03-23T03:34:25Z) - Subspace Capsule Network [85.69796543499021]
SubSpace Capsule Network (SCN) exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity.
SCN can be applied to both discriminative and generative models without incurring computational overhead compared to CNN during test time.
arXiv Detail & Related papers (2020-02-07T17:51:56Z) - Examining the Benefits of Capsule Neural Networks [9.658250977094562]
Capsule networks are a newly developed class of neural networks that potentially address some of the deficiencies of traditional convolutional neural networks.
By replacing the standard scalar activations with vectors, capsule networks aim to be the next great development for computer vision applications.
arXiv Detail & Related papers (2020-01-29T17:18:43Z)
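Returning to the "Semantic Loss Functions for Neuro-Symbolic Structured Prediction" entry above: the semantic loss penalizes a model by the negative log-probability that its independently predicted output probabilities satisfy a symbolic constraint. The sketch below applies the standard reduction for the simplest constraint, "exactly one label is true"; it is a generic illustration of the idea rather than code from that paper, and the function name and eps smoothing are assumptions.

```python
# Minimal sketch of a semantic loss for the "exactly one label is true"
# constraint. Illustrative only; not code from the cited paper, and the
# eps smoothing is an assumption to avoid log(0) and division by zero.
import torch


def exactly_one_semantic_loss(probs: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """probs: (batch, n) independent per-label probabilities in [0, 1].

    Semantic loss = -log P(constraint holds), where
    P(exactly one true) = sum_i p_i * prod_{j != i} (1 - p_j).
    """
    complement = 1.0 - probs                                       # (batch, n)
    prod_all = torch.prod(complement + eps, dim=-1, keepdim=True)  # prod_j (1 - p_j)
    # Divide out each label's own complement to get prod_{j != i} (1 - p_j).
    sat_prob = (probs * prod_all / (complement + eps)).sum(dim=-1)
    return -torch.log(sat_prob + eps)


# Usage: probabilities from any classifier head.
probs = torch.sigmoid(torch.randn(4, 5))
loss = exactly_one_semantic_loss(probs).mean()
print(loss.item())
```

Because the loss depends only on which complete assignments satisfy the constraint, two syntactically different formulas with the same satisfying assignments produce the same loss, which is what the entry means by being agnostic to the arrangement of the symbols.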
This list is automatically generated from the titles and abstracts of the papers in this site.