Interpretable Graph Capsule Networks for Object Recognition
- URL: http://arxiv.org/abs/2012.01674v3
- Date: Sun, 7 Mar 2021 16:50:54 GMT
- Title: Interpretable Graph Capsule Networks for Object Recognition
- Authors: Jindong Gu and Volker Tresp
- Abstract summary: We propose interpretable Graph Capsule Networks (GraCapsNets), where we replace the routing part with a multi-head attention-based Graph Pooling approach.
GraCapsNets achieve better classification performance with fewer parameters and better adversarial robustness when compared to CapsNets.
- Score: 17.62514568986647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Capsule Networks, as alternatives to Convolutional Neural Networks, have been
proposed to recognize objects from images. The current literature demonstrates
many advantages of CapsNets over CNNs. However, how to create explanations for
individual classifications of CapsNets has not been well explored. The widely
used saliency methods are mainly proposed for explaining CNN-based
classifications; they create saliency map explanations by combining activation
values and the corresponding gradients, e.g., Grad-CAM. These saliency methods
require a specific architecture of the underlying classifiers and cannot be
trivially applied to CapsNets due to the iterative routing mechanism therein.
To overcome the lack of interpretability, we can either propose new post-hoc
interpretation methods for CapsNets or modify the model to have built-in
explanations. In this work, we explore the latter. Specifically, we propose
interpretable Graph Capsule Networks (GraCapsNets), where we replace the
routing part with a multi-head attention-based Graph Pooling approach. In the
proposed model, individual classification explanations can be created
effectively and efficiently. Our model also demonstrates some unexpected
benefits, even though it replaces a fundamental part of CapsNets. Compared to
CapsNets, our GraCapsNets achieve better classification performance with fewer
parameters and better adversarial robustness. Besides, GraCapsNets
also keep other advantages of CapsNets, namely, disentangled representations
and affine transformation robustness.
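The routing replacement can be pictured concretely. Below is a minimal PyTorch sketch of multi-head attention-based graph pooling over primary capsules; the single-linear scoring function, module name, and dimensions are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttentionPooling(nn.Module):
    """Pools N primary-capsule nodes into one vector per attention head.

    Each head scores every capsule, softmax-normalizes the scores over
    the nodes, and returns the attention-weighted sum. The scoring
    function (a single linear map) is an illustrative choice.
    """

    def __init__(self, capsule_dim: int, num_heads: int):
        super().__init__()
        # One scoring vector per head.
        self.score = nn.Linear(capsule_dim, num_heads, bias=False)

    def forward(self, capsules: torch.Tensor) -> torch.Tensor:
        # capsules: (batch, num_capsules, capsule_dim)
        scores = self.score(capsules)        # (B, N, H)
        attn = F.softmax(scores, dim=1)      # normalize over the N nodes
        # Attention-weighted sum over capsules, separately per head.
        pooled = torch.einsum('bnh,bnd->bhd', attn, capsules)
        return pooled                        # (B, H, capsule_dim)

# Usage: pool 32 eight-dimensional primary capsules with 4 heads.
pool = MultiHeadAttentionPooling(capsule_dim=8, num_heads=4)
out = pool(torch.randn(2, 32, 8))            # -> (2, 4, 8)
```

Because the attention weights come out of a single forward pass, they can double as per-capsule relevance scores, which is what makes the explanations cheap relative to iterative routing.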
Related papers
- RobCaps: Evaluating the Robustness of Capsule Networks against Affine
Transformations and Adversarial Attacks [11.302789770501303]
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks.
In this paper, we evaluate different factors affecting the robustness of CapsNets compared to traditional Convolutional Neural Networks (CNNs).
arXiv Detail & Related papers (2023-04-08T09:58:35Z)
- MogaNet: Multi-order Gated Aggregation Network [64.16774341908365]
We propose a new family of modern ConvNets, dubbed MogaNet, for discriminative visual representation learning.
MogaNet encapsulates conceptually simple yet effective convolutions and gated aggregation into a compact module.
MogaNet exhibits great scalability, impressive efficiency of parameters, and competitive performance compared to state-of-the-art ViTs and ConvNets on ImageNet.
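The "gated aggregation" idea above can be sketched in a few lines. The following PyTorch block is a conceptual skeleton only: a depthwise-convolution context branch modulated elementwise by a gating branch. The branch layout, kernel sizes, and SiLU activation are assumptions for illustration, not MogaNet's exact multi-order block.

```python
import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Sketch of convolution plus gated aggregation (MogaNet-style idea).

    A spatial context branch (depthwise conv) is modulated elementwise
    by a gating branch; the details here are illustrative assumptions.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)
        self.context = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=1, groups=channels)  # depthwise
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The gate decides, per position and channel, how much of the
        # aggregated spatial context to pass through.
        return self.proj(self.act(self.gate(x)) * self.context(x))
```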
arXiv Detail & Related papers (2022-11-07T04:31:17Z)
- CapsNet for Medical Image Segmentation [8.612958742534673]
Convolutional Neural Networks (CNNs) have been successful in solving tasks in computer vision.
CNNs are sensitive to rotation and affine transformations, and their success relies on large-scale labeled datasets.
CapsNet is a new architecture that has achieved better robustness in representation learning.
arXiv Detail & Related papers (2022-03-16T21:15:07Z)
- Parallel Capsule Networks for Classification of White Blood Cells [1.5749416770494706]
Capsule Networks (CapsNets) are a machine learning architecture proposed to overcome some of the shortcomings of Convolutional Neural Networks (CNNs).
We present a new architecture, parallel CapsNets, which exploits the concept of branching the network to isolate certain capsules.
arXiv Detail & Related papers (2021-08-05T14:30:44Z)
- Capsule Network is Not More Robust than Convolutional Network [21.55939814377377]
We study the special designs in CapsNet that differ from those of a ConvNet commonly used for image classification.
The study reveals that some designs thought critical to CapsNet can actually harm its robustness.
We propose enhanced ConvNets simply by introducing the essential components behind the CapsNet's success.
arXiv Detail & Related papers (2021-03-29T09:47:00Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
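One way to read "dual-objective activation and distance loss" is as input optimization against two terms: push a layer's activations up while keeping them near activations computed from real reference images. The sketch below makes that reading concrete; the loss forms, their weighting, and the model_layer_fn/reference_acts interface are all assumptions, not the paper's exact objective.

```python
import torch

def visualize_layer(model_layer_fn, reference_acts, steps=200, lr=0.05):
    """Optimize an input image under an activation term and a distance
    term. model_layer_fn maps an image tensor to a layer's activations;
    reference_acts are precomputed (detached) activations of real images.
    """
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        acts = model_layer_fn(img)
        activation_loss = -acts.norm()                         # amplify features
        distance_loss = (acts - reference_acts).pow(2).mean()  # stay faithful
        loss = activation_loss + distance_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return img.detach()
```

Consistent with the summary, nothing here needs a generator network or a change to the original model; only the input image is optimized.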
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, which gives the network more representational capacity.
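A minimal sketch of such instance-aware aggregation at one node of the DAG follows; the router (a linear map over pooled predecessor features) and the soft weighting are illustrative assumptions, not DG-Net's exact mechanism.

```python
import torch
import torch.nn as nn

class DynamicNode(nn.Module):
    """One DAG node that weighs its incoming edges per input instance.

    Edge weights are predicted from the inputs themselves, so each
    sample follows its own soft connectivity through the graph.
    """

    def __init__(self, channels: int, num_inputs: int):
        super().__init__()
        self.router = nn.Linear(channels * num_inputs, num_inputs)
        self.block = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, inputs):
        # inputs: list of (B, C, H, W) features from predecessor nodes.
        pooled = torch.cat([x.mean(dim=(2, 3)) for x in inputs], dim=1)
        w = torch.softmax(self.router(pooled), dim=1)  # (B, num_inputs)
        agg = sum(w[:, i, None, None, None] * x for i, x in enumerate(inputs))
        return self.block(agg)
```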
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Explanation-Guided Training for Cross-Domain Few-Shot Classification [96.12873073444091]
The cross-domain few-shot classification task (CD-FSC) combines few-shot classification with the requirement to generalize across domains represented by datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
arXiv Detail & Related papers (2020-07-17T07:28:08Z)
- iCapsNets: Towards Interpretable Capsule Networks for Text Classification [95.31786902390438]
Traditional machine learning methods are easy to interpret but have low accuracies.
We propose interpretable capsule networks (iCapsNets) to bridge this gap.
iCapsNets can be interpreted both locally and globally.
arXiv Detail & Related papers (2020-05-16T04:11:44Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform; under one formulation, it encompasses all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
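That operation can be written down directly: each present edge (i, j) carries its own trainable weight, so node i mixes its neighbors with parameters no other node shares. The sketch below assumes a fixed float adjacency matrix and a single feature hop; the parameterization is illustrative.

```python
import torch
import torch.nn as nn

class EdgeVaryingGraphLayer(nn.Module):
    """One edge-varying graph operation: a trainable weight per edge,
    masked by the adjacency so the operation stays local.
    """

    def __init__(self, adjacency: torch.Tensor):  # (N, N) float tensor
        super().__init__()
        self.register_buffer('mask', (adjacency != 0).float())
        self.phi = nn.Parameter(torch.randn_like(adjacency))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, features); only edges in the graph contribute.
        return (self.phi * self.mask) @ x
```

Tying all edge weights to a single shared scalar recovers an ordinary graph convolution, which is the sense in which the formulation subsumes GCNNs; letting the weights depend on node features recovers attention-style GATs.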
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.