Towards the Characterization of Representations Learned via
Capsule-based Network Architectures
- URL: http://arxiv.org/abs/2305.05349v1
- Date: Tue, 9 May 2023 11:20:11 GMT
- Title: Towards the Characterization of Representations Learned via
Capsule-based Network Architectures
- Authors: Saja AL-Tawalbeh and José Oramas
- Abstract summary: Capsule Networks (CapsNets) have been re-introduced as a more compact and interpretable alternative to standard deep neural networks.
Here, we conduct a systematic and principled study towards assessing the interpretability of these types of networks.
Our analysis on the MNIST, SVHN, PASCAL-part, and CelebA datasets suggests that the representations encoded in CapsNets might not be as disentangled, nor as strictly related to part-whole relationships, as is commonly stated in the literature.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Capsule Networks (CapsNets) have been re-introduced as a more compact and
interpretable alternative to standard deep neural networks. While recent
efforts have proved their compression capabilities, to date, their
interpretability properties have not been fully assessed. Here, we conduct a
systematic and principled study towards assessing the interpretability of these
types of networks. Moreover, we pay special attention to analyzing the level to which part-whole relationships are indeed encoded within the learned representation. Our analysis on the MNIST, SVHN, PASCAL-part, and CelebA datasets suggests that the representations encoded in CapsNets might not be as disentangled, nor as strictly related to part-whole relationships, as is commonly stated in the literature.
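Since the analysis hinges on how part-whole relationships could be encoded at all, a minimal sketch of routing-by-agreement between part and whole capsules, in the spirit of Sabour et al.'s dynamic routing, may help orient the reader; the shapes, the three-iteration default, and the toy data below are illustrative assumptions, not the configuration studied in the paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Capsule non-linearity: keeps a vector's orientation, maps its length into [0, 1).
    norm2 = np.sum(s ** 2, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: votes of N part capsules for M whole capsules, shape (N, M, D).
    N, M, _ = u_hat.shape
    b = np.zeros((N, M))                                   # routing logits
    for _ in range(n_iters):
        e = np.exp(b - b.max(axis=1, keepdims=True))
        c = e / e.sum(axis=1, keepdims=True)               # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)             # weighted vote sum, (M, D)
        v = squash(s)                                      # whole-capsule outputs
        b = b + (u_hat * v[None]).sum(axis=-1)             # reward agreeing votes
    return v, c

# Toy example: 6 part capsules voting for 3 whole capsules in a 4-D pose space.
rng = np.random.default_rng(0)
v, c = dynamic_routing(rng.normal(size=(6, 3, 4)))
print(v.shape, c.shape)  # (3, 4) whole poses, (6, 3) part-to-whole assignments
```

The coupling coefficients c are the part-to-whole assignments whose interpretability the study puts under scrutiny.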
Related papers
- Relational Composition in Neural Networks: A Survey and Call to Action [54.47858085003077]
Many neural nets appear to represent data as linear combinations of "feature vectors".
We argue that this success is incomplete without an understanding of relational composition.
arXiv Detail & Related papers (2024-07-19T20:50:57Z)
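As a toy illustration of the "linear combinations of feature vectors" view that the survey above takes as its starting point, the sketch below builds an activation from a made-up dictionary of feature directions and reads the coefficients back by least squares; the dictionary, dimensions, and coefficients are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dictionary: three 8-D "feature vectors" (say colour, shape, size).
F = rng.normal(size=(3, 8))

# An activation represented as a linear combination of those feature vectors.
coeffs = np.array([0.7, -1.2, 0.4])
activation = coeffs @ F

# Reading the representation back: least-squares projection onto the dictionary.
recovered, *_ = np.linalg.lstsq(F.T, activation, rcond=None)
print(np.allclose(recovered, coeffs))  # True: the mixture is exactly recoverable

# What this view cannot express on its own is how features are *bound* into
# relations (e.g. "red circle next to blue square"), the survey's call to action.
```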
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
In contrast, we observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
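One way to read the last point, sketched below under our own assumptions rather than the paper's exact recipe, is to treat the distance between a classifier's output and the constant it collapses to (here taken to be the training-label marginal) as a confidence signal, and to abstain when that distance is small.

```python
import numpy as np

def predict_or_abstain(probs, train_marginal, threshold=0.1):
    # probs: (N, C) softmax outputs; train_marginal: (C,) training label frequencies.
    # If an output has collapsed towards the constant (marginal) prediction, treat
    # the input as likely OOD and return -1 (abstain) instead of a class index.
    dist = np.abs(probs - train_marginal).sum(axis=1)      # L1 distance per input
    return np.where(dist < threshold, -1, probs.argmax(axis=1))

marginal = np.array([0.5, 0.3, 0.2])        # hypothetical class frequencies
probs = np.array([[0.90, 0.05, 0.05],       # confident in-distribution prediction
                  [0.52, 0.29, 0.19]])      # collapsed towards the marginal
print(predict_or_abstain(probs, marginal))  # [ 0 -1]
```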
- Generalization analysis of an unfolding network for analysis-based Compressed Sensing [27.53377180094267]
Unfolding networks have shown promising results in the Compressed Sensing (CS) field.
In this paper, we perform a generalization analysis of a state-of-the-art ADMM-based unfolding network.
arXiv Detail & Related papers (2023-03-09T21:13:32Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
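The rank claim can be made concrete by tracking the numerical rank of each layer's activation matrix over a batch, as in the sketch below; the random linear-plus-ReLU stack merely stands in for whatever network one wants to probe, and the tolerance is an arbitrary choice.

```python
import numpy as np

def numerical_rank(X, tol=1e-3):
    # Count singular values above tol times the largest one for a
    # (batch, features) activation matrix.
    s = np.linalg.svd(X, compute_uv=False)
    return int((s > tol * s[0]).sum())

rng = np.random.default_rng(2)
X = rng.normal(size=(256, 64))            # a batch of 256 inputs, 64 features
for layer in range(5):
    W = rng.normal(size=(64, 64)) / np.sqrt(64)
    X = np.maximum(X @ W, 0.0)            # linear map followed by ReLU
    print(f"layer {layer}: numerical rank = {numerical_rank(X)}")
```

Plotting this quantity layer by layer for a trained network is one direct way to probe the paper's claim that rank diminishes with depth.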
- DECONET: an Unfolding Network for Analysis-based Compressed Sensing with Generalization Error Bounds [27.53377180094267]
We present a new deep unfolding network for analysis-sparsity-based Compressed Sensing.
The proposed network, coined Decoding Network (DECONET), jointly learns a decoder that reconstructs vectors from their incomplete, noisy measurements.
arXiv Detail & Related papers (2022-05-14T12:50:48Z)
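This entry and the ADMM-DAD entry below both rest on deep unfolding: the iterations of an optimization algorithm are turned into the layers of a network whose per-layer parameters are then learned. As a sketch of the idea, the code below unrolls plain ISTA for sparse recovery with fixed (unlearned) parameters; it is not the papers' ADMM-based architecture, and all shapes and constants are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm: shrink every entry towards zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unfolded_ista(y, A, n_layers=10, lam=0.1):
    # Each "layer" is one ISTA iteration. In a learned unfolding network the
    # per-layer matrices and thresholds would be trained; here they stay fixed,
    # which is why far more layers are needed than a trained network would use.
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 100))            # 40 compressed measurements of a 100-dim signal
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]    # 3-sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = unfolded_ista(y, A, n_layers=300)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # should recover indices 5, 42, 77
```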
- Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks [4.153804257347222]
We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues.
We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-03-07T10:56:13Z)
- ADMM-DAD net: a deep unfolding network for analysis compressed sensing [20.88999913266683]
We propose a new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing.
The proposed network jointly learns a redundant analysis operator for sparsification and reconstructs the signal of interest.
arXiv Detail & Related papers (2021-10-13T18:56:59Z)
- Discovering "Semantics" in Super-Resolution Networks [54.45509260681529]
Super-resolution (SR) is a fundamental and representative task in the low-level vision area.
It is generally thought that the features extracted from the SR network have no specific semantic information.
Can we find any "semantics" in SR networks?
arXiv Detail & Related papers (2021-08-01T09:12:44Z)
- SNoRe: Scalable Unsupervised Learning of Symbolic Node Representations [0.0]
The proposed SNoRe algorithm is capable of learning symbolic, human-understandable representations of individual network nodes.
SNoRe's interpretable features are suitable for direct explanation of individual predictions.
The vectorized implementation of SNoRe scales to large networks, making it suitable for contemporary network learning and analysis tasks.
arXiv Detail & Related papers (2020-09-08T08:13:21Z)
- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relationship between a network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)