Towards glass-box CNNs
- URL: http://arxiv.org/abs/2101.10443v1
- Date: Mon, 11 Jan 2021 15:00:35 GMT
- Title: Towards glass-box CNNs
- Authors: Piduguralla Manaswini, Jignesh S. Bhatt
- Abstract summary: Convolutional neural networks (CNNs) are brain-inspired architectures popular for their ability to train and relearn visually complex tasks.
We observe that CNN constructs powerful internal representations that help achieve state-of-the-art performance.
In the future, we would like to construct a glass-box CNN for multiclass visually complex tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) are brain-inspired architectures
popular for their ability to train and relearn visually complex tasks. They are
incremental and scalable; however, CNNs are mostly treated as black boxes and
involve multiple trial-and-error runs. We observe that CNNs construct powerful
internal representations that help achieve state-of-the-art performance. Here
we propose a three-layer glass-box (analytical) CNN for two-class image
classification problems. The first is a representation layer that encompasses
both the class information (group invariant) and symmetric transformations
(group equivariant) of input images. It is then passed through a
dimension-reduction layer (PCA). Finally, the compact yet complete
representation is provided to a classifier. Analytical machine-learning
classifiers and multilayer perceptrons are used to assess sensitivity. The
proposed glass-box CNN is compared with the equivariance of AlexNet's (CNN)
internal representation for better understanding and dissemination of results.
In the future, we would like to construct a glass-box CNN for multiclass
visually complex tasks.
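The three-layer pipeline described in the abstract (group-structured representation, then PCA, then an analytical classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the 90-degree rotation group as a stand-in for the symmetric transformations, toy Gaussian "images" as hypothetical data, and scikit-learn's PCA plus a linear SVM as the dimension-reduction layer and analytical classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

def rotation_representation(images):
    """Representation layer (sketch): stack features from the four
    90-degree rotations of each image. Concatenating the rotated copies
    preserves group-equivariant structure; averaging over them instead
    would yield a group-invariant representation."""
    feats = []
    for img in images:
        rots = [np.rot90(img, k) for k in range(4)]
        feats.append(np.concatenate([r.ravel() for r in rots]))
    return np.array(feats)

# Toy two-class data: 8x8 "images" drawn from two shifted Gaussians.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 8, 8))
X1 = rng.normal(1.0, 1.0, size=(50, 8, 8))
X = rotation_representation(np.concatenate([X0, X1]))
y = np.array([0] * 50 + [1] * 50)

# Dimension-reduction layer (PCA) followed by an analytical classifier.
model = Pipeline([("pca", PCA(n_components=10)),
                  ("svm", SVC(kernel="linear"))])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

Any analytical classifier (or a multilayer perceptron, as in the sensitivity assessment the abstract mentions) can be swapped in for the SVM; the point of the sketch is that every stage of the pipeline is inspectable rather than a black box.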
Related papers
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z)
- PICNN: A Pathway towards Interpretable Convolutional Neural Networks [12.31424771480963]
We introduce a novel pathway to alleviate the entanglement between filters and image classes.
We use the Bernoulli sampling to generate the filter-cluster assignment matrix from a learnable filter-class correspondence matrix.
We evaluate the effectiveness of our method on ten widely used network architectures.
arXiv Detail & Related papers (2023-12-19T11:36:03Z)
- A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are one of the most successful computer vision systems to solve object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z)
- Deeply Explain CNN via Hierarchical Decomposition [75.01251659472584]
In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction.
This paper introduces a hierarchical decomposition framework to explain CNN's decision-making process in a top-down manner.
arXiv Detail & Related papers (2022-01-23T07:56:04Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Contextually Guided Convolutional Neural Networks for Learning Most Transferable Representations [1.160208922584163]
We propose an efficient algorithm for developing broad-purpose representations transferable to new tasks without additional training.
A contextually guided CNN (CG-CNN) is trained on groups of neighboring image patches picked at random image locations in the dataset.
In our application to natural images, we find that CG-CNN features show the same, if not higher, transfer utility and classification accuracy as comparable transferable features in the first CNN layer.
arXiv Detail & Related papers (2021-03-02T08:41:12Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Decoding CNN based Object Classifier Using Visualization [6.666597301197889]
We visualize what types of features are extracted in the different convolution layers of a CNN.
Visualizing heat maps of activations helps us understand how a CNN classifies and localizes different objects in an image.
arXiv Detail & Related papers (2020-07-15T05:01:27Z)
- A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of landmark general CNN classifiers, which presented top-notch results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether the fine-grained classifiers are able to improve upon the general-CNN baselines.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
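The PICNN entry above describes Bernoulli sampling of a filter-cluster assignment matrix from a learnable filter-class correspondence matrix. A minimal sketch of that sampling step, under assumed shapes (6 hypothetical filters, 3 classes) and with sigmoid-squashed random logits standing in for the learned correspondences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learnable filter-class correspondence matrix:
# rows = 6 filters, columns = 3 classes. A sigmoid maps the learned
# logits to probabilities in (0, 1).
logits = rng.normal(size=(6, 3))
probs = 1.0 / (1.0 + np.exp(-logits))

# Bernoulli sampling turns the soft correspondences into a hard
# (binary) filter-cluster assignment matrix: entry (i, j) = 1 means
# filter i is assigned to the cluster for class j in this sample.
assignment = rng.binomial(1, probs)
print(assignment)
```

In the actual method the sampling is repeated during training so that gradients can still flow to the underlying correspondence matrix; this sketch only shows the forward sampling step.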
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.