Explainability and Robustness of Deep Visual Classification Models
- URL: http://arxiv.org/abs/2301.01343v1
- Date: Tue, 3 Jan 2023 20:23:43 GMT
- Title: Explainability and Robustness of Deep Visual Classification Models
- Authors: Jindong Gu
- Abstract summary: In the computer vision community, Convolutional Neural Networks (CNNs) have become the standard visual classification model.
As alternatives to CNNs, Capsule Networks (CapsNets) and Vision Transformers (ViTs) have been proposed.
CapsNets are considered to have more inductive bias than CNNs, whereas ViTs are considered to have less inductive bias than CNNs.
- Score: 14.975436239088312
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the computer vision community, Convolutional Neural Networks (CNNs), first
proposed in the 1980s, have become the standard visual classification model.
Recently, as alternatives to CNNs, Capsule Networks (CapsNets) and Vision
Transformers (ViTs) have been proposed. CapsNets, which were inspired by the
information processing of the human brain, are considered to have more
inductive bias than CNNs, whereas ViTs are considered to have less inductive
bias than CNNs. All three classification models have received great attention
since they can serve as backbones for various downstream tasks. However, these
models are far from being perfect. As pointed out by the community, there are
two weaknesses in standard Deep Neural Networks (DNNs). One of the limitations
of DNNs is the lack of explainability. Even though they can achieve or surpass
human expert performance on image classification tasks, DNN-based
decisions are difficult to understand. In many real-world applications,
however, individual decisions need to be explained. The other limitation of
DNNs is adversarial vulnerability. Concretely, small, imperceptible input
perturbations can mislead DNNs. This vulnerability poses challenges to current
visual classification models, and the resulting threats can have unacceptable
consequences. Moreover,
studying model adversarial vulnerability can lead to a better understanding of
the underlying models. Our research aims to address the two limitations of
DNNs. Specifically, we focus on deep visual classification models, especially
the core building blocks of each classification model, e.g., dynamic routing in
CapsNets and the self-attention module in ViTs.
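As a concrete illustration of the ViT building block named above, below is a minimal single-head self-attention sketch over patch-token embeddings. The class name, dimensions, and PyTorch framing are illustrative assumptions, not the implementation analysed in the paper.

```python
# Minimal single-head self-attention sketch, as used in ViT-style encoders.
# Illustrative only: names, dimensions, and framing are assumptions, not the
# exact module studied in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.scale = dim ** -0.5
        # One linear map produces queries, keys, and values in a single pass.
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim), e.g. patch embeddings plus a class token.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)


if __name__ == "__main__":
    tokens = torch.randn(2, 197, 64)        # 196 patches + 1 class token, dim 64
    print(SelfAttention(64)(tokens).shape)  # torch.Size([2, 197, 64])
```

The attention weights computed in the forward pass are also what attention-based explanation methods typically inspect, which is one reason this module is a natural target for explainability analysis.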
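The adversarial vulnerability described in the abstract can likewise be sketched with a single gradient-sign step in the style of FGSM; the function name, epsilon value, and toy model below are assumptions for illustration, not the specific attacks studied in the paper.

```python
# Sketch of an FGSM-style adversarial perturbation: a small, sign-based input
# change bounded by eps that aims to flip a classifier's prediction.
# Illustrative assumptions: toy model, eps = 8/255, pixels in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 eps: float = 8 / 255) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed gradient step that increases the loss, bounded by eps per pixel.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(64, 10))  # toy classifier
    images = torch.rand(4, 1, 8, 8)
    labels = torch.randint(0, 10, (4,))
    adv = fgsm_perturb(model, images, labels)
    print((adv - images).abs().max())  # perturbation stays within eps
```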
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - Are Deep Neural Networks Adequate Behavioural Models of Human Visual
Perception? [8.370048099732573]
Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision.
We argue that it is important to distinguish between statistical tools and computational models.
We dispel a number of myths surrounding DNNs in vision science.
arXiv Detail & Related papers (2023-05-26T15:31:06Z) - Models Developed for Spiking Neural Networks [0.5801044612920815]
Spiking neural networks (SNNs) have been around for a long time, and they have been investigated to understand the dynamics of the brain.
In this work, we reviewed the structures and performances of SNNs on image classification tasks.
The comparisons illustrate that these networks show great capabilities for more complicated problems.
arXiv Detail & Related papers (2022-12-08T16:18:53Z) - A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy,
Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have made rapid developments in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z) - Towards Fully Interpretable Deep Neural Networks: Are We There Yet? [17.88784870849724]
Deep Neural Networks (DNNs) behave as black-boxes hindering user trust in Artificial Intelligence (AI) systems.
This paper provides a review of existing methods to develop DNNs with intrinsic interpretability.
arXiv Detail & Related papers (2021-06-24T16:37:34Z) - HufuNet: Embedding the Left Piece as Watermark and Keeping the Right
Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for
Network Intrusion Detection [6.980076213134383]
Spiking neural networks (SNNs) are interesting due to their strong bio-plausibility and high energy efficiency.
However, their performance falls far behind that of conventional deep neural networks (DNNs).
arXiv Detail & Related papers (2020-10-15T14:46:18Z) - Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)