Multi-Semantic Image Recognition Model and Evaluating Index for
explaining the deep learning models
- URL: http://arxiv.org/abs/2109.13531v1
- Date: Tue, 28 Sep 2021 07:18:05 GMT
- Title: Multi-Semantic Image Recognition Model and Evaluating Index for
explaining the deep learning models
- Authors: Qianmengke Zhao, Ye Wang, Qun Liu
- Abstract summary: We first propose a multi-semantic image recognition model, which enables human beings to understand the decision-making process of the neural network.
We then present a new evaluation index, which can quantitatively assess the model's interpretability.
This paper also reports the relevant baseline performance of current state-of-the-art deep learning models.
- Score: 31.387124252490377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep learning models are powerful in various applications,
most of them remain black boxes, lacking verifiability and interpretability:
their decision-making processes cannot be understood by human beings.
Therefore, evaluating deep neural networks together with explanations remains
an urgent task. In this paper, we first propose a multi-semantic image
recognition model, which enables human beings to understand the
decision-making process of the neural network. Then, we present a new
evaluation index, which can quantitatively assess the model's
interpretability. We also comprehensively summarize the semantic information
that affects the image classification results in the judgment process of
neural networks. Finally, this paper also reports the relevant baseline
performance of current state-of-the-art deep learning models.
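The paper does not ship code with this listing, so the following is a minimal
sketch of what a multi-semantic recognition model and a quantitative
interpretability index could look like: a shared backbone with one head for
the object class and one for human-readable semantic attributes, scored by
agreement with annotated semantics. The architecture, head names, and the
index definition are illustrative assumptions, not the authors' published
method.

```python
# Hypothetical sketch only: the paper does not specify this architecture or
# index; everything below is an illustrative assumption.
import torch
import torch.nn as nn
import torchvision.models as models

class MultiSemanticClassifier(nn.Module):
    """Shared backbone with two heads: object class and human-readable
    semantic attributes (e.g., color, texture, object parts)."""
    def __init__(self, num_classes: int, num_semantics: int):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # strip the original classifier
        self.backbone = backbone
        self.class_head = nn.Linear(feat_dim, num_classes)
        self.semantic_head = nn.Linear(feat_dim, num_semantics)

    def forward(self, x):
        feats = self.backbone(x)
        return self.class_head(feats), self.semantic_head(feats)

def interpretability_index(sem_logits, sem_targets, threshold=0.5):
    """One plausible quantitative index (an assumption, not the paper's
    definition): mean agreement between predicted and annotated semantics."""
    preds = (torch.sigmoid(sem_logits) > threshold).float()
    return (preds == sem_targets).float().mean().item()

model = MultiSemanticClassifier(num_classes=10, num_semantics=32)
class_logits, sem_logits = model(torch.randn(4, 3, 224, 224))
sem_targets = torch.randint(0, 2, (4, 32)).float()
print(interpretability_index(sem_logits, sem_targets))
```

Training such a model would plausibly combine cross-entropy on the class head
with binary cross-entropy on the semantic head, so that the semantic
predictions expose which human-understandable cues drove the classification.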
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
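A minimal sketch of the fine-tuning step described above, assuming the
counterfactual images have already been generated: `counterfactual_ds` is a
hypothetical ready-made labeled dataset, and the language-guided generation
itself is not shown.

```python
# Sketch of fine-tuning on original + counterfactual data; the counterfactual
# dataset is assumed to exist already (its generation is not shown here).
import torch
from torch.utils.data import ConcatDataset, DataLoader

def reinforce(model, original_ds, counterfactual_ds, epochs=1, lr=1e-4):
    loader = DataLoader(ConcatDataset([original_ds, counterfactual_ds]),
                        batch_size=32, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)  # low LR: fine-tune, not retrain
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
    return model
```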
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Image classification network enhancement methods based on knowledge injection [8.885876832491917]
This paper proposes a multi-level hierarchical deep learning algorithm.
It is composed of multi-level hierarchical deep neural network architecture and multi-level hierarchical deep learning framework.
The experimental results show that the proposed algorithm can effectively explain the hidden information of the neural network.
arXiv Detail & Related papers (2024-01-09T09:11:41Z)
- Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
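As a loose illustration of routing a class decision through language (not the
paper's exact bottleneck architecture), a CLIP-style zero-shot classifier
scores an image against text descriptions; the checkpoint name is a real
public model, while the image path is a placeholder.

```python
# CLIP-style zero-shot classification: the class decision passes through text
# embeddings. Illustrative stand-in, not the paper's bottleneck model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
image = Image.open("example.jpg")  # placeholder path

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-to-text similarity
print(labels[logits.argmax().item()])
```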
arXiv Detail & Related papers (2023-06-29T00:24:42Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
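For context, a bare-bones version of the activation-maximization procedure
such interpretations rely on: gradient-ascend an input image to maximize one
output unit. The model, target unit, step count, and regularizer below are
arbitrary assumptions; practical feature-visualization pipelines add much
stronger image priors.

```python
# Basic activation maximization: optimize the input, not the weights.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image is optimized

target_class = 7  # arbitrary choice of output unit
x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    activation = model(x)[0, target_class]
    # maximize the unit's activation; small L2 term keeps the image bounded
    loss = -activation + 1e-4 * x.pow(2).sum()
    loss.backward()
    opt.step()
```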
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- On Modifying a Neural Network's Perception [3.42658286826597]
We propose a method which allows one to modify what an artificial neural network is perceiving regarding specific human-defined concepts.
We test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them.
arXiv Detail & Related papers (2023-03-05T12:09:37Z)
- Improving a neural network model by explanation-guided training for glioma classification based on MRI data [0.0]
Interpretability methods have become a popular way to gain insight into the decision-making process of deep learning models.
We propose a method for explanation-guided training that uses a Layer-wise relevance propagation (LRP) technique.
We experimentally verified our method on a convolutional neural network (CNN) model for low-grade and high-grade glioma classification problems.
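A sketch of the explanation-guided loss idea. The paper uses LRP; as a
simpler, runnable stand-in this uses input-gradient saliency and penalizes
relevance that falls outside an expert-provided mask (e.g., the tumor region
of an MRI slice). The penalty weight and mask convention are assumptions.

```python
# Explanation-guided training sketch with input-gradient saliency standing in
# for LRP; masks are 1 inside the expert-annotated relevant region.
import torch
import torch.nn.functional as F

def explanation_guided_loss(model, images, labels, masks, lam=0.1):
    images = images.detach().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)
    # saliency: gradient of the true-class score w.r.t. the input pixels
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(score, images, create_graph=True)
    # penalize attribution mass outside the annotated region
    penalty = (grads.abs() * (1 - masks)).mean()
    return ce + lam * penalty
```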
arXiv Detail & Related papers (2021-07-05T13:27:28Z)
- Human-Understandable Decision Making for Visual Recognition [30.30163407674527]
We propose a new framework to train a deep neural network by incorporating the prior of human perception into the model learning process.
The effectiveness of our proposed model is evaluated on two classical visual recognition tasks.
arXiv Detail & Related papers (2021-03-05T02:07:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.