A Novel Explainable Artificial Intelligence Model in Image
Classification problem
- URL: http://arxiv.org/abs/2307.04137v1
- Date: Sun, 9 Jul 2023 09:33:05 GMT
- Title: A Novel Explainable Artificial Intelligence Model in Image
Classification problem
- Authors: Quoc Hung Cao, Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Xuan
Phong Nguyen
- Abstract summary: We propose a new method called Segmentation - Class Activation Mapping (SeCAM) that combines the advantages of existing explanation algorithms such as LIME and CAM.
We tested this algorithm with various models, including ResNet50, Inception-v3, and VGG16, on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, artificial intelligence has been applied increasingly
widely across many fields and has a profound and direct impact on human life.
With this comes the need to understand how a model arrives at its predictions.
Since most current high-precision models are black boxes, neither AI scientists
nor end-users deeply understand what happens inside these models. Many
algorithms have therefore been studied to explain AI models, particularly for
image classification in computer vision, such as LIME, CAM, and GradCAM.
However, these algorithms still have limitations, such as LIME's long execution
time and CAM's lack of concreteness and clarity in its explanations. In this
paper, we therefore propose a new method, Segmentation - Class Activation
Mapping (SeCAM), which combines the advantages of these algorithms while
overcoming their disadvantages. We tested the algorithm with various models,
including ResNet50, Inception-v3, and VGG16, on the ImageNet Large Scale Visual
Recognition Challenge (ILSVRC) dataset. The algorithm met all the requirements
for a specific explanation in a remarkably short time, with outstanding results.
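As a rough illustration of the idea described in the abstract, the Python snippet below is a hedged reconstruction of ours, not the authors' released code (the file name, segment count, and top-k value are illustrative assumptions). It combines LIME-style superpixel segmentation with a standard CAM from a pretrained ResNet50, averages the CAM inside each superpixel, and keeps the highest-scoring segments as the explanation:

import numpy as np
import torch
from PIL import Image
from skimage.segmentation import slic
from torchvision import models, transforms

# Hypothetical SeCAM-style sketch: a CAM averaged over LIME-style superpixels.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = Image.open("input.jpg").convert("RGB")   # any ILSVRC-style RGB image
x = preprocess(img).unsqueeze(0)

# Capture the last convolutional feature map with a forward hook.
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(act=o))
with torch.no_grad():
    cls = model(x).argmax(1).item()

# Class Activation Map: weight the feature maps by the FC weights of class `cls`.
act = feats["act"][0]                           # (2048, 7, 7)
w = model.fc.weight[cls].detach()               # (2048,)
cam = torch.einsum("c,chw->hw", w, act).relu().numpy()
cam = np.array(Image.fromarray(cam).resize(img.size, Image.BILINEAR))

# LIME-style superpixels; average the CAM inside each segment.
segments = slic(np.array(img), n_segments=50, compactness=10)
seg_scores = {s: cam[segments == s].mean() for s in np.unique(segments)}

# SeCAM-style explanation: the top-k highest-scoring segments.
top = sorted(seg_scores, key=seg_scores.get, reverse=True)[:5]
mask = np.isin(segments, top)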
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
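The entropy idea above can be made concrete with a toy score: the average per-pixel surprisal of an image under some probability model of real images. The snippet below only illustrates that scoring principle; the Gaussian left-neighbour predictor and the sigma value are stand-in assumptions, not the model ZED actually uses:

import numpy as np
from PIL import Image

def surprisal_bits(path: str, sigma: float = 12.0) -> float:
    """Average per-pixel surprisal (-log2 p) of a grayscale image under a
    naive model that treats each pixel as Gaussian around its left neighbour."""
    g = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    resid = g[:, 1:] - g[:, :-1]                     # prediction error
    log_p = -0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
    return float(-(log_p / np.log(2.0)).mean())

# Images whose surprisal deviates from the range typical of real photos would
# be flagged; the threshold (and direction) must be calibrated on real data.
print(surprisal_bits("input.jpg"))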
- A Sanity Check for AI-generated Image Detection [49.08585395873425]
We propose AIDE (AI-generated Image DEtector with Hybrid Features) to detect AI-generated images.
AIDE achieves +3.5% and +4.6% improvements over state-of-the-art methods.
arXiv Detail & Related papers (2024-06-27T17:59:49Z)
- Development of a Dual-Input Neural Model for Detecting AI-Generated Imagery [0.0]
It is important to develop tools that are able to detect AI-generated images.
This paper proposes a dual-branch neural network architecture that takes both images and their Fourier frequency decomposition as inputs.
Our proposed model achieves an accuracy of 94% on the CIFAKE dataset, which significantly outperforms classic ML methods and CNNs.
arXiv Detail & Related papers (2024-06-19T16:42:04Z)
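A minimal PyTorch sketch of such a dual-branch design follows; the layer sizes and the 32x32 (CIFAKE-like) input are our own illustrative assumptions rather than the architecture from the paper:

import torch
import torch.nn as nn

def small_cnn(in_ch: int) -> nn.Sequential:
    # Tiny illustrative branch; the real model would be considerably deeper.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (N, 64)
    )

class DualInputDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_branch = small_cnn(3)     # branch 1: raw RGB pixels
        self.freq_branch = small_cnn(3)    # branch 2: Fourier frequency decomposition
        self.head = nn.Linear(64 + 64, 2)  # real vs. AI-generated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        freq = torch.log1p(torch.abs(torch.fft.fft2(x, norm="ortho")))
        feats = torch.cat([self.img_branch(x), self.freq_branch(freq)], dim=1)
        return self.head(feats)

logits = DualInputDetector()(torch.randn(4, 3, 32, 32))   # CIFAKE-sized batch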
- Feature CAM: Interpretable AI in Image Classification [2.4409988934338767]
There is a lack of trust in using Artificial Intelligence in critical and high-precision fields such as security, finance, health, and manufacturing.
We introduce a novel technique, Feature CAM, which combines perturbation- and activation-based approaches to create fine-grained, class-discriminative visualizations.
The resulting saliency maps proved to be 3-4 times more human-interpretable than the state of the art in ABM.
arXiv Detail & Related papers (2024-03-08T20:16:00Z)
- Explaining Deep Face Algorithms through Visualization: A Survey [57.60696799018538]
This work undertakes a first-of-its-kind meta-analysis of explainability algorithms in the face domain.
We review existing face explainability works and reveal valuable insights into the structure and hierarchy of face networks.
arXiv Detail & Related papers (2023-09-26T07:16:39Z)
- Foiling Explanations in Deep Neural Networks [0.0]
This paper uncovers a troubling property of explanation methods for image-based DNNs.
We demonstrate how explanations may be arbitrarily manipulated through the use of evolution strategies.
Our novel algorithm is successfully able to manipulate an image in a manner imperceptible to the human eye.
arXiv Detail & Related papers (2022-11-27T15:29:39Z)
- Visual correspondence-based explanations improve AI robustness and human-AI team accuracy [7.969008943697552]
We propose two novel architectures of self-interpretable image classifiers that first explain, and then predict.
Our models consistently improve (by 1 to 4 points) on out-of-distribution (OOD) datasets.
For the first time, we show that it is possible to achieve complementary human-AI team accuracy (i.e., higher than either the AI alone or the human alone) on ImageNet and CUB image classification tasks.
arXiv Detail & Related papers (2022-07-26T10:59:42Z)
- On the Post-hoc Explainability of Deep Echo State Networks for Time Series Forecasting, Image and Video Classification [63.716247731036745]
Echo State Networks have attracted much attention over time, mainly due to the simplicity and computational efficiency of their learning algorithm.
This work addresses this issue by conducting an explainability study of Echo State Networks when applied to learning tasks with time series, image and video data.
Specifically, the study proposes three different techniques capable of eliciting understandable information about the knowledge grasped by these recurrent models.
arXiv Detail & Related papers (2021-02-17T08:56:33Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.