Feature CAM: Interpretable AI in Image Classification
- URL: http://arxiv.org/abs/2403.05658v1
- Date: Fri, 8 Mar 2024 20:16:00 GMT
- Title: Feature CAM: Interpretable AI in Image Classification
- Authors: Frincy Clement, Ji Yang and Irene Cheng
- Abstract summary: There is a lack of trust in using Artificial Intelligence in critical, high-precision fields such as security, finance, health, and manufacturing.
We introduce a novel technique, Feature CAM, which combines perturbation and activation approaches to create fine-grained, class-discriminative visualizations.
The resulting saliency maps proved to be 3-4 times more human-interpretable than the state-of-the-art in ABM.
- Score: 2.4409988934338767
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep Neural Networks have often been called black boxes because of
their complex, deep architectures and the opacity of their inner layers. This
has led to a lack of trust in using Artificial Intelligence in critical,
high-precision fields such as security, finance, health, and manufacturing.
Considerable work has focused on providing interpretable models that deliver
meaningful insights into the behavior of neural networks. In our research, we
compare the state-of-the-art Activation-based Methods (ABM) for interpreting
the predictions of CNN models, specifically in the application of image
classification. We then extend the comparison to eight CNN-based architectures
to examine the differences in visualization and thus interpretability. We
introduce a novel technique, Feature CAM, which combines perturbation and
activation approaches to create fine-grained, class-discriminative
visualizations. The resulting saliency maps from our experiments proved to be
3-4 times more human-interpretable than the state-of-the-art in ABM, while
preserving machine interpretability, measured as the average confidence score
in classification.
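As a hedged illustration of the activation-map side of such techniques, the sketch below combines a conv layer's channel activation maps into one normalized saliency map via a weighted sum. In Feature CAM the perturbation step would supply the per-channel weights; all names, shapes, and data here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cam_saliency(activations: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted sum of channel activation maps -> normalized saliency map.

    activations: (K, H, W) feature maps from a chosen conv layer
    weights:     (K,) per-channel importance scores (e.g. from gradients,
                 or from confidence drops under channel perturbation)
    """
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)          # keep only positive class evidence
    if cam.max() > 0:
        cam /= cam.max()                # scale to [0, 1] for overlay
    return cam

# Illustrative call with random stand-in data
rng = np.random.default_rng(0)
saliency = cam_saliency(rng.random((8, 7, 7)), rng.random(8))
```

The resulting map is typically upsampled to the input resolution and overlaid as a heatmap on the original image.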
Related papers
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Networks (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representation can be considered as an effective alternative to traditional CNN and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z) - Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification [19.306487616731765]
Post-hoc analysis can only discover the patterns or rules that naturally exist in models.
We proactively instill knowledge to alter the representation of human-understandable concepts in hidden layers.
Our method improves model interpretability, showing better disentanglement of semantic concepts, without negatively affecting model classification performance.
arXiv Detail & Related papers (2023-07-10T04:54:05Z) - Revealing Similar Semantics Inside CNNs: An Interpretable Concept-based Comparison of Feature Spaces [0.0]
Safety-critical applications require transparency in artificial intelligence components.
Convolutional neural networks (CNNs), widely used for perception tasks, lack inherent interpretability.
We propose two methods for estimating the layer-wise similarity between semantic information inside CNN latent spaces.
arXiv Detail & Related papers (2023-04-30T13:53:39Z) - Interpretable part-whole hierarchies and conceptual-semantic relationships in neural networks [4.153804257347222]
We present Agglomerator, a framework capable of providing a representation of part-whole hierarchies from visual cues.
We evaluate our method on common datasets, such as SmallNORB, MNIST, FashionMNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2022-03-07T10:56:13Z) - PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability [1.0742675209112622]
We present a new statistical method for ranking the hidden neurons in any convolutional layer of a network.
We show a real-world application of our method to air pollution prediction with street-level images.
arXiv Detail & Related papers (2021-12-31T17:54:57Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
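The dual-objective idea described above can be sketched in miniature: gradient ascent on an activation term while a distance penalty keeps the synthesized input close to a reference. The toy below substitutes a fixed linear map for a CNN layer so the gradients are analytic; every name and number is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))   # stand-in for a conv layer's weights
x_ref = rng.standard_normal(64)     # reference image, flattened
x = x_ref.copy()

lam, lr = 0.1, 0.01                 # distance weight, step size
for _ in range(200):
    grad_act = W.sum(axis=0)        # d/dx of sum(W @ x): the activation term
    grad_dist = 2.0 * (x - x_ref)   # d/dx of ||x - x_ref||^2: the distance term
    x += lr * (grad_act - lam * grad_dist)

# x now activates the "layer" more strongly than x_ref while staying near it
```

In a real setting the two gradients would come from autodiff through the trained network, with the distance term preventing the optimized image from drifting into adversarial noise.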
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Teaching CNNs to mimic Human Visual Cognitive Process & regularise Texture-Shape bias [18.003188982585737]
Recent experiments in computer vision demonstrate texture bias as the primary reason for the strong results of models employing Convolutional Neural Networks (CNNs).
It is believed that the cost function forces the CNN to take a greedy approach and develop a proclivity for local information like texture to increase accuracy, thus failing to explore any global statistics.
We propose CognitiveCNN, a new intuitive architecture inspired by feature integration theory in psychology, which utilises human-interpretable features such as shape, texture, and edges to reconstruct and classify the image.
arXiv Detail & Related papers (2020-06-25T22:32:54Z) - Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide the voxel-wise 3D CNN interpretation comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.