Perception Visualization: Seeing Through the Eyes of a DNN
- URL: http://arxiv.org/abs/2204.09920v1
- Date: Thu, 21 Apr 2022 07:18:55 GMT
- Title: Perception Visualization: Seeing Through the Eyes of a DNN
- Authors: Loris Giulivi, Mark James Carman, Giacomo Boracchi
- Abstract summary: We develop a new form of explanation that is radically different in nature from current explanation methods, such as Grad-CAM.
Perception visualization provides a visual representation of what the DNN perceives in the input image by depicting what visual patterns the latent representation corresponds to.
Results of our user study demonstrate that humans can better understand and predict the system's decisions when perception visualizations are available.
- Score: 5.9557391359320375
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Artificial intelligence (AI) systems power the world we live in. Deep neural
networks (DNNs) are able to solve tasks in an ever-expanding landscape of
scenarios, but our eagerness to apply these powerful models leads us to focus
on their performance at the expense of our ability to understand them. Current
research in the field of explainable AI tries to bridge this gap by developing
various perturbation or gradient-based explanation techniques. For images,
these techniques fail to fully capture and convey the semantic information
needed to elucidate why the model makes the predictions it does. In this work,
we develop a new form of explanation that is radically different in nature from
current explanation methods, such as Grad-CAM. Perception visualization
provides a visual representation of what the DNN perceives in the input image
by depicting what visual patterns the latent representation corresponds to.
Visualizations are obtained through a reconstruction model that inverts the
encoded features, such that the parameters and predictions of the original
models are not modified. Results of our user study demonstrate that humans can
better understand and predict the system's decisions when perception
visualizations are available, thus easing the debugging and deployment of deep
models as trusted systems.
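To make the core idea concrete, the following is a minimal PyTorch sketch of how such a perception visualization could be set up: a pretrained classifier is kept frozen, and a separate decoder is trained to invert its latent representation back to image space. All architectural choices, module names, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen, pretrained classifier: its parameters and predictions are never modified.
classifier = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in classifier.parameters():
    p.requires_grad_(False)

# Encoder = the classifier up to its last convolutional stage (the "latent representation").
# For 224x224 inputs this yields features of shape (B, 512, 7, 7).
encoder = nn.Sequential(*list(classifier.children())[:-2])

# Decoder (illustrative architecture): trained separately to invert the encoded
# features back to image space with a pixel-wise reconstruction loss.
decoder = nn.Sequential(
    nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),  # 7 -> 14
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 14 -> 28
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 28 -> 56
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 56 -> 112
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 112 -> 224
)

def train_decoder_step(images, optimizer, loss_fn=nn.MSELoss()):
    """One training step for the decoder (optimizer is built over decoder.parameters()).

    Images are assumed to be scaled to [0, 1]; normalization details are omitted.
    The classifier stays frozen throughout, so its predictions are unchanged.
    """
    with torch.no_grad():
        z = encoder(images)          # latent representation of the frozen model
    recon = decoder(z)
    loss = loss_fn(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def perception_visualization(image):
    """Decode what the frozen model 'perceives': an image-space rendering of its latent code."""
    with torch.no_grad():
        z = encoder(image.unsqueeze(0))
        return decoder(z).squeeze(0)  # compare this against the original input
```

The reconstruction model and training objective in the paper are more elaborate; the sketch only illustrates that the explanation is produced by decoding the frozen model's latent code, rather than by perturbing inputs or back-propagating gradients as in Grad-CAM.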
Related papers
- Visual Analysis of Prediction Uncertainty in Neural Networks for Deep Image Synthesis [3.09988520562118]
It is imperative to comprehend the quality, confidence, robustness, and uncertainty associated with their predictions.
A thorough understanding of these quantities produces actionable insights that help application scientists make informed decisions.
This contribution demonstrates how the prediction uncertainty and sensitivity of DNNs can be estimated efficiently using various methods.
arXiv Detail & Related papers (2024-05-22T20:01:31Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z)
- InDL: A New Dataset and Benchmark for In-Diagram Logic Interpretation based on Visual Illusion [1.7980584146314789]
This paper introduces a novel approach to evaluating deep learning models' capacity for in-diagram logic interpretation.
We establish a unique dataset, InDL, designed to rigorously test and benchmark these models.
We utilize six classic geometric optical illusions to create a comparative framework between human and machine visual perception.
arXiv Detail & Related papers (2023-05-28T13:01:32Z)
- Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z)
- Deep Reinforcement Learning Models Predict Visual Responses in the Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models achieve neural response prediction accuracy in the early visual areas comparable to that of supervised models.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z)
- Towards interpreting computer vision based on transformation invariant optimization [10.820985444099536]
In this work, visualized images that activate the neural network for the target classes are generated by back-propagation (a minimal sketch of plain activation maximization appears after this list).
We show several cases in which this method helps us gain insight into neural networks.
arXiv Detail & Related papers (2021-06-18T08:04:10Z)
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
arXiv Detail & Related papers (2021-01-18T06:38:41Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Unsupervised Discovery of Disentangled Manifolds in GANs [74.24771216154105]
An interpretable generation process is beneficial to various image editing applications.
We propose a framework to discover interpretable directions in the latent space given arbitrary pre-trained generative adversarial networks.
arXiv Detail & Related papers (2020-11-24T02:18:08Z)
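For the entry on transformation-invariant optimization above, the underlying mechanism (synthesizing an input that activates a chosen class by back-propagation) can be sketched in a few lines. This is plain activation maximization, without the transformation-invariance regularization that the paper adds; the model choice, step count, and penalty weight are illustrative assumptions.

```python
import torch
from torchvision import models

# Frozen pretrained classifier whose class neurons we want to visualize.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

def activation_maximization(target_class: int, steps: int = 200, lr: float = 0.05):
    """Optimize an input image by back-propagation so it maximally activates `target_class`."""
    image = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from random noise
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(image)
        # Maximize the target logit; a small L2 penalty keeps the image from diverging.
        loss = -logits[0, target_class] + 1e-4 * image.norm()
        loss.backward()
        optimizer.step()
    return image.detach()

visualization = activation_maximization(target_class=243)  # arbitrary ImageNet class index
```

Starting from noise and ascending the target logit typically yields noisy, high-frequency patterns; the transformation-invariance constraints discussed in that paper are intended to make such visualizations more interpretable.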