Prototype Generation: Robust Feature Visualisation for Data Independent
Interpretability
- URL: http://arxiv.org/abs/2309.17144v1
- Date: Fri, 29 Sep 2023 11:16:06 GMT
- Title: Prototype Generation: Robust Feature Visualisation for Data Independent
Interpretability
- Authors: Arush Tagade, Jessica Rumbelow
- Abstract summary: Prototype Generation is a stricter and more robust form of feature visualisation for model-agnostic, data-independent interpretability of image classification models.
We demonstrate its ability to generate inputs that result in natural activation paths, countering previous claims that feature visualisation algorithms are untrustworthy due to the unnatural internal activations.
- Score: 1.223779595809275
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We introduce Prototype Generation, a stricter and more robust form of feature
visualisation for model-agnostic, data-independent interpretability of image
classification models. We demonstrate its ability to generate inputs that
result in natural activation paths, countering previous claims that feature
visualisation algorithms are untrustworthy due to the unnatural internal
activations. We substantiate these claims by quantitatively measuring
similarity between the internal activations of our generated prototypes and
natural images. We also demonstrate how the interpretation of generated
prototypes yields important insights, highlighting spurious correlations and
biases learned by models which quantitative methods over test-sets cannot
identify.
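The core mechanism behind feature visualisation (and hence prototype generation) is activation maximisation: starting from noise, ascend the gradient of a target class score with respect to the input itself. A minimal sketch is below, assuming a toy linear classifier as a stand-in for a real image model; the names (`W`, `target`, the learning rate, and the clipping range) are illustrative, not the authors' exact procedure.

```python
import numpy as np

# Toy stand-in "model": a single linear layer mapping inputs to class logits.
rng = np.random.default_rng(0)
n_pixels, n_classes = 64, 10
W = rng.normal(size=(n_classes, n_pixels))

def logits(x):
    return W @ x

target = 3                            # class whose prototype we optimise
x = rng.normal(size=n_pixels) * 0.01  # start from near-zero noise

# Gradient ascent on the target logit w.r.t. the INPUT, not the weights.
# For a linear model, d(logit_target)/dx is simply the row W[target].
for _ in range(200):
    grad = W[target]
    x += 0.1 * grad
    x = np.clip(x, -1.0, 1.0)         # keep the input in a valid range

scores = logits(x)
```

After optimisation, `x` is an input the model most strongly associates with the target class; for a real network, the gradient would come from backpropagation rather than a closed form, and regularisers are typically added to keep the result natural-looking.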
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- ProtoP-OD: Explainable Object Detection with Prototypical Parts [0.0]
This paper introduces an extension to detection transformers that constructs prototypical local features and uses them in object detection.
The proposed extension consists of a bottleneck module, the prototype neck, that computes a discretized representation of prototype activations.
arXiv Detail & Related papers (2024-02-29T13:25:15Z)
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Anomaly Score: Evaluating Generative Models and Individual Generated Images based on Complexity and Vulnerability [21.355484227864466]
We investigate the relationship between the representation space and input space around generated images.
We introduce a new metric for evaluating image-generative models called the anomaly score (AS).
arXiv Detail & Related papers (2023-12-17T07:33:06Z)
- Detecting Spurious Correlations via Robust Visual Concepts in Real and AI-Generated Image Classification [12.992095539058022]
We introduce a general-purpose method that efficiently detects potential spurious correlations.
The proposed method provides intuitive explanations while eliminating the need for pixel-level annotations.
Our method is also suitable for detecting spurious correlations originating from generative models, which may propagate to downstream applications.
arXiv Detail & Related papers (2023-11-03T01:12:35Z)
- Provable Robustness for Streaming Models with a Sliding Window [51.85182389861261]
In deep learning applications such as online content recommendation and stock market analysis, models use historical data to make predictions.
We derive robustness certificates for models that use a fixed-size sliding window over the input stream.
Our guarantees hold for the average model performance across the entire stream and are independent of stream size, making them suitable for large data streams.
arXiv Detail & Related papers (2023-03-28T21:02:35Z)
- MAUVE Scores for Generative Models: Theory and Practice [95.86006777961182]
We present MAUVE, a family of comparison measures between pairs of distributions such as those encountered in the generative modeling of text or images.
We find that MAUVE can quantify the gaps between the distributions of human-written text and those of modern neural language models.
We demonstrate in the vision domain that MAUVE can identify known properties of generated images on par with or better than existing metrics.
arXiv Detail & Related papers (2022-12-30T07:37:40Z)
- Attribute Graphs Underlying Molecular Generative Models: Path to Learning with Limited Data [42.517927809224275]
We provide an algorithm that relies on perturbation experiments on latent codes of a pre-trained generative autoencoder to uncover an attribute graph.
We show that one can fit an effective graphical model that models a structural equation model between latent codes.
Using a pre-trained generative autoencoder trained on a large dataset of small molecules, we demonstrate that the graphical model can be used to predict a specific property.
arXiv Detail & Related papers (2022-07-14T19:20:30Z)
- Towards Creativity Characterization of Generative Models via Group-based Subset Scanning [64.6217849133164]
We propose group-based subset scanning to identify, quantify, and characterize creative processes.
We find that creative samples generate larger subsets of anomalies than normal or non-creative samples across datasets.
arXiv Detail & Related papers (2022-03-01T15:07:14Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.