TraNCE: Transformative Non-linear Concept Explainer for CNNs
- URL: http://arxiv.org/abs/2503.20230v1
- Date: Wed, 26 Mar 2025 04:49:46 GMT
- Title: TraNCE: Transformative Non-linear Concept Explainer for CNNs
- Authors: Ugochukwu Ejike Akpudo, Yongsheng Gao, Jun Zhou, Andrew Lewis
- Abstract summary: Convolutional neural networks (CNNs) have succeeded remarkably in various computer vision tasks. CNNs are not intrinsically explainable, but concept-based explainability methods provide insights into what the models saw. This study presents three original contributions to the CNN explainability literature.
- Score: 18.158983987563623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have succeeded remarkably in various computer vision tasks. However, they are not intrinsically explainable. While the feature-level understanding of CNNs reveals where the models looked, concept-based explainability methods provide insights into what the models saw. However, their assumption of linear reconstructability of image activations fails to capture the intricate relationships within these activations. Their Fidelity-only approach to evaluating global explanations also presents a new concern. For the first time, we address these limitations with the novel Transformative Nonlinear Concept Explainer (TraNCE) for CNNs. Unlike linear reconstruction assumptions made by existing methods, TraNCE captures the intricate relationships within the activations. This study presents three original contributions to the CNN explainability literature: (i) An automatic concept discovery mechanism based on variational autoencoders (VAEs). This transformative concept discovery process enhances the identification of meaningful concepts from image activations. (ii) A visualization module that leverages the Bessel function to create a smooth transition between prototypical image pixels, revealing not only what the CNN saw but also what the CNN avoided, thereby mitigating the challenges of concept duplication as documented in previous works. (iii) A new metric, the Faith score, integrates both Coherence and Fidelity for a comprehensive evaluation of explainer faithfulness and consistency.
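The abstract's three contributions suggest a concrete pipeline. Below is a minimal, hedged sketch in PyTorch of contributions (i) and (iii): a small VAE that discovers concepts nonlinearly from flattened CNN activations, plus an illustrative Faith score. The `ConceptVAE` architecture, the loss weighting `beta`, and the harmonic-mean aggregation in `faith_score` are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): nonlinear concept discovery via a
# VAE over flattened CNN activation vectors, plus an illustrative Faith
# score combining Fidelity and Coherence.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptVAE(nn.Module):
    """Encode activations into a small latent 'concept' space instead of
    assuming activations are linearly reconstructable from concepts."""
    def __init__(self, act_dim: int, n_concepts: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(act_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, n_concepts)
        self.logvar = nn.Linear(256, n_concepts)
        self.decoder = nn.Sequential(nn.Linear(n_concepts, 256), nn.ReLU(),
                                     nn.Linear(256, act_dim))

    def forward(self, a):
        h = self.encoder(a)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(a, recon, mu, logvar, beta: float = 1.0):
    """Reconstruction + KL divergence; beta is an assumed hyperparameter."""
    rec = F.mse_loss(recon, a)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld

def faith_score(fidelity: float, coherence: float) -> float:
    """Illustrative aggregation only; the paper's exact formula may differ.
    A harmonic mean penalizes scoring well on just one of the two axes."""
    return 2 * fidelity * coherence / (fidelity + coherence + 1e-12)
```

Here Fidelity could be measured as the CNN's prediction agreement between original and concept-reconstructed activations, and Coherence as the semantic consistency of each concept's prototypes; both readings are inferred from the abstract rather than taken from the paper.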
Related papers
- LLM-assisted Concept Discovery: Automatically Identifying and Explaining Neuron Functions [15.381209058506078]
Prior works have associated concepts with neurons based on examples of concepts or a pre-defined set of concepts.
We propose to leverage multimodal large language models for automatic and open-ended concept discovery.
We validate each concept by generating examples and counterexamples and evaluating the neuron's response on this new set of images.
arXiv Detail & Related papers (2024-06-12T18:19:37Z)
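The validation step described in the entry above reduces to a separation test on neuron activations. A minimal sketch under stated assumptions: `neuron_act` is a hypothetical callable returning the neuron's scalar activation for an image, and the LLM-driven generation of examples and counterexamples is omitted.

```python
# Hedged sketch of concept validation: does the neuron fire more on
# concept examples than on counterexamples?
import numpy as np

def concept_validation_score(neuron_act, examples, counterexamples) -> float:
    pos = np.array([neuron_act(x) for x in examples])
    neg = np.array([neuron_act(x) for x in counterexamples])
    # AUC-style score: fraction of (example, counterexample) pairs where
    # the example elicits the stronger activation.
    return float((pos[:, None] > neg[None, :]).mean())
```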
- CoLa-DCE -- Concept-guided Latent Diffusion Counterfactual Explanations [2.3083192626377755]
We introduce Concept-guided Latent Diffusion Counterfactual Explanations (CoLa-DCE).
CoLa-DCE generates concept-guided counterfactuals for any classifier with a high degree of control regarding concept selection and spatial conditioning.
We demonstrate the advantages of our approach in minimality and comprehensibility across multiple image classification models and datasets.
arXiv Detail & Related papers (2024-06-03T14:27:46Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Implicit Concept Removal of Diffusion Models [92.55152501707995]
Text-to-image (T2I) diffusion models often inadvertently generate unwanted concepts such as watermarks and unsafe images.
We present Geom-Erasing, a novel concept removal method based on geometric-driven control.
arXiv Detail & Related papers (2023-10-09T17:13:10Z)
- Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability [0.0]
This paper focuses on two stability goals when working with concept representations in computer vision CNNs.
The guiding use-case is a post-hoc explainability framework for object detection CNNs.
We propose a novel metric that considers both concept separation and consistency, and is agnostic to layer and concept representation dimensionality.
arXiv Detail & Related papers (2023-04-28T14:14:00Z)
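The entry above does not spell the metric out, so the following is only an illustrative stand-in, not the paper's formula: it multiplies intra-concept consistency by inter-concept separation using cosine similarity, which keeps the score agnostic to layer and representation dimensionality. It assumes at least two concepts, each re-estimated over several runs.

```python
# Illustrative stand-in for a stability metric (not the paper's formula).
import numpy as np

def stability(concept_vectors: dict) -> float:
    """concept_vectors: concept name -> (n_runs, dim) array of
    re-estimated concept vectors; requires >= 2 concepts."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    names = list(concept_vectors)
    means = {n: concept_vectors[n].mean(axis=0) for n in names}
    # Consistency: how close each run's vector stays to its concept mean.
    consistency = np.mean([cos(v, means[n])
                           for n in names for v in concept_vectors[n]])
    # Separation: one minus the mean similarity between distinct concepts.
    pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
    separation = 1.0 - np.mean([cos(means[a], means[b]) for a, b in pairs])
    return float(consistency * separation)
```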
- Cross-receptive Focused Inference Network for Lightweight Image Super-Resolution [64.25751738088015]
Transformer-based methods have shown impressive performance in single image super-resolution (SISR) tasks.
However, the need for Transformers to incorporate contextual information in order to extract features dynamically has been neglected.
We propose a lightweight Cross-receptive Focused Inference Network (CFIN) that consists of a cascade of CT Blocks mixed with CNN and Transformer.
arXiv Detail & Related papers (2022-07-06T16:32:29Z)
- ECLAD: Extracting Concepts with Local Aggregated Descriptors [6.470466745237234]
We propose a novel method for automatic concept extraction and localization based on representations obtained through pixel-wise aggregations of CNN activation maps.
We introduce a process for the validation of concept-extraction techniques based on synthetic datasets with pixel-wise annotations of their main components.
arXiv Detail & Related papers (2022-06-09T14:25:23Z)
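A minimal sketch of the pixel-wise aggregation idea in the ECLAD entry above, assuming k-means clustering over per-location descriptors (the paper's exact aggregation and clustering choices may differ): every spatial position of an activation map becomes one descriptor, and clustering those descriptors yields concepts that localize in the image.

```python
# Hedged sketch: concepts as clusters of per-pixel activation descriptors.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def extract_concepts(act_maps: np.ndarray, n_concepts: int = 8):
    """act_maps: (N, C, H, W) CNN activations for N images."""
    n, c, h, w = act_maps.shape
    # Pixel-wise aggregation: one C-dimensional descriptor per location.
    descriptors = act_maps.transpose(0, 2, 3, 1).reshape(-1, c)
    km = MiniBatchKMeans(n_clusters=n_concepts, random_state=0).fit(descriptors)
    # Concept masks: each pixel's cluster id, reshaped back per image.
    masks = km.labels_.reshape(n, h, w)
    return km.cluster_centers_, masks
```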
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- Modeling Temporal Concept Receptive Field Dynamically for Untrimmed Video Analysis [105.06166692486674]
We study the temporal concept receptive field of concept-based event representations.
We introduce temporal dynamic convolution (TDC) to give stronger flexibility to concept-based event analytics.
Different coefficients generate appropriate and accurate temporal concept receptive field sizes according to the input videos.
arXiv Detail & Related papers (2021-11-23T04:59:48Z)
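The TDC entry above describes input-dependent coefficients that set the temporal concept receptive field. One hedged way to realize that idea, not necessarily the paper's formulation, is to mix parallel 1-D temporal convolutions of different kernel sizes with a softmax gate computed from the input:

```python
# Hedged sketch of a dynamic temporal receptive field: input-conditioned
# mixing of temporal convolutions with different kernel sizes.
import torch
import torch.nn as nn

class DynamicTemporalConv(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes)
        # Coefficient generator: global temporal pooling -> softmax gate.
        self.gate = nn.Linear(channels, len(kernel_sizes))

    def forward(self, x):  # x: (batch, channels, time)
        coeff = torch.softmax(self.gate(x.mean(dim=2)), dim=1)    # (B, K)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, T)
        # Each video weights the kernel sizes, i.e. chooses its own
        # effective temporal receptive field.
        return (coeff[:, :, None, None] * outs).sum(dim=1)
```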
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
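A sketch of the dual-objective loop described above, with assumed weights and loss forms (the paper's exact objectives may differ): the image is optimized directly, so no generator network is needed and the model is left unmodified.

```python
# Hedged sketch: maximize a layer's activation while staying close to a
# reference image; only the input image is optimized.
import torch

def visualize(model, layer, ref_img, steps: int = 200,
              lr: float = 0.05, w_dist: float = 0.1):
    feats = {}
    hook = layer.register_forward_hook(
        lambda mod, inp, out: feats.__setitem__("a", out))
    img = ref_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(img)
        act_loss = -feats["a"].norm()              # activation objective
        dist_loss = (img - ref_img).pow(2).mean()  # distance objective
        (act_loss + w_dist * dist_loss).backward()
        opt.step()
    hook.remove()
    return img.detach()
```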
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
- Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors [24.581839689833572]
Convolutional neural network (CNN) models for computer vision are powerful but lack explainability in their most basic form.
Recent work on explanations through feature importance of approximate linear models has moved from input-level features to features from mid-layer feature maps in the form of concept activation vectors (CAVs).
In this work, we rethink the ACE algorithm of Ghorbani et al., proposing an alternative invertible concept-based explanation (ICE) framework to overcome its shortcomings.
arXiv Detail & Related papers (2020-06-27T17:57:26Z)
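For contrast with TraNCE's nonlinear VAE above, the linear factorization that ICE-style methods build on can be sketched with scikit-learn's NMF. The reshaping and names below are illustrative assumptions, not ICE's code: post-ReLU activations A are factorized as A ≈ S·P, where the rows of P act as non-negative concept activation vectors and S can be mapped (approximately) back to A, which is what makes the explanation invertible.

```python
# Hedged sketch of linear, NMF-based concept activation vectors.
import numpy as np
from sklearn.decomposition import NMF

def ncav_factorize(act_maps: np.ndarray, n_concepts: int = 10):
    """act_maps: (N, C, H, W) non-negative (post-ReLU) CNN activations."""
    n, c, h, w = act_maps.shape
    A = act_maps.transpose(0, 2, 3, 1).reshape(-1, c)  # (N*H*W, C)
    nmf = NMF(n_components=n_concepts, max_iter=400)
    S = nmf.fit_transform(A)   # per-location concept scores
    P = nmf.components_        # non-negative concept activation vectors
    A_hat = S @ P              # approximate, invertible reconstruction
    return S.reshape(n, h, w, n_concepts), P, A_hat
```

TraNCE's argument is that this linear reconstruction misses intricate relationships within the activations, which the VAE-based discovery sketched under the abstract is designed to capture.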