Locally Testing Model Detections for Semantic Global Concepts
- URL: http://arxiv.org/abs/2405.17523v2
- Date: Wed, 29 May 2024 07:40:40 GMT
- Title: Locally Testing Model Detections for Semantic Global Concepts
- Authors: Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid
- Abstract summary: We propose a framework for linking global concept encodings to the local processing of single network inputs.
Our approach has the advantage of fully covering the model-internal encoding of the semantic concept.
The results show major differences in the local perception and usage of individual global concept encodings.
- Score: 3.112979958793927
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensuring the quality of black-box Deep Neural Networks (DNNs) has become ever more important, especially in safety-critical domains such as automated driving. While global concept encodings generally enable a user to test a model for a specific concept, linking them to the local processing of single network inputs reveals both their strengths and their limitations. Our proposed framework, global-to-local Concept Attribution (glCA), combines local (why a specific prediction arises) and global (how the model works in general) eXplainable Artificial Intelligence (xAI) approaches to test DNNs for a predefined semantic concept locally. It conditions local, post-hoc explanations on predefined semantic concepts encoded as linear directions in the model's latent space. Pixel-exact scoring of the global concept usage helps the tester further understand how the model processes single data points with respect to the selected concept. The approach has the advantage of fully covering the model-internal encoding of the semantic concept and of localizing the relevant concept-related information. The results show major differences in the local perception and usage of individual global concept encodings, and they call for further investigation into obtaining thorough semantic concept encodings.
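A minimal sketch of the general recipe the abstract describes: fit a linear concept direction globally, then condition a local, pixel-level explanation on it. This is not the authors' implementation; the logistic-regression probe, the choice of layer, and the bilinear upsampling are assumptions, and the function names are hypothetical.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression

def fit_concept_direction(acts_pos, acts_neg):
    """Fit a CAV-style linear concept direction in a layer's activation
    space from pooled activations of concept/non-concept examples.
    acts_pos, acts_neg: (N, C) numpy arrays."""
    X = np.concatenate([acts_pos, acts_neg])
    y = np.concatenate([np.ones(len(acts_pos)), np.zeros(len(acts_neg))])
    w = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return w / np.linalg.norm(w)  # unit-norm direction in latent space

def concept_heatmap(feature_map, w, input_hw):
    """Condition a local explanation on the global concept direction:
    project one input's spatial activations onto w and upsample the
    per-location scores to pixel resolution.
    feature_map: (C, H, W) tensor from the chosen layer."""
    w = torch.as_tensor(w, dtype=feature_map.dtype)
    scores = torch.einsum("chw,c->hw", feature_map, w)
    return F.interpolate(scores[None, None], size=input_hw,
                         mode="bilinear", align_corners=False)[0, 0]
```

The key design point is that the same global direction w is reused for every input, so differences between the resulting heatmaps expose exactly the per-input variation in concept usage that the abstract reports.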
Related papers
- TIDE: Training Locally Interpretable Domain Generalization Models Enables Test-time Correction [14.396966854171273]
We consider the problem of single-source domain generalization.
Existing methods typically rely on extensive augmentations to synthetically cover diverse domains during training.
We propose an approach that compels models to leverage local concepts during prediction.
arXiv Detail & Related papers (2024-11-25T08:46:37Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- A Self-explaining Neural Architecture for Generalizable Concept Learning [29.932706137805713]
We show that present SOTA concept learning approaches suffer from two major problems - lack of concept fidelity and limited concept interoperability.
We propose a novel self-explaining architecture for concept learning across domains.
We demonstrate the efficacy of our proposed approach over current SOTA concept learning approaches on four widely used real-world datasets.
arXiv Detail & Related papers (2024-05-01T06:50:18Z)
- WWW: A Unified Framework for Explaining What, Where and Why of Neural Networks by Interpretation of Neuron Concepts [3.2627279988912194]
We propose a novel framework, WWW, that offers the 'what', 'where', and 'why' of the neural network decisions in human-understandable terms.
WWW utilizes adaptive selection for concept discovery, employing adaptive cosine similarity and thresholding techniques.
WWW provides a unified solution for explaining 'what', 'where', and 'why', introducing a method for localized explanations from global interpretations.
arXiv Detail & Related papers (2024-02-29T08:51:51Z)
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z)
- Local Concept Embeddings for Analysis of Concept Distributions in DNN Feature Spaces [1.0923877073891446]
We propose a novel concept analysis framework for deep neural networks (DNNs).
Instead of optimizing a single global concept vector on the complete dataset, it generates a local concept embedding (LoCE) vector for each individual sample.
Despite its context sensitivity, our method's concept segmentation performance is competitive with global baselines (see the first sketch after this list).
arXiv Detail & Related papers (2023-11-24T12:22:00Z)
- Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace [1.6574413179773757]
Concept-Centric Transformers is a simple yet effective configuration of the shared global workspace for interpretability.
We show that our model achieves better classification accuracy than all baselines across all problems.
arXiv Detail & Related papers (2023-05-25T06:37:39Z)
- Concept Activation Regions: A Generalized Framework For Concept-Based Explanations [95.94432031144716]
Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the deep neural network's latent space.
In this work, we propose allowing concept examples to be scattered across different clusters in the DNN's latent space.
This concept activation region (CAR) formalism yields global concept-based explanations and local concept-based feature importance (see the second sketch after this list).
arXiv Detail & Related papers (2022-09-22T17:59:03Z)
- Modeling Temporal Concept Receptive Field Dynamically for Untrimmed Video Analysis [105.06166692486674]
We study the temporal concept receptive field of concept-based event representations.
We introduce temporal dynamic convolution (TDC) to give stronger flexibility to concept-based event analytics.
Input-dependent coefficients adapt the temporal concept receptive field size to each input video (see the third sketch after this list).
arXiv Detail & Related papers (2021-11-23T04:59:48Z)
- Unsupervised Learning of Compositional Energy Concepts [70.11673173291426]
We propose COMET, which discovers and represents concepts as separate energy functions.
COMET represents both global concepts and objects under a unified framework.
arXiv Detail & Related papers (2021-11-04T17:46:12Z)
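First sketch, for the Local Concept Embeddings (LoCE) entry above: a minimal rendering of optimizing a per-sample concept vector so that projecting one sample's activations onto it reconstructs a concept mask. The loss, optimizer settings, and nearest-neighbor mask downsampling are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def fit_loce(feature_map, concept_mask, steps=200, lr=0.1):
    """Optimize a local concept embedding v for ONE sample so that
    projecting its (C, H, W) activations onto v reproduces the binary
    concept_mask given at image resolution."""
    feature_map = feature_map.detach()  # optimize v only, not the model
    C, H, W = feature_map.shape
    target = F.interpolate(concept_mask[None, None].float(),
                           size=(H, W), mode="nearest")[0, 0]
    v = torch.zeros(C, requires_grad=True)
    opt = torch.optim.Adam([v], lr=lr)
    for _ in range(steps):
        logits = torch.einsum("chw,c->hw", feature_map, v)
        loss = F.binary_cross_entropy_with_logits(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return v.detach()  # one concept vector per sample, unlike a global CAV
```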
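Second sketch, for the Concept Activation Regions (CAR) entry above: the core move of replacing a single linear concept direction with a kernel-based concept region, so that concept examples may occupy several disjoint clusters of latent space. The RBF support-vector classifier and pooled activations are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def fit_concept_region(acts_pos, acts_neg):
    """Fit a nonlinear (RBF-kernel) classifier over latent activations;
    its positive decision region acts as the concept activation region.
    acts_pos, acts_neg: (N, C) pooled activations."""
    X = np.concatenate([acts_pos, acts_neg])
    y = np.concatenate([np.ones(len(acts_pos)), np.zeros(len(acts_neg))])
    return SVC(kernel="rbf", probability=True).fit(X, y)

# Local concept score for one sample's activation vector `act` (shape (C,)):
#   p_concept = fit_concept_region(pos, neg).predict_proba(act[None])[0, 1]
```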
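Third sketch, for the temporal dynamic convolution (TDC) entry above: one plausible reading of the mechanism, in which input-dependent coefficients mix parallel temporal convolutions with different kernel sizes so the effective receptive field adapts to each video. The gating design and branch kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class TemporalDynamicConv(nn.Module):
    """Mix parallel 1-D temporal convolutions with input-dependent
    coefficients, giving each input its own effective receptive field."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2)
            for k in kernel_sizes)
        self.gate = nn.Sequential(  # per-input mixing coefficients
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, len(kernel_sizes)), nn.Softmax(dim=-1))

    def forward(self, x):           # x: (B, C, T) concept features over time
        coeff = self.gate(x)        # (B, n_branches), input-dependent
        outs = torch.stack([b(x) for b in self.branches], dim=1)
        return (coeff[..., None, None] * outs).sum(dim=1)  # (B, C, T)
```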
This list is automatically generated from the titles and abstracts of the papers on this site.