Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
- URL: http://arxiv.org/abs/2404.02233v2
- Date: Wed, 10 Apr 2024 15:22:05 GMT
- Title: Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models
- Authors: Matthew Kowal, Richard P. Wildes, Konstantinos G. Derpanis
- Abstract summary: Visual Concept Connectome (VCC) discovers human-interpretable concepts and their interlayer connections in a fully unsupervised manner.
Our approach simultaneously reveals fine-grained concepts within a layer and connection weightings across all layers, and is amenable to global analysis of network structure.
- Score: 21.245185285617698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding what deep network models capture in their learned representations is a fundamental challenge in computer vision. We present a new methodology for understanding such vision models, the Visual Concept Connectome (VCC), which discovers human-interpretable concepts and their interlayer connections in a fully unsupervised manner. Our approach simultaneously reveals fine-grained concepts within a layer and connection weightings across all layers, and is amenable to global analysis of network structure (e.g., the branching pattern of hierarchical concept assemblies). Previous work yielded ways to extract interpretable concepts from single layers and examine their impact on classification, but did not afford multilayer concept analysis across an entire network architecture. Quantitative and qualitative empirical results show the effectiveness of VCCs in the domain of image classification. We also leverage VCCs for the application of failure mode debugging, to reveal where mistakes arise in deep networks.
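The VCC pipeline itself is not reproduced here, but its two moving parts (per-layer concept discovery and interlayer connection weighting) follow a recipe common to concept-based interpretability work. Below is a minimal sketch of that recipe, assuming k-means as the clustering step and position-wise co-occurrence of cluster assignments as the connection weight; the helper names `discover_concepts` and `interlayer_weights` are illustrative, not the authors' API.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_concepts(acts, n_concepts):
    """Cluster a layer's spatial feature vectors into candidate concepts.

    acts: (N, H, W, C) activations for a batch of N images.
    Returns (n_concepts, C) centroids and a label per spatial position.
    """
    vecs = acts.reshape(-1, acts.shape[-1])                # flatten the grid
    km = KMeans(n_clusters=n_concepts, n_init=10).fit(vecs)
    return km.cluster_centers_, km.labels_

def interlayer_weights(labels_a, labels_b, k_a, k_b):
    """Weight concept-to-concept connections between two layers by how often
    their cluster assignments co-occur at the same position. Assumes both
    layers' activations were resized to a common grid, so the two label
    arrays align element-wise."""
    w = np.zeros((k_a, k_b))
    for la, lb in zip(labels_a, labels_b):
        w[la, lb] += 1
    return w / np.maximum(w.sum(axis=1, keepdims=True), 1)  # row-normalize
```

The actual choice of clustering granularity, feature distance, and weighting scheme is the paper's contribution; this sketch only fixes the data flow the abstract describes: concepts per layer, then a dense concept-to-concept weight matrix between adjacent layers.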
Related papers
- Exploiting Interpretable Capabilities with Concept-Enhanced Diffusion and Prototype Networks [8.391254800873599]
We create concept-enriched models that incorporate concept information into existing architectures.
In particular, we propose Concept-Guided Conditional Diffusion, which can generate visual representations of concepts, and Concept-Guided Prototype Networks, which can create a concept prototype dataset and leverage it to perform interpretable concept prediction.
These results open up new lines of research by exploiting pre-existing information in the quest for rendering machine learning more human-understandable.
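The summary gives no implementation detail for the Concept-Guided Prototype Networks, but prototype-based concept prediction generically reduces to a nearest-prototype lookup in an embedding space. A minimal sketch under that assumption (names and shapes are illustrative):

```python
import torch

def predict_concepts(embeddings, prototypes):
    """Nearest-prototype concept prediction.

    embeddings: (B, D) image embeddings.
    prototypes: (K, D) one learned prototype per concept.
    Returns the index of the closest concept prototype per image.
    """
    dists = torch.cdist(embeddings, prototypes)  # (B, K) pairwise L2 distances
    return dists.argmin(dim=1)
```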
arXiv Detail & Related papers (2024-10-24T13:07:56Z)
- A Self-explaining Neural Architecture for Generalizable Concept Learning [29.932706137805713]
We show that present SOTA concept learning approaches suffer from two major problems: a lack of concept fidelity and limited concept interoperability.
We propose a novel self-explaining architecture for concept learning across domains.
We demonstrate the efficacy of our proposed approach over current SOTA concept learning approaches on four widely used real-world datasets.
arXiv Detail & Related papers (2024-05-01T06:50:18Z)
- Advancing Ante-Hoc Explainable Models through Generative Adversarial Networks [24.45212348373868]
This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks.
Our approach appends an unsupervised explanation generator to the primary classifier network and makes use of adversarial training.
This work presents a significant step towards building inherently interpretable deep vision models with task-aligned concept representations.
arXiv Detail & Related papers (2024-01-09T16:16:16Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness in interpreting the decision-making process.
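Exact Shapley values are intractable (the cost is exponential in the number of players), so practical estimators sample. The sketch below is a generic Monte Carlo permutation estimator over image regions, not the paper's exact procedure; the `model_score` masking scheme is left to the caller as an assumption.

```python
import numpy as np

def shapley_region_weights(model_score, n_regions, n_samples=200, rng=None):
    """Monte Carlo Shapley values for image regions (the "players").

    model_score(mask): scalar class score when only regions with
                       mask[i] = True are visible.
    n_regions: number of regions the image/activation map is split into.
    """
    rng = rng or np.random.default_rng(0)
    phi = np.zeros(n_regions)
    for _ in range(n_samples):
        perm = rng.permutation(n_regions)
        mask = np.zeros(n_regions, dtype=bool)
        prev = model_score(mask)              # empty-coalition baseline
        for i in perm:
            mask[i] = True
            cur = model_score(mask)
            phi[i] += cur - prev              # marginal contribution of i
            prev = cur
    return phi / n_samples                    # average over permutations
```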
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- Learning with Capsules: A Survey [73.31150426300198]
Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations.
Unlike CNNs, capsule networks are designed to explicitly model part-whole hierarchical relationships.
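The part-whole modelling rests on two mechanics: vector-valued capsules squashed to unit-bounded length, and routing-by-agreement between layers. A compact PyTorch rendering of both, in the spirit of Sabour et al.'s dynamic routing (dimensions chosen for illustration only):

```python
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Keep a capsule vector's direction; map its length into [0, 1)."""
    n2 = (s * s).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing by agreement. u_hat: (B, n_in, n_out, D) prediction vectors."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits
    for _ in range(iters):
        c = F.softmax(b, dim=2)                              # coupling coeffs
        v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))     # (B, n_out, D)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
    return v
```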
arXiv Detail & Related papers (2022-06-06T15:05:36Z)
- Unsupervised Learning of Compositional Energy Concepts [70.11673173291426]
We propose COMET, which discovers and represents concepts as separate energy functions.
COMET represents both global concepts and objects under a unified framework.
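The composition rule is what carries the framework: each concept is its own energy function, and a combination of concepts is scored by summing their energies. A toy sketch of just that rule, with small MLPs standing in for the paper's actual energy networks:

```python
import torch
import torch.nn as nn

class ConceptEnergy(nn.Module):
    """One energy function per concept; low energy = concept is present."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)        # (B,) energies

def composed_energy(x, concepts):
    """Score x against a conjunction of concepts by summing energies."""
    return torch.stack([c(x) for c in concepts]).sum(dim=0)
```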
arXiv Detail & Related papers (2021-11-04T17:46:12Z)
- Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
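The matching step in this line of work (network dissection) scores each unit against each concept by the overlap between the unit's thresholded activation map and the concept's segmentation mask. Below is a single-image sketch of that score; the real method derives the threshold from activation statistics over the whole dataset, simplified here to a per-image quantile.

```python
import numpy as np

def unit_concept_iou(act_map, concept_mask, quantile=0.995):
    """Intersection-over-union between a unit and a concept.

    act_map:      (H, W) unit activations upsampled to image resolution.
    concept_mask: (H, W) boolean segmentation mask for the concept.
    """
    thresh = np.quantile(act_map, quantile)   # keep only top activations
    unit_mask = act_map > thresh
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union > 0 else 0.0
```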
arXiv Detail & Related papers (2020-09-10T17:59:10Z)
- Visual Concept Reasoning Networks [93.99840807973546]
A split-transform-merge strategy has been broadly used as an architectural constraint in convolutional neural networks for visual recognition tasks.
We propose to exploit this strategy and combine it with our Visual Concept Reasoning Networks (VCRNet) to enable reasoning between high-level visual concepts.
Our proposed model, VCRNet, consistently improves performance while increasing the number of parameters by less than 1%.
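Split-transform-merge is the ResNeXt-style pattern the paper builds on: split the channels into branches, transform each branch independently, and merge the results. A bare-bones PyTorch block of that shape, not VCRNet itself:

```python
import torch
import torch.nn as nn

class SplitTransformMerge(nn.Module):
    """Split channels into branches, transform each, merge by concatenation.
    `channels` must be divisible by `branches`."""
    def __init__(self, channels, branches=4):
        super().__init__()
        width = channels // branches
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(width, width, 3, padding=1),
                          nn.BatchNorm2d(width), nn.ReLU())
            for _ in range(branches))

    def forward(self, x):
        chunks = torch.chunk(x, len(self.branches), dim=1)    # split
        outs = [b(c) for b, c in zip(self.branches, chunks)]  # transform
        return torch.cat(outs, dim=1) + x                     # merge + residual
```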
arXiv Detail & Related papers (2020-08-26T20:02:40Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
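COMET's twist on prototypical networks is to build one prototype per human-interpretable concept dimension and aggregate concept-wise distances into the class score. A sketch of that aggregation, assuming binary masks over the feature vector stand in for the concept dimensions:

```python
import torch

def comet_logits(query, support, support_y, concept_masks, n_classes):
    """Prototype-per-concept scoring in the spirit of COMET.

    query:         (Q, D) query embeddings.
    support:       (S, D) support embeddings.
    support_y:     (S,)   support labels in [0, n_classes).
    concept_masks: (K, D) 0/1 masks selecting each concept's dimensions.
    """
    logits = torch.zeros(query.shape[0], n_classes)
    for c in range(n_classes):
        cls = support[support_y == c]                  # class-c support set
        for m in concept_masks:
            proto = (cls * m).mean(dim=0)              # concept prototype
            d = ((query * m - proto) ** 2).sum(dim=1)  # concept-wise distance
            logits[:, c] -= d                          # closer = higher score
    return logits
```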
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- CHAIN: Concept-harmonized Hierarchical Inference Interpretation of Deep Convolutional Neural Networks [25.112903533844296]
Concept-harmonized HierArchical INference (CHAIN) is proposed to interpret the network's decision-making process.
For a network decision being interpreted, the method presents a CHAIN interpretation in which the decision is hierarchically deduced from visual concepts at successively lower semantic levels.
In quantitative and qualitative experiments, we demonstrate the effectiveness of CHAIN at the instance and class levels.
arXiv Detail & Related papers (2020-02-05T06:45:23Z)