CHAIN: Concept-harmonized Hierarchical Inference Interpretation of Deep
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2002.01660v1
- Date: Wed, 5 Feb 2020 06:45:23 GMT
- Title: CHAIN: Concept-harmonized Hierarchical Inference Interpretation of Deep
Convolutional Neural Networks
- Authors: Dan Wang, Xinrui Cui, and Z. Jane Wang
- Abstract summary: The Concept-harmonized HierArchical INference (CHAIN) method is proposed to interpret the network's decision-making process.
CHAIN presents an interpretation in which a network decision is hierarchically deduced into visual concepts from high to low semantic levels.
In quantitative and qualitative experiments, we demonstrate the effectiveness of CHAIN at the instance and class levels.
- Score: 25.112903533844296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the great success of deep networks, there is an increasing demand for
interpreting their internal mechanisms, especially the decision-making logic of the
network. To tackle this challenge, Concept-harmonized HierArchical INference (CHAIN)
is proposed to interpret the network's decision-making process. For a network decision
to be interpreted, the proposed method presents a CHAIN interpretation in which the
decision is hierarchically deduced into visual concepts from high to low semantic
levels. To achieve this, we
propose three models sequentially, i.e., the concept harmonizing model, the
hierarchical inference model, and the concept-harmonized hierarchical inference
model. Firstly, in the concept harmonizing model, visual concepts from high to
low semantic levels are aligned with network units from deep to shallow layers.
Secondly, in the hierarchical inference model, a concept in a deep layer is
disassembled into units in shallow layers. Thirdly, in the concept-harmonized
hierarchical inference model, a deep-layer concept is inferred from its
shallow-layer concepts. By repeating this inference over successive layer pairs,
the concept-harmonized hierarchical inference is conducted backward from the
highest semantic level to the lowest. In this way, network decision-making is
explained as a form of concept-harmonized hierarchical inference, which is
comparable to human decision-making, and the layer structure of the network for
feature learning can be explained based on the hierarchical visual concepts. In quantitative and
qualitative experiments, we demonstrate the effectiveness of CHAIN at the
instance and class levels.
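
As a rough illustration of the pipeline described in the abstract, the Python sketch below treats concept harmonizing as a given alignment of activation columns to visual concepts and approximates the disassembly of a deep-layer concept as a sparse, non-negative regression onto shallow-layer units. It is a minimal sketch under these assumptions, not the authors' implementation; the function names, shapes, and the Lasso-based fit are hypothetical choices.

```python
# Hedged sketch of CHAIN-style hierarchical inference (not the authors' code).
# Assumptions: pooled unit activations are available per layer as matrices whose
# columns are already concept-aligned (the "concept harmonizing" step), and a
# deep-layer concept is disassembled into shallow-layer units via sparse,
# non-negative regression.
import numpy as np
from sklearn.linear_model import Lasso

def disassemble_concept(deep_concept_act, shallow_acts, alpha=0.05):
    """Regress one deep-layer concept response onto shallow-layer units.

    deep_concept_act: (n_images,) responses of a concept-aligned deep unit.
    shallow_acts:     (n_images, n_shallow_units) pooled shallow activations.
    Returns a sparse, non-negative weight vector over the shallow units.
    """
    model = Lasso(alpha=alpha, positive=True, max_iter=10_000)
    model.fit(shallow_acts, deep_concept_act)
    return model.coef_

def chain_backward(concept_acts_per_layer):
    """Conduct the inference backward, from the deepest layer to the shallowest.

    concept_acts_per_layer: list of (n_images, n_units) arrays ordered from
    shallow to deep. Returns one (deep_units x shallow_units) matrix per
    adjacent layer pair, deepest pair first, explaining each deep-layer
    concept by its shallow-layer concepts.
    """
    decompositions = []
    for shallow, deep in zip(concept_acts_per_layer[:-1], concept_acts_per_layer[1:]):
        weights = np.stack([disassemble_concept(deep[:, j], shallow)
                            for j in range(deep.shape[1])])
        decompositions.append(weights)
    return decompositions[::-1]

# Toy usage with random activations standing in for a trained CNN.
rng = np.random.default_rng(0)
layers = [rng.random((64, 32)), rng.random((64, 16)), rng.random((64, 8))]
for pair in chain_backward(layers):
    print(pair.shape)  # (8, 16) then (16, 32): deep concepts over shallow units
```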
Related papers
- A Self-explaining Neural Architecture for Generalizable Concept Learning [29.932706137805713]
We show that present SOTA concept learning approaches suffer from two major problems - lack of concept fidelity and limited concept interoperability.
We propose a novel self-explaining architecture for concept learning across domains.
We demonstrate the efficacy of our proposed approach over current SOTA concept learning approaches on four widely used real-world datasets.
arXiv Detail & Related papers (2024-05-01T06:50:18Z)
- Visual Concept Connectome (VCC): Open World Concept Discovery and their Interlayer Connections in Deep Models [21.245185285617698]
Visual Concept Connectome (VCC) discovers human interpretable concepts and their interlayer connections in a fully unsupervised manner.
Our approach simultaneously reveals fine-grained concepts at a layer and connection weightings across all layers, and is amenable to global analysis of network structure.
arXiv Detail & Related papers (2024-04-02T18:40:55Z)
- Understanding Distributed Representations of Concepts in Deep Neural Networks without Supervision [25.449397570387802]
We propose an unsupervised method for discovering distributed representations of concepts by selecting a principal subset of neurons.
Our empirical findings demonstrate that instances with similar neuron activation states tend to share coherent concepts.
It can be utilized to identify unlabeled subclasses within data and to detect the causes of misclassifications.
arXiv Detail & Related papers (2023-12-28T07:33:51Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification [19.306487616731765]
Post-hoc analysis can only discover the patterns or rules that naturally exist in models.
We proactively instill knowledge to alter the representation of human-understandable concepts in hidden layers.
Our method improves model interpretability, showing better disentanglement of semantic concepts, without negatively affecting model classification performance.
arXiv Detail & Related papers (2023-07-10T04:54:05Z)
- Interpretable Neural-Symbolic Concept Reasoning [7.1904050674791185]
Concept-based models aim to address the interpretability issue by learning tasks based on a set of human-understandable concepts.
We propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings.
arXiv Detail & Related papers (2023-04-27T09:58:15Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrate that CG outperforms CAV on both toy examples and real-world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures information flowing across layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)