Concept Embedding Analysis: A Review
- URL: http://arxiv.org/abs/2203.13909v1
- Date: Fri, 25 Mar 2022 20:57:16 GMT
- Title: Concept Embedding Analysis: A Review
- Authors: Gesina Schwalbe
- Abstract summary: Deep neural networks (DNNs) have found their way into many applications with potential impact on the safety, security, and fairness of human-machine systems.
The research field of concept (embedding) analysis (CA) tackles the problem of associating human-interpretable concepts with the internal representations of such models.
This work establishes a general definition of CA and a taxonomy for CA methods, uniting several ideas from the literature.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have found their way into many applications with potential impact on the safety, security, and fairness of human-machine systems. Such applications require a basic understanding and sufficient trust by the users. This motivated the research field of explainable artificial intelligence (XAI), i.e. finding methods for opening the "black boxes" that DNNs represent. For the computer vision domain in particular, practical assessment of DNNs requires a globally valid association of human-interpretable concepts with the internals of the model. The research field of concept (embedding) analysis (CA) tackles this problem: CA aims to find global, assessable associations of human-interpretable semantic concepts (e.g., eye, bearded) with internal representations of a DNN. This work establishes a general definition of CA and a taxonomy for CA methods, uniting several ideas from the literature. This allows CA approaches to be easily positioned and compared. Guided by the defined notions, the current state-of-the-art research on CA methods and interesting applications is reviewed. More than thirty relevant methods are discussed, compared, and categorized. Finally, for practitioners, a survey of fifteen datasets that have been used for supervised concept analysis is provided. Open challenges and research directions are pointed out at the end.
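Many of the supervised CA methods surveyed boil down to testing whether a human-labelled concept can be read out of a layer's activations, for instance with a linear probe in the spirit of concept-vector approaches such as TCAV or Net2Vec. The following is a minimal, self-contained sketch of that idea, not the review's own method: the activations and concept labels are synthetic placeholders standing in for a real DNN layer and an annotated concept dataset.

```python
# Minimal sketch of supervised concept probing: fit a linear probe on a layer's
# activations to test how well a human-defined concept (e.g. "eye") is linearly
# decodable there. All data below are synthetic stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_units = 1000, 256                     # images x units in the probed layer
activations = rng.normal(size=(n_samples, n_units))  # placeholder layer activations

# Synthetic concept labels: the concept is (noisily) encoded along a random
# direction in activation space, mimicking a linearly embedded concept.
concept_direction = rng.normal(size=n_units)
scores = activations @ concept_direction + 0.5 * rng.normal(size=n_samples)
concept_labels = (scores > 0).astype(int)          # 1 = concept present, 0 = absent

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept_labels, test_size=0.3, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"concept probe test accuracy: {probe.score(X_test, y_test):.3f}")

# The probe's weight vector acts as a "concept vector" for this layer.
concept_vector = probe.coef_[0]
```

High held-out probe accuracy would be taken as evidence that the concept is (approximately linearly) embedded in that layer, and the resulting concept vector can then be compared across layers or models; the review's taxonomy covers many variations of this basic recipe.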
Related papers
- Unveiling Ontological Commitment in Multi-Modal Foundation Models [7.485653059927206]
Deep neural networks (DNNs) automatically learn rich representations of concepts and respective reasoning.
We propose a method that extracts the learned superclass hierarchy from a multimodal DNN for a given set of leaf concepts.
An initial evaluation study shows that meaningful ontological class hierarchies can be extracted from state-of-the-art foundation models.
arXiv Detail & Related papers (2024-09-25T17:24:27Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data; intelligence, at an abstract level, centers on model learning and prediction.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- A survey on Concept-based Approaches For Model Improvement [2.1516043775965565]
Concepts are regarded as the basis of human thinking.
We provide a systematic review and taxonomy of various concept representations and their discovery algorithms in Deep Neural Networks (DNNs).
We also provide details on the concept-based model improvement literature, marking the first comprehensive survey of these methods.
arXiv Detail & Related papers (2024-03-21T17:09:20Z)
- Concept-based Explainable Artificial Intelligence: A Survey [16.580100294489508]
Using raw features to provide explanations has been disputed in several works lately.
A unified categorization and precise field definition are still missing.
This paper fills the gap by offering a thorough review of C-XAI approaches.
arXiv Detail & Related papers (2023-12-20T11:27:21Z)
- Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations [13.60538902487872]
We present a novel post-hoc concept-based XAI framework that conveys not only instance-wise (local) but also class-wise (global) decision-making strategies via prototypes.
We demonstrate the effectiveness of our approach in identifying out-of-distribution samples, spurious model behavior and data quality issues across three datasets.
arXiv Detail & Related papers (2023-11-28T10:53:26Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Analyzing Representations inside Convolutional Neural Networks [8.803054559188048]
We propose a framework to categorize the concepts a network learns based on the way it clusters a set of input examples.
This framework is unsupervised and can work without any labels for input features.
We extensively evaluate the proposed method and demonstrate that it produces human-understandable and coherent concepts (a minimal sketch of this clustering idea is given after this list).
arXiv Detail & Related papers (2020-12-23T07:10:17Z)
- A Survey on Assessing the Generalization Envelope of Deep Neural Networks: Predictive Uncertainty, Out-of-distribution and Adversarial Samples [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art performance on numerous applications.
It is difficult to tell beforehand whether a DNN receiving an input will deliver the correct output, since its decision criteria are usually nontransparent.
This survey connects the three fields within the larger framework of investigating the generalization performance of machine learning methods and in particular DNNs.
arXiv Detail & Related papers (2020-08-21T09:12:52Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
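The unsupervised route mentioned in "Analyzing Representations inside Convolutional Neural Networks" above groups input examples by how a layer represents them and treats each cluster as a candidate concept. Below is a minimal sketch of that idea under simplifying assumptions, not any paper's actual pipeline: the activations are synthetic placeholders, and plain k-means stands in for whatever clustering the individual methods use.

```python
# Minimal sketch of unsupervised concept discovery by clustering activations.
# Real activations would be extracted from a trained CNN layer for a probing
# set of images; here random features serve as illustrative stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

n_images, n_units = 500, 128
layer_activations = rng.normal(size=(n_images, n_units))  # placeholder features

# Group images by how the layer represents them; each cluster is a candidate
# "concept" that a human can then inspect, name, or discard.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(layer_activations)

for cluster_id in range(kmeans.n_clusters):
    members = np.flatnonzero(kmeans.labels_ == cluster_id)
    print(f"candidate concept {cluster_id}: {len(members)} images, "
          f"e.g. image indices {members[:5].tolist()}")
```

In practice, each cluster would be inspected by showing its member images (or their most activating patches) to a human, who either names the emerging concept or rejects the cluster as incoherent.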
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and accepts no responsibility for any consequences of its use.