Concept Embedding Models
- URL: http://arxiv.org/abs/2209.09056v1
- Date: Mon, 19 Sep 2022 14:49:36 GMT
- Title: Concept Embedding Models
- Authors: Mateo Espinosa Zarlenga, Pietro Barbiero, Gabriele Ciravegna, Giuseppe
Marra, Francesco Giannini, Michelangelo Diligenti, Zohreh Shams, Frederic
Precioso, Stefano Melacci, Adrian Weller, Pietro Lio, Mateja Jamnik
- Abstract summary: Concept bottleneck models promote trustworthiness by conditioning classification tasks on an intermediate level of human-like concepts.
Existing concept bottleneck models are unable to find optimal compromises between high task accuracy, robust concept-based explanations, and effective interventions on concepts.
We propose Concept Embedding Models, a novel family of concept bottleneck models which goes beyond the current accuracy-vs-interpretability trade-off by learning interpretable high-dimensional concept representations.
- Score: 27.968589555078328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deploying AI-powered systems requires trustworthy models supporting effective
human interactions, going beyond raw prediction accuracy. Concept bottleneck
models promote trustworthiness by conditioning classification tasks on an
intermediate level of human-like concepts. This enables human interventions
which can correct mispredicted concepts to improve the model's performance.
However, existing concept bottleneck models are unable to find optimal
compromises between high task accuracy, robust concept-based explanations, and
effective interventions on concepts -- particularly in real-world conditions
where complete and accurate concept supervisions are scarce. To address this,
we propose Concept Embedding Models, a novel family of concept bottleneck
models which goes beyond the current accuracy-vs-interpretability trade-off by
learning interpretable high-dimensional concept representations. Our
experiments demonstrate that Concept Embedding Models (1) attain better or
competitive task accuracy w.r.t. standard neural models without concepts, (2)
provide concept representations capturing meaningful semantics including and
beyond their ground truth labels, (3) support test-time concept interventions
whose effect in test accuracy surpasses that in standard concept bottleneck
models, and (4) scale to real-world conditions where complete concept
supervisions are scarce.
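To make the construction in the abstract concrete, below is a minimal PyTorch sketch of a Concept Embedding Model: each concept is represented by a pair of high-dimensional embeddings (one for the concept being active, one for it being inactive), a learned scoring function produces the concept probability, and the downstream task is predicted from the probability-weighted mixture of the two embeddings. Concept interventions simply overwrite a predicted probability with its ground-truth value. The layer sizes, ReLU activations, and the `interventions` argument are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from typing import Dict, Optional


class ConceptEmbeddingModel(nn.Module):
    """Minimal CEM sketch: per-concept positive/negative embeddings,
    a learned scoring function, and a task head over the mixtures."""

    def __init__(self, latent_dim: int, n_concepts: int, emb_dim: int, n_classes: int):
        super().__init__()
        self.n_concepts = n_concepts
        # One (active, inactive) embedding generator per concept.
        self.pos_emb = nn.ModuleList([nn.Linear(latent_dim, emb_dim) for _ in range(n_concepts)])
        self.neg_emb = nn.ModuleList([nn.Linear(latent_dim, emb_dim) for _ in range(n_concepts)])
        # Scoring function s(c_i^+, c_i^-) -> concept probability p_i.
        self.score = nn.Sequential(nn.Linear(2 * emb_dim, 1), nn.Sigmoid())
        # Task label is predicted from the concatenated mixed embeddings.
        self.task_head = nn.Linear(n_concepts * emb_dim, n_classes)

    def forward(self, h: torch.Tensor,
                interventions: Optional[Dict[int, torch.Tensor]] = None):
        # h: (batch, latent_dim) features from any backbone.
        # interventions: {concept index: 0/1 ground-truth tensor of shape (batch,)}
        # used to overwrite predicted concept probabilities at test time.
        concept_probs, mixed = [], []
        for i in range(self.n_concepts):
            c_pos = torch.relu(self.pos_emb[i](h))   # embedding for "concept i active"
            c_neg = torch.relu(self.neg_emb[i](h))   # embedding for "concept i inactive"
            p_i = self.score(torch.cat([c_pos, c_neg], dim=-1))  # (batch, 1)
            if interventions is not None and i in interventions:
                # An intervention swaps the predicted probability for the true label,
                # pushing the mixture towards the corresponding embedding.
                p_i = interventions[i].float().view(-1, 1)
            concept_probs.append(p_i)
            mixed.append(p_i * c_pos + (1.0 - p_i) * c_neg)  # c_i = p_i c_i^+ + (1 - p_i) c_i^-
        y_logits = self.task_head(torch.cat(mixed, dim=-1))
        return torch.cat(concept_probs, dim=-1), y_logits
```

Training would jointly minimize a binary cross-entropy loss on the returned concept probabilities and a cross-entropy loss on the task logits; at test time, passing `interventions={i: true_labels}` for a mispredicted concept i emulates the concept interventions evaluated in point (3) of the abstract.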
Related papers
- On the Fairness, Diversity and Reliability of Text-to-Image Generative Models [49.60774626839712]
Multimodal generative models have sparked critical discussions on their fairness, reliability, and potential for misuse.
We propose an evaluation framework designed to assess model reliability through their responses to perturbations in the embedding space.
Our method lays the groundwork for detecting unreliable, bias-injected models and retrieval of bias provenance.
arXiv Detail & Related papers (2024-11-21T09:46:55Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- ConcEPT: Concept-Enhanced Pre-Training for Language Models [57.778895980999124]
ConcEPT aims to infuse conceptual knowledge into pre-trained language models.
It exploits entity concept prediction, a pre-training objective which predicts the concepts of entities mentioned in the pre-training contexts.
Experimental results show that ConcEPT gains improved conceptual knowledge with concept-enhanced pre-training.
arXiv Detail & Related papers (2024-01-11T05:05:01Z)
- Auxiliary Losses for Learning Generalizable Concept-based Models [5.4066453042367435]
Concept Bottleneck Models (CBMs) have gained popularity since their introduction.
CBMs essentially limit the latent space of a model to human-understandable high-level concepts.
We propose the cooperative Concept Bottleneck Model (coop-CBM) to overcome the performance trade-off.
arXiv Detail & Related papers (2023-11-18T15:50:07Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- Learning to Receive Help: Intervention-Aware Concept Embedding Models [44.1307928713715]
Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by constructing and explaining their predictions using a set of high-level concepts.
Recent work has shown that intervention efficacy can be highly dependent on the order in which concepts are intervened on.
We propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.
arXiv Detail & Related papers (2023-09-29T02:04:24Z)
- Interpretable Neural-Symbolic Concept Reasoning [7.1904050674791185]
Concept-based models aim to address the opacity of deep learning models by learning tasks based on a set of human-understandable concepts.
We propose the Deep Concept Reasoner (DCR), the first interpretable concept-based model that builds upon concept embeddings.
arXiv Detail & Related papers (2023-04-27T09:58:15Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning refers to learning composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to address the difficulties of this setting in a unified framework.
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
- Do Concept Bottleneck Models Learn as Intended? [29.842609351878416]
We find that concept bottleneck models struggle to meet interpretability, predictability, and intervenability goals.
Using post hoc interpretability methods, we demonstrate that concepts do not correspond to anything semantically meaningful in input space.
arXiv Detail & Related papers (2021-05-10T12:00:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.