Learning to Receive Help: Intervention-Aware Concept Embedding Models
- URL: http://arxiv.org/abs/2309.16928v2
- Date: Wed, 25 Oct 2023 15:38:19 GMT
- Title: Learning to Receive Help: Intervention-Aware Concept Embedding Models
- Authors: Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy
Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik
- Abstract summary: Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by constructing and explaining their predictions using a set of high-level concepts.
Recent work has shown that intervention efficacy can be highly dependent on the order in which concepts are intervened on.
We propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.
- Score: 46.7616863339095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures
by constructing and explaining their predictions using a set of high-level
concepts. A special property of these models is that they permit concept
interventions, wherein users can correct mispredicted concepts and thus improve
the model's performance. Recent work, however, has shown that intervention
efficacy can be highly dependent on the order in which concepts are intervened
on and on the model's architecture and training hyperparameters. We argue that
this is rooted in a CBM's lack of train-time incentives for the model to be
appropriately receptive to concept interventions. To address this, we propose
Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based
architecture and training paradigm that improves a model's receptiveness to
test-time interventions. Our model learns a concept intervention policy in an
end-to-end fashion from which it can sample meaningful intervention
trajectories at train-time. This conditions IntCEMs to effectively select and
receive concept interventions when deployed at test-time. Our experiments show
that IntCEMs significantly outperform state-of-the-art concept-interpretable
models when provided with test-time concept interventions, demonstrating the
effectiveness of our approach.
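As a rough illustration of the mechanism the abstract describes, the sketch below wires a toy concept-bottleneck model to a small intervention-policy head that is rolled out during training. Everything here is an illustrative assumption rather than the authors' implementation: the names (`ToyIntCEM`, `intervene`, `train_step`), the linear layers, the Gumbel-softmax relaxation used to keep sampled interventions differentiable, and the fixed rollout length are not taken from the paper.

```python
# Minimal, illustrative sketch (PyTorch) of the ideas above -- NOT the authors' code.
# A toy concept bottleneck predicts concepts from inputs and a label from concepts;
# a small policy head scores which concept to intervene on next, and interventions
# are sampled during training so the model is rewarded for using them well.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyIntCEM(nn.Module):
    """Hypothetical IntCEM-style model: x -> concept logits -> label logits,
    plus a policy head over (predicted concepts, current intervention mask)."""

    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_net = nn.Linear(n_features, n_concepts)      # concept logits
        self.label_net = nn.Linear(n_concepts, n_classes)         # label from concepts
        self.policy_net = nn.Linear(2 * n_concepts, n_concepts)   # intervention scores

    def forward(self, x):
        c_logits = self.concept_net(x)
        y_logits = self.label_net(torch.sigmoid(c_logits))
        return c_logits, y_logits


def intervene(c_probs, c_true, mask):
    """Concept intervention: replace predicted concept probabilities with the
    ground-truth values wherever mask == 1 (the usual CBM intervention rule)."""
    return mask * c_true + (1 - mask) * c_probs


def train_step(model, x, c_true, y_true, rollout_steps=2):
    """One hedged training step: standard concept and task losses, plus a sampled
    intervention trajectory whose post-intervention task loss is also penalised.
    The Gumbel-softmax relaxation and the fixed rollout length are assumptions."""
    c_logits, y_logits = model(x)
    c_probs = torch.sigmoid(c_logits)
    loss = F.binary_cross_entropy(c_probs, c_true) + F.cross_entropy(y_logits, y_true)

    mask = torch.zeros_like(c_probs)
    for _ in range(rollout_steps):
        scores = model.policy_net(torch.cat([c_probs, mask], dim=-1))
        scores = scores.masked_fill(mask.bool(), float("-inf"))  # never repeat a concept
        pick = F.gumbel_softmax(scores, tau=1.0, hard=True)      # differentiable choice
        mask = torch.clamp(mask + pick, max=1.0)
        y_after = model.label_net(intervene(c_probs, c_true, mask))
        loss = loss + F.cross_entropy(y_after, y_true)           # reward receptiveness
    return loss


# Tiny usage example with random data, just to show the interfaces fit together.
model = ToyIntCEM(n_features=16, n_concepts=6, n_classes=4)
x = torch.randn(8, 16)
c = torch.randint(0, 2, (8, 6)).float()
y = torch.randint(0, 4, (8,))
train_step(model, x, c, y).backward()
```

In the actual method the intervention policy, the concept embeddings, and the trajectory sampling are considerably more involved; the sketch only shows where a train-time, post-intervention task loss could enter a standard CBM training loop.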
Related papers
- Stochastic Concept Bottleneck Models [8.391254800873599]
Concept Bottleneck Models (CBMs) have emerged as a promising class of interpretable models whose final predictions are based on human-understandable concepts.
We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies.
A single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness.
arXiv Detail & Related papers (2024-06-27T15:38:37Z) - AnyCBMs: How to Turn Any Black Box into a Concept Bottleneck Model [7.674744385997066]
Concept Bottleneck Models enhance the interpretability of neural networks by integrating a layer of human-understandable concepts.
"AnyCBM" transforms any existing trained model into a Concept Bottleneck Model with minimal impact on computational resources.
arXiv Detail & Related papers (2024-05-26T10:19:04Z) - The Buffer Mechanism for Multi-Step Information Reasoning in Language Models [52.77133661679439]
Investigating internal reasoning mechanisms of large language models can help us design better model architectures and training strategies.
In this study, we constructed a symbolic dataset to investigate the mechanisms by which Transformer models employ a vertical thinking strategy.
We proposed a random matrix-based algorithm to enhance the model's reasoning ability, resulting in a 75% reduction in the training time required for the GPT-2 model.
arXiv Detail & Related papers (2024-05-24T07:41:26Z) - Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z) - Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? [8.391254800873599]
We introduce a method to perform concept-based interventions on pretrained neural networks, which are not interpretable by design.
We formalise the notion of intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black boxes (a rough sketch of such an intervention-efficacy curve appears after this list).
arXiv Detail & Related papers (2024-01-24T16:02:14Z) - ConcEPT: Concept-Enhanced Pre-Training for Language Models [57.778895980999124]
ConcEPT aims to infuse conceptual knowledge into pre-trained language models.
It exploits external entity concept prediction to predict the concepts of entities mentioned in the pre-training contexts.
Results of experiments show that ConcEPT gains improved conceptual knowledge with concept-enhanced pre-training.
arXiv Detail & Related papers (2024-01-11T05:05:01Z) - Concept Embedding Models [27.968589555078328]
Concept bottleneck models promote trustworthiness by conditioning classification tasks on an intermediate level of human-like concepts.
Existing concept bottleneck models are unable to find optimal compromises between high task accuracy, robust concept-based explanations, and effective interventions on concepts.
We propose Concept Embedding Models, a novel family of concept bottleneck models which goes beyond the current accuracy-vs-interpretability trade-off by learning interpretable high-dimensional concept representations.
arXiv Detail & Related papers (2022-09-19T14:49:36Z) - Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.