Post-hoc Concept Bottleneck Models
- URL: http://arxiv.org/abs/2205.15480v1
- Date: Tue, 31 May 2022 00:29:26 GMT
- Title: Post-hoc Concept Bottleneck Models
- Authors: Mert Yuksekgonul, Maggie Wang, James Zou
- Abstract summary: Concept Bottleneck Models (CBMs) map the inputs onto a set of interpretable concepts and use the concepts to make predictions.
CBMs are restrictive in practice as they require concept labels in the training data to learn the bottleneck and do not leverage strong pretrained models.
We show that we can turn any neural network into a PCBM without sacrificing model performance while still retaining interpretability benefits.
- Score: 11.358495577593441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept Bottleneck Models (CBMs) map the inputs onto a set of interpretable
concepts ("the bottleneck") and use the concepts to make predictions. A
concept bottleneck enhances interpretability since it can be investigated to
understand what concepts the model "sees" in an input and which of these
concepts are deemed important. However, CBMs are restrictive in practice as
they require concept labels in the training data to learn the bottleneck and do
not leverage strong pretrained models. Moreover, CBMs often do not match the
accuracy of an unrestricted neural network, reducing the incentive to deploy
them in practice. In this work, we address the limitations of CBMs by
introducing Post-hoc Concept Bottleneck models (PCBMs). We show that we can
turn any neural network into a PCBM without sacrificing model performance while
still retaining interpretability benefits. When concept annotation is not
available on the training data, we show that PCBM can transfer concepts from
other datasets or from natural language descriptions of concepts. PCBM also
enables users to quickly debug and update the model to reduce spurious
correlations and improve generalization to new (potentially different) data.
Through a model-editing user study, we show that editing PCBMs via
concept-level feedback can provide significant performance gains without using
any data from the target domain or model retraining.
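To make the recipe concrete, here is a minimal sketch of the post-hoc construction the abstract describes: embed inputs with a frozen backbone, project the embeddings onto a bank of concept vectors (e.g., concept activation vectors learned from a probe dataset, or text embeddings), and fit a sparse linear head on the concept scores. The shapes, hyperparameters, and synthetic data below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal PCBM sketch (illustrative shapes and synthetic data): project
# frozen-backbone embeddings onto a bank of concept vectors, then fit a
# sparse linear head over the resulting concept scores.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n, d, k = 200, 64, 10                        # samples, embedding dim, concepts

embeddings = rng.normal(size=(n, d))         # stand-in for backbone outputs
concept_bank = rng.normal(size=(k, d))       # one direction per named concept
concept_bank /= np.linalg.norm(concept_bank, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=n)          # stand-in task labels

# The bottleneck: each input is summarized by its concept scores.
concept_scores = embeddings @ concept_bank.T          # shape (n, k)

# Interpretable head: a sparse linear model, so every weight is attached
# to a named concept and can be read off or edited.
head = SGDClassifier(loss="log_loss", penalty="elasticnet",
                     alpha=1e-3, l1_ratio=0.5, random_state=0)
head.fit(concept_scores, labels)

# Concept-level edit: zeroing a weight removes a spurious concept's
# influence without touching the backbone or collecting new data.
head.coef_[:, 3] = 0.0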
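```

Because the head is linear over named concepts, inspecting `head.coef_` reads off each concept's contribution, and the zeroing step above is one way to picture the concept-level debugging the abstract mentions.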
Related papers
- How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization? [91.49559116493414]
We propose a novel Concept-Incremental text-to-image Diffusion Model (CIDM).
It resolves catastrophic forgetting and concept neglect, learning new customization tasks in a concept-incremental manner.
Experiments validate that our CIDM surpasses existing custom diffusion models.
arXiv Detail & Related papers (2024-10-23T06:47:29Z)
- MulCPred: Learning Multi-modal Concepts for Explainable Pedestrian Action Prediction [57.483718822429346]
MulCPred is proposed, which explains its predictions with multi-modal concepts represented by training samples.
MulCPred is evaluated on multiple datasets and tasks.
arXiv Detail & Related papers (2024-09-14T14:15:28Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
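The "name" half of discover-then-name can be pictured as a nearest-neighbour lookup: each discovered concept direction gets the label of its closest entry in a vocabulary of text embeddings. All vectors below are random stand-ins, not the paper's pipeline.

```python
# Hedged sketch of the naming step in discover-then-name: label each
# discovered concept direction with its nearest text embedding.
import numpy as np

rng = np.random.default_rng(0)
d, k = 32, 5
vocab = ["striped", "furry", "metallic", "round", "wooden", "glossy"]

concept_dirs = rng.normal(size=(k, d))        # e.g., dictionary atoms
text_emb = rng.normal(size=(len(vocab), d))   # e.g., text-encoder outputs

def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

sims = unit(concept_dirs) @ unit(text_emb).T  # cosine similarities
names = [vocab[i] for i in sims.argmax(axis=1)]
print(names)                                  # one name per concept
```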
- Interpretable Prognostics with Concept Bottleneck Models [5.939858158928473]
Concept Bottleneck Models (CBMs) are inherently interpretable neural network architectures based on concept explanations.
CBMs enable domain experts to intervene on the concept activations at test time.
Our case studies demonstrate that the performance of CBMs can be on par with or superior to black-box models.
arXiv Detail & Related papers (2024-05-27T18:15:40Z)
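Test-time intervention, as mentioned in the entry above, amounts to overwriting a predicted concept with an expert-provided value and re-running only the light concept-to-label head. A toy version with made-up weights:

```python
# Minimal sketch of test-time concept intervention in a CBM: an expert
# overwrites one predicted concept, and only the concept-to-label head
# is re-evaluated. All numbers are illustrative.
import numpy as np

W_head = np.array([[1.5, -2.0, 0.5]])   # concept -> output weights (toy)

def predict(concepts):
    return float(W_head @ concepts)

c_hat = np.array([0.9, 0.8, 0.1])       # model's predicted concepts
print("before intervention:", predict(c_hat))

# Domain expert knows concept 1 is actually absent: intervene, re-predict.
c_hat[1] = 0.0
print("after intervention: ", predict(c_hat))
```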
- Editable Concept Bottleneck Models [36.38845338945026]
Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer.
In many scenarios, practitioners need to remove or insert training data or concepts in a trained CBM for various reasons, such as privacy concerns, data mislabelling, spurious concepts, and concept annotation errors (concept-level removal is illustrated after this entry).
We propose Editable Concept Bottleneck Models (ECBMs) to address these challenges. Specifically, ECBMs support three different levels of data removal: concept-label-level, concept-level, and data-level.
arXiv Detail & Related papers (2024-05-24T11:55:46Z)
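To illustrate the "concept-level" granularity only: removing a concept can be pictured as deleting its column from the concept activations and refitting just the linear head. The brute-force refit below shows the granularity, not the paper's algorithm, which targets edits without full retraining.

```python
# Illustration of concept-level removal: drop one concept's column and
# refit only the linear head; everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.normal(size=(300, 8))                    # concept activations
y = (scores[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)

head = LogisticRegression(max_iter=200).fit(scores, y)

bad_concept = 5                                       # concept to remove
kept = np.delete(scores, bad_concept, axis=1)         # concept-level removal
head_edited = LogisticRegression(max_iter=200).fit(kept, y)
```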
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
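A toy picture of realignment: after the user fixes one concept, a concept-relation matrix propagates the correction to correlated concepts, so fewer manual interventions are needed. The hand-written matrix below stands in for the paper's trainable realignment module.

```python
# Toy concept realignment: an intervention on one concept nudges
# correlated concepts via a (stand-in) relation matrix.
import numpy as np

R = np.array([[1.0, 0.8, 0.0],     # concept 0 strongly implies concept 1
              [0.8, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

c_hat = np.array([0.2, 0.3, 0.9])  # predicted concepts
intervened = {0: 1.0}              # expert sets concept 0 to "present"

c = c_hat.copy()
for idx, val in intervened.items():
    delta = val - c[idx]
    c += delta * R[idx]            # realign correlated concepts
    c[idx] = val                   # keep the intervened value exact
c = np.clip(c, 0.0, 1.0)
print(c)                           # concept 1 moved up with concept 0
```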
- Incremental Residual Concept Bottleneck Models [29.388549499546556]
Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts.
We propose the Incremental Residual Concept Bottleneck Model (Res-CBM) to address the challenge of concept completeness.
Our approach can be applied to any user-defined concept bank, as a post-hoc processing method to enhance the performance of any CBM.
arXiv Detail & Related papers (2024-04-13T12:02:19Z)
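One crude reading of incremental residual concept discovery: greedily add concepts from a user-defined candidate bank while they keep reducing the error of the concept head. The data, bank, and greedy loop below are illustrative assumptions, not the Res-CBM algorithm itself.

```python
# Crude sketch of incrementally growing a concept set from a candidate
# bank: keep adding the concept that most reduces training error.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
emb = rng.normal(size=(400, 16))                 # backbone embeddings (toy)
bank = rng.normal(size=(12, 16))                 # candidate concept bank
y = (emb @ bank[3] + emb @ bank[7] > 0).astype(int)  # planted signal

chosen, best_err = [], 1.0
for _ in range(4):                               # grow up to 4 concepts
    trial = []
    for j in range(len(bank)):
        if j in chosen:
            continue
        scores = emb @ bank[chosen + [j]].T      # current set + candidate
        m = LogisticRegression(max_iter=500).fit(scores, y)
        trial.append((1.0 - m.score(scores, y), j))
    err, j = min(trial)
    if err >= best_err:                          # residual no longer shrinks
        break
    best_err, chosen = err, chosen + [j]
print("selected concepts:", chosen, "train error:", round(best_err, 3))
```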
- Sparse Concept Bottleneck Models: Gumbel Tricks in Contrastive Learning [86.15009879251386]
We propose a novel architecture and method for explainable classification with Concept Bottleneck Models (CBMs).
CBMs require an additional set of concepts to be available (a generic sparse-gating layer is sketched after this entry).
We show a significant increase in accuracy using sparse hidden layers in CLIP-based bottleneck models.
arXiv Detail & Related papers (2024-04-04T09:43:43Z)
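A sparse hidden layer can be sketched with a straight-through Gumbel-softmax gate that keeps only a few concept activations per prediction. The module below is a generic illustration under assumed dimensions, not the paper's exact architecture.

```python
# Illustrative sparse concept layer: Gumbel-softmax gates select a small
# subset of concepts before the linear classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseConceptHead(nn.Module):
    def __init__(self, n_concepts, n_classes, n_select=8):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(n_concepts))
        self.classifier = nn.Linear(n_concepts, n_classes)
        self.n_select = n_select

    def forward(self, concept_scores):           # (batch, n_concepts)
        # Sample several one-hot gates, union them into a sparse mask.
        gates = [F.gumbel_softmax(self.gate_logits, tau=0.5, hard=True)
                 for _ in range(self.n_select)]
        mask = torch.clamp(torch.stack(gates).sum(0), max=1.0)
        return self.classifier(concept_scores * mask)

head = SparseConceptHead(n_concepts=128, n_classes=10)
logits = head(torch.randn(4, 128))               # e.g., CLIP similarity scores
print(logits.shape)                              # torch.Size([4, 10])
```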
- Do Concept Bottleneck Models Respect Localities? [14.77558378567965]
Concept-based methods explain model predictions using human-understandable concepts.
"Localities" involve using only relevant features when predicting a concept's value.
CBMs may not capture localities, even when independent concepts are localised to non-overlapping feature subsets.
arXiv Detail & Related papers (2024-01-02T16:05:23Z)
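Locality can be probed empirically: perturb features a concept should not depend on and measure how much the concept prediction moves. In the toy probe below the predictor is local by construction, so the drift is zero; a non-local CBM would show a large drift.

```python
# Generic locality probe: a concept predictor that respects locality is
# invariant to perturbations outside its relevant feature subset.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(20); w[:5] = rng.normal(size=5)    # concept uses features 0-4

def concept_prob(x):                            # toy concept predictor
    return 1 / (1 + np.exp(-(x @ w)))

x = rng.normal(size=20)
x_pert = x.copy()
x_pert[5:] += rng.normal(scale=3.0, size=15)    # perturb irrelevant features

drift = abs(concept_prob(x) - concept_prob(x_pert))
print(f"prediction drift under irrelevant perturbation: {drift:.2e}")
```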
- Auxiliary Losses for Learning Generalizable Concept-based Models [5.4066453042367435]
Concept Bottleneck Models (CBMs) have gained popularity since their introduction.
CBMs essentially limit the latent space of a model to human-understandable high-level concepts.
We propose the cooperative Concept Bottleneck Model (coop-CBM) to overcome the performance trade-off between accuracy and interpretability.
arXiv Detail & Related papers (2023-11-18T15:50:07Z)
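The auxiliary-loss idea reads as a weighted sum of the task loss, the concept loss, and an extra head's loss. The weights and the auxiliary target below are assumptions for illustration, not the coop-CBM objective itself.

```python
# Sketch of a joint CBM objective with an auxiliary head: task loss +
# concept loss + auxiliary term, with illustrative weights.
import torch
import torch.nn as nn

backbone = nn.Linear(32, 16)                 # stand-in feature extractor
concept_head = nn.Linear(16, 6)              # predicts 6 binary concepts
task_head = nn.Linear(6, 4)                  # label from concepts only
aux_head = nn.Linear(16, 4)                  # auxiliary label head

x = torch.randn(8, 32)
concepts = torch.randint(0, 2, (8, 6)).float()
labels = torch.randint(0, 4, (8,))

feats = backbone(x)
c_logits = concept_head(feats)
loss = (nn.functional.cross_entropy(task_head(torch.sigmoid(c_logits)), labels)
        + 0.5 * nn.functional.binary_cross_entropy_with_logits(c_logits, concepts)
        + 0.5 * nn.functional.cross_entropy(aux_head(feats), labels))
loss.backward()
```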
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
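The same bottleneck recipe carries over to text: pool features from a language model, map them to concept scores, and classify from the concepts alone so the concept layer can be inspected. The tiny encoder below is a stand-in for a real pretrained model, and all dimensions are assumptions.

```python
# Text-side sketch: a concept bottleneck over pooled language-model
# features. nn.EmbeddingBag stands in for a real pretrained encoder.
import torch
import torch.nn as nn

class TextCBM(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_concepts=12, n_classes=3):
        super().__init__()
        self.encoder = nn.EmbeddingBag(vocab, dim)          # stand-in PLM
        self.to_concepts = nn.Linear(dim, n_concepts)
        self.to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, token_ids):
        feats = self.encoder(token_ids)                     # pooled features
        concepts = torch.sigmoid(self.to_concepts(feats))   # inspectable layer
        return self.to_label(concepts), concepts

model = TextCBM()
logits, concepts = model(torch.randint(0, 1000, (2, 16)))
print(logits.shape, concepts.shape)
```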