Probabilistic Concept Bottleneck Models
- URL: http://arxiv.org/abs/2306.01574v1
- Date: Fri, 2 Jun 2023 14:38:58 GMT
- Title: Probabilistic Concept Bottleneck Models
- Authors: Eunji Kim, Dahuin Jung, Sangha Park, Siwon Kim, Sungroh Yoon
- Abstract summary: Interpretable models are designed to make decisions in a human-interpretable manner.
In this study, we address the ambiguity issue that can harm reliability.
We propose Probabilistic Concept Bottleneck Models (ProbCBM)
- Score: 26.789507935869107
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Interpretable models are designed to make decisions in a human-interpretable
manner. Representatively, Concept Bottleneck Models (CBM) follow a two-step
process of concept prediction and class prediction based on the predicted
concepts. CBM provides explanations with high-level concepts derived from
concept predictions; thus, reliable concept predictions are important for
trustworthiness. In this study, we address the ambiguity issue that can harm
reliability. While the existence of a concept can often be ambiguous in the
data, CBM predicts concepts deterministically without considering this
ambiguity. To provide a reliable interpretation against this ambiguity, we
propose Probabilistic Concept Bottleneck Models (ProbCBM). By leveraging
probabilistic concept embeddings, ProbCBM models uncertainty in concept
prediction and provides explanations based on the concept and its corresponding
uncertainty. This uncertainty enhances the reliability of the explanations.
Furthermore, as class uncertainty is derived from concept uncertainty in
ProbCBM, we can explain class uncertainty by means of concept uncertainty. Code
is publicly available at https://github.com/ejkim47/prob-cbm.
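The two-step CBM pipeline with probabilistic concept embeddings can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked repository for that); the weights, shapes, and function names here are illustrative. It shows the core idea: predict a mean and variance per concept, Monte Carlo sample to get concept probabilities plus their uncertainty, then derive the class prediction from the concepts alone.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_concepts(x, W_mu, W_logvar, n_samples=100):
    """First CBM step, made probabilistic: map input features to a
    mean and log-variance per concept, then Monte Carlo sample to get
    concept probabilities and a per-concept uncertainty estimate."""
    mu = x @ W_mu                        # concept means
    std = np.exp(0.5 * (x @ W_logvar))   # concept std devs
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    samples = mu + std * eps             # reparameterised samples
    probs = 1.0 / (1.0 + np.exp(-samples))  # per-sample concept probs
    return probs.mean(axis=0), probs.var(axis=0)

def predict_class(concept_probs, W_cls):
    """Second CBM step: class prediction from predicted concepts only."""
    logits = concept_probs @ W_cls
    e = np.exp(logits - logits.max())
    return e / e.sum()

# toy setup: 4 input features, 3 concepts, 2 classes
x = np.array([1.0, -0.5, 0.3, 0.8])
W_mu = rng.normal(size=(4, 3))
W_logvar = rng.normal(scale=0.1, size=(4, 3))
W_cls = rng.normal(size=(3, 2))

c_mean, c_var = predict_concepts(x, W_mu, W_logvar)
y = predict_class(c_mean, W_cls)
# a large entry in c_var flags an ambiguous concept; because the class
# depends only on the concepts, class uncertainty inherits from c_var
```

Because the class head sees only the concept predictions, any uncertainty reported for a concept directly explains uncertainty in the class output, which is the property the abstract highlights.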
Related papers
- MulCPred: Learning Multi-modal Concepts for Explainable Pedestrian Action Prediction [57.483718822429346]
MulCPred is proposed, which explains its predictions with multi-modal concepts represented by training samples.

MulCPred is evaluated on multiple datasets and tasks.
arXiv Detail & Related papers (2024-09-14T14:15:28Z)
- Evidential Concept Embedding Models: Towards Reliable Concept Explanations for Skin Disease Diagnosis [24.946148305384202]
Concept Bottleneck Models (CBM) have emerged as an active interpretable framework incorporating human-interpretable concepts into decision-making.
We propose an evidential Concept Embedding Model (evi-CEM), which employs evidential learning to model concept uncertainty.
Our evaluation demonstrates that evi-CEM achieves superior performance in terms of concept prediction.
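The evidential approach named above can be illustrated with a small subjective-logic sketch. This is not evi-CEM's implementation; the function and the evidence values are hypothetical. It shows the mechanism: a model outputs non-negative evidence for and against a concept, which parameterises a Beta distribution whose total evidence controls how much uncertainty mass remains.

```python
def evidential_concept(evidence_pos, evidence_neg):
    """Subjective-logic style concept belief: treat a binary concept as
    a Beta distribution with alpha = e+ + 1, beta = e- + 1.
    Low total evidence leaves a large uncertainty mass."""
    alpha = evidence_pos + 1.0
    beta = evidence_neg + 1.0
    strength = alpha + beta
    prob = alpha / strength          # expected concept probability
    uncertainty = 2.0 / strength     # vacuity: shrinks as evidence grows
    return prob, uncertainty

# a well-evidenced concept vs. an ambiguous one
p_sure, u_sure = evidential_concept(18.0, 2.0)
p_ambig, u_ambig = evidential_concept(1.0, 1.0)
# the ambiguous concept carries far more uncertainty mass
```

Unlike a plain sigmoid output, this separates "confidently 50/50" (lots of conflicting evidence) from "no idea" (little evidence of either kind): both give prob near 0.5, but only the latter has high uncertainty.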
arXiv Detail & Related papers (2024-06-27T12:29:50Z)
- On the Concept Trustworthiness in Concept Bottleneck Models [39.928868605678744]
Concept Bottleneck Models (CBMs) break down the reasoning process into the input-to-concept mapping and the concept-to-label prediction.
Despite the transparency of the concept-to-label prediction, the mapping from the input to the intermediate concept remains a black box.
A pioneering metric, referred to as concept trustworthiness score, is proposed to gauge whether the concepts are derived from relevant regions.
An enhanced CBM is introduced, enabling concept predictions to be made specifically from distinct parts of the feature map.
arXiv Detail & Related papers (2024-03-21T12:24:53Z)
- Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations [15.23014992362639]
Concept bottleneck models (CBMs) have been successful in providing concept-based interpretations for black-box deep learning models.
We propose Energy-based Concept Bottleneck Models (ECBMs)
Our ECBMs use a set of neural networks to define the joint energy of candidate (input, concept, class) tuples.
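The joint-energy idea can be sketched with toy bilinear scorers standing in for the neural networks (the real ECBM learns these energy terms; everything below, including `joint_energy`, is an illustrative assumption). Prediction becomes a search for the lowest-energy (concept, class) configuration for a given input.

```python
import numpy as np

rng = np.random.default_rng(1)

def joint_energy(x, c, y, U, V):
    """Toy stand-in for a learned joint energy E(x, c, y): two bilinear
    terms score input-concept and concept-class compatibility."""
    return -(x @ U @ c) - (c @ V[:, y])

# enumerate candidate (binary concept vector, class) pairs and take the
# lowest-energy configuration as the joint prediction
x = rng.normal(size=4)
U = rng.normal(size=(4, 3))   # input-concept compatibility
V = rng.normal(size=(3, 2))   # concept-class compatibility
candidates = [(np.array(c, dtype=float), y)
              for c in np.ndindex(2, 2, 2) for y in (0, 1)]
energies = [joint_energy(x, c, y, U, V) for c, y in candidates]
best_c, best_y = candidates[int(np.argmin(energies))]
```

Because prediction is energy minimisation over all variables jointly, intervening on a concept (clamping an entry of `c`) and re-minimising over the rest falls out of the same formulation, which is the unification the entry refers to.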
arXiv Detail & Related papers (2024-01-25T12:46:37Z)
- Do Concept Bottleneck Models Respect Localities? [14.77558378567965]
Concept-based methods explain model predictions using human-understandable concepts.
"Localities" involve using only relevant features when predicting a concept's value.
CBMs may not capture localities, even when independent concepts are localised to non-overlapping feature subsets.
arXiv Detail & Related papers (2024-01-02T16:05:23Z)
- Statistically Significant Concept-based Explanation of Image Classifiers via Model Knockoffs [22.576922942465142]
Concept-based explanations may produce false positives, mistakenly regarding unrelated concepts as important for the prediction task.
We propose a method that uses a deep learning model to learn the image concepts and then uses knockoff samples to select the concepts important for prediction.
arXiv Detail & Related papers (2023-05-27T05:40:05Z)
- Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We proposed Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrated CG outperforms CAV in both toy examples and real world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z) - Uncertainty estimation of pedestrian future trajectory using Bayesian
approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose quantifying forecasting uncertainty via Bayesian approximation, capturing variability that deterministic approaches miss.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
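A common Bayesian approximation of the kind this entry studies is Monte Carlo dropout; the sketch below is a generic illustration, not the paper's model, and the tiny network and weights are made up. Dropout stays active at test time, and the spread across stochastic forward passes approximates the model's epistemic uncertainty about the forecast.

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_dropout_forecast(x, W1, W2, p_drop=0.5, n_passes=200):
    """Monte Carlo dropout: run many stochastic forward passes with
    dropout on; the mean is the forecast and the standard deviation
    approximates epistemic uncertainty."""
    preds = []
    for _ in range(n_passes):
        mask = rng.random(W1.shape[1]) > p_drop        # drop hidden units
        h = np.maximum(x @ W1, 0.0) * mask / (1.0 - p_drop)
        preds.append(h @ W2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# toy one-step trajectory forecast: state features -> (dx, dy)
x = np.array([0.5, 1.0, -0.3])
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 2))
mean_step, std_step = mc_dropout_forecast(x, W1, W2)
# a single deterministic pass would report mean_step alone and hide
# the uncertainty that std_step makes explicit
```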
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Kernelized Concept Erasure [108.65038124096907]
We propose a kernelization of a linear minimax game for concept erasure.
It is possible to prevent specific non-linear adversaries from predicting the concept.
However, the protection does not transfer to different nonlinear adversaries.
arXiv Detail & Related papers (2022-01-28T15:45:13Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
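The ensemble-based branch mentioned above can be sketched briefly (an illustrative example, not the paper's method; the random weight matrices stand in for trained members). Averaging the members' predictive distributions gives total uncertainty as the entropy of the average, and the gap between that and the members' mean entropy isolates the epistemic (disagreement) part.

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_uncertainty(x, members):
    """Ensemble-based uncertainty: entropy of the averaged predictive
    distribution is total uncertainty; subtracting the members' mean
    entropy leaves the epistemic (disagreement) component."""
    probs = np.array([softmax(x @ W) for W in members])
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()
    expected = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()
    return mean_p, total, total - expected

# five toy linear classifiers acting as ensemble members
members = [rng.normal(size=(4, 3)) for _ in range(5)]
x = np.array([0.2, -1.0, 0.7, 0.1])
mean_p, total_H, epistemic = ensemble_uncertainty(x, members)
```

The epistemic term is non-negative by the concavity of entropy, and it vanishes exactly when all members agree, which is what makes ensemble disagreement a usable uncertainty signal.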
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.