Adaptive Test-Time Intervention for Concept Bottleneck Models
- URL: http://arxiv.org/abs/2503.06730v2
- Date: Mon, 14 Apr 2025 16:39:33 GMT
- Title: Adaptive Test-Time Intervention for Concept Bottleneck Models
- Authors: Matthew Shen, Aliyah Hsu, Abhineet Agarwal, Bin Yu
- Abstract summary: Concept bottleneck models (CBMs) aim to improve model interpretability by predicting human-level "concepts".
We propose to use Fast Interpretable Greedy Sum-Trees (FIGS) to obtain Binary Distillation (BD).
FIGS-BD distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept bottleneck models (CBMs) aim to improve model interpretability by predicting human-level "concepts" in a bottleneck within a deep learning model architecture. However, how the predicted concepts are used in predicting the target either remains black-box or is simplified to maintain interpretability at the cost of prediction performance. We propose to use Fast Interpretable Greedy Sum-Trees (FIGS) to obtain Binary Distillation (BD). This new method, called FIGS-BD, distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model, while maintaining the competitive prediction performance of the CBM teacher. FIGS-BD can be used in downstream tasks to explain and decompose CBM predictions into interpretable binary-concept-interaction attributions and to guide adaptive test-time intervention. Across 4 datasets, we demonstrate that our adaptive test-time intervention identifies key concepts that significantly improve performance in realistic human-in-the-loop settings that allow for only limited concept interventions.
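The budgeted test-time intervention described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's FIGS-BD implementation: the attribution score (head weight scaled by concept uncertainty), the linear concept-to-target head, and all function names below are assumptions made for the sketch.

```python
def rank_concepts(c_hat, weights):
    """Rank concepts by an assumed attribution score: |head weight|
    scaled by how uncertain the predicted probability is
    (probabilities near 0.5 are most uncertain)."""
    scores = [abs(w) * (0.5 - abs(p - 0.5)) for p, w in zip(c_hat, weights)]
    return sorted(range(len(c_hat)), key=lambda i: scores[i], reverse=True)

def intervene(c_hat, c_true, order, budget):
    """Replace only the top-`budget` ranked concepts with their
    human-provided ground-truth values."""
    c = list(c_hat)
    for i in order[:budget]:
        c[i] = c_true[i]
    return c

def predict(c, weights, bias=0.0):
    """Linear concept-to-target head, standing in for the distilled model."""
    return bias + sum(w * ci for w, ci in zip(weights, c))

# Example: three concepts, a budget of one intervention.
c_hat = [0.9, 0.55, 0.1]    # predicted concept probabilities
c_true = [1.0, 0.0, 0.0]    # ground-truth concepts from the human
weights = [2.0, 1.5, 0.5]

order = rank_concepts(c_hat, weights)          # concept 1 is uncertain and heavily weighted
c_int = intervene(c_hat, c_true, order, budget=1)
```

In this example the single allowed intervention is spent on the uncertain, heavily weighted concept 1, which moves the prediction furthest toward the one computed from fully correct concepts.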
Related papers
- LLM Pretraining with Continuous Concepts [71.98047075145249]
Next token prediction has been the standard training objective used in large language model pretraining.
We propose Continuous Concept Mixing (CoCoMix), a novel pretraining framework that combines discrete next token prediction with continuous concepts.
arXiv Detail & Related papers (2025-02-12T16:00:11Z)
- Survival Concept-Based Learning Models
Two novel models are proposed to integrate concept-based learning with survival analysis.
SurvCBM is based on the architecture of the well-known concept bottleneck model.
SurvRCM uses concepts as regularization to enhance accuracy.
arXiv Detail & Related papers (2025-02-09T16:41:04Z)
- VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance [16.16577751549164]
Concept Bottleneck Models (CBMs) provide interpretable prediction.
CBMs encode human-understandable concepts to explain models' decisions.
We propose Vision-Language-Guided Concept Bottleneck Model (VLG-CBM) to enable faithful interpretability.
arXiv Detail & Related papers (2024-07-18T19:44:44Z)
- Stochastic Concept Bottleneck Models [8.391254800873599]
Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on human-understandable concepts.
We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies.
A single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness.
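The idea that one intervention can propagate to correlated concepts can be sketched with a hypothetical linear update. This is an illustration only, not SCBMs' actual probabilistic model: each remaining concept is shifted in proportion to an assumed correlation with the intervened concept.

```python
def propagate_intervention(c_hat, idx, new_value, corr):
    """Set concept `idx` to `new_value` and shift every other concept by
    its correlation with the intervened concept, clipped to [0, 1].
    A hypothetical linear update, not the SCBM posterior."""
    delta = new_value - c_hat[idx]
    updated = []
    for j, p in enumerate(c_hat):
        if j == idx:
            updated.append(new_value)
        else:
            updated.append(min(1.0, max(0.0, p + corr[idx][j] * delta)))
    return updated

# Example: intervening on concept 0 also pulls up the correlated concept 1,
# while the uncorrelated concept 2 is untouched.
corr = [[1.0, 0.8, 0.0],
        [0.8, 1.0, 0.0],
        [0.0, 0.0, 1.0]]
c_new = propagate_intervention([0.2, 0.3, 0.5], idx=0, new_value=1.0, corr=corr)
```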
arXiv Detail & Related papers (2024-06-27T15:38:37Z)
- CogDPM: Diffusion Probabilistic Models via Cognitive Predictive Coding [62.075029712357]
This work introduces Cognitive Diffusion Probabilistic Models (CogDPM).
CogDPM features a precision estimation method based on the hierarchical sampling capabilities of diffusion models and weights the guidance with precision weights estimated from the inherent properties of diffusion models.
We apply CogDPM to real-world prediction tasks using the United Kingdom precipitation and surface wind datasets.
arXiv Detail & Related papers (2024-05-03T15:54:50Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z)
- Beyond Concept Bottleneck Models: How to Make Black Boxes Intervenable? [8.391254800873599]
We introduce a method to perform concept-based interventions on pretrained neural networks, which are not interpretable by design.
We formalise the notion of intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black boxes.
arXiv Detail & Related papers (2024-01-24T16:02:14Z)
- Do Concept Bottleneck Models Respect Localities? [14.77558378567965]
Concept-based methods explain model predictions using human-understandable concepts.
"Localities" involve using only relevant features when predicting a concept's value.
CBMs may not capture localities, even when independent concepts are localised to non-overlapping feature subsets.
arXiv Detail & Related papers (2024-01-02T16:05:23Z)
- Learning to Receive Help: Intervention-Aware Concept Embedding Models [44.1307928713715]
Concept Bottleneck Models (CBMs) tackle the opacity of neural architectures by constructing and explaining their predictions using a set of high-level concepts.
Recent work has shown that intervention efficacy can be highly dependent on the order in which concepts are intervened on.
We propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.
arXiv Detail & Related papers (2023-09-29T02:04:24Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Post-hoc Concept Bottleneck Models [11.358495577593441]
Concept Bottleneck Models (CBMs) map the inputs onto a set of interpretable concepts and use the concepts to make predictions.
CBMs are restrictive in practice as they require concept labels in the training data to learn the bottleneck and do not leverage strong pretrained models.
We show that we can turn any neural network into a Post-hoc Concept Bottleneck Model (PCBM) without sacrificing model performance while still retaining interpretability benefits.
arXiv Detail & Related papers (2022-05-31T00:29:26Z)
- Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs at minimal computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.