Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts
- URL: http://arxiv.org/abs/2503.16199v1
- Date: Thu, 20 Mar 2025 14:45:55 GMT
- Title: Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts
- Authors: Andrea Pugnana, Riccardo Massidda, Francesco Giannini, Pietro Barbiero, Mateo Espinosa Zarlenga, Roberto Pellungrini, Gabriele Dominici, Fosca Giannotti, Davide Bacciu
- Abstract summary: Concept Bottleneck Models (CBMs) are machine learning models that improve interpretability. CBMs assume the availability of humans who can identify the need to intervene and always provide correct interventions. We propose Deferring CBMs (DCBMs), a novel framework that allows CBMs to learn when an intervention is needed.
- Score: 21.771324228992675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concept Bottleneck Models (CBMs) are machine learning models that improve interpretability by grounding their predictions on human-understandable concepts, allowing for targeted interventions in their decision-making process. However, when intervened on, CBMs assume the availability of humans who can identify the need to intervene and always provide correct interventions. Both assumptions are unrealistic and impractical, considering labor costs and human error-proneness. In contrast, Learning to Defer (L2D) extends supervised learning by allowing machine learning models to identify cases where a human is more likely to be correct than the model, thus leading to deferral systems with improved performance. In this work, we draw inspiration from L2D and propose Deferring CBMs (DCBMs), a novel framework that allows CBMs to learn when an intervention is needed. To this end, we model DCBMs as a composition of deferral systems and derive a consistent L2D loss to train them. Moreover, by relying on a CBM architecture, DCBMs can explain why deferral occurs on the final task. Our results show that DCBMs achieve high predictive performance and interpretability, at the cost of deferring more to humans.
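The abstract names a consistent L2D loss but does not spell it out, so below is a minimal, hypothetical PyTorch sketch of the general recipe it describes: a concept bottleneck feeding a task head with one extra "defer" output, trained with a softmax learning-to-defer surrogate in the style of Mozannar & Sontag (2020). All module and function names here (`DeferringCBMSketch`, `l2d_surrogate_loss`, and so on) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeferringCBMSketch(nn.Module):
    """Hypothetical CBM with a learned defer option (NOT the authors' architecture)."""
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_encoder = nn.Linear(in_dim, n_concepts)   # x -> concept logits
        self.task_head = nn.Linear(n_concepts, n_classes + 1)  # concepts -> K classes + 1 defer logit

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_encoder(x))  # interpretable bottleneck
        return concepts, self.task_head(concepts)

def l2d_surrogate_loss(logits, y, expert_pred):
    """Softmax L2D surrogate in the style of Mozannar & Sontag (2020):
    the defer logit (index K) is rewarded only on examples where the
    possibly-inaccurate expert happens to be correct."""
    k = logits.shape[1] - 1                      # index of the defer logit
    log_p = F.log_softmax(logits, dim=1)
    expert_correct = (expert_pred == y).float()  # 1 where the expert label matches ground truth
    ce_task = log_p.gather(1, y.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    ce_defer = expert_correct * log_p[:, k]               # defer term, gated by expert accuracy
    return -(ce_task + ce_defer).mean()

# Toy usage with a simulated, imperfect expert:
model = DeferringCBMSketch(in_dim=64, n_concepts=10, n_classes=5)
x = torch.randn(32, 64)
y = torch.randint(0, 5, (32,))
expert = torch.randint(0, 5, (32,))
_, logits = model(x)
loss = l2d_surrogate_loss(logits, y, expert)
```

At inference time, such a model would defer whenever the defer logit is the argmax over the K+1 outputs, and the sigmoid concept activations offer a human-readable account of which concepts drove the deferral.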
Related papers
- Beyond Words: Evaluating and Bridging Epistemic Divergence in User-Agent Interaction via Theory of Mind [8.740788873949471]
Large Language Models (LLMs) have developed rapidly and are widely applied to both general-purpose and professional tasks. They still struggle to comprehend and respond to true user needs when intentions and instructions are imprecisely conveyed.
arXiv Detail & Related papers (2026-02-14T16:01:59Z) - Counterfactual Training: Teaching Models Plausible and Actionable Explanations [52.967743166658984]
We propose a novel training regime, termed counterfactual training, to increase the explanatory capacity of models. Counterfactual explanations have emerged as a popular post-hoc explanation method for opaque machine learning models.
arXiv Detail & Related papers (2026-01-22T18:56:14Z) - Controllable Concept Bottleneck Models [55.03639763625018]
Controllable Concept Bottleneck Models (CCBMs) support three granularities of model editing: concept-label-level, concept-level, and data-level. CCBMs enjoy mathematically rigorous closed-form approximations derived from influence functions that obviate the need for retraining.
arXiv Detail & Related papers (2026-01-01T19:30:06Z) - The Impact of Concept Explanations and Interventions on Human-Machine Collaboration [0.03999851878220877]
Concept Bottleneck Models (CBMs) were introduced to predict human-defined concepts as an intermediate step before predicting task labels. CBMs improve interpretability compared to standard Deep Neural Networks (DNNs). However, this increased alignment did not translate to a significant increase in task accuracy.
arXiv Detail & Related papers (2025-10-19T16:44:24Z) - IntentionVLA: Generalizable and Efficient Embodied Intention Reasoning for Human-Robot Interaction [51.130510883952546]
Vision-Language-Action (VLA) models leverage pretrained vision-language models (VLMs) to couple perception with robotic control. We propose IntentionVLA, a VLA framework with a curriculum training paradigm and an efficient inference mechanism. Our proposed method first leverages carefully designed reasoning data that combine intention inference, spatial grounding, and compact embodied reasoning.
arXiv Detail & Related papers (2025-10-09T04:49:46Z) - Using Reinforcement Learning to Train Large Language Models to Explain Human Decisions [11.40240971657506]
In this work, we explore the potential of pretrained large language models to serve as dual-purpose cognitive models. We employ reinforcement learning with outcome-based rewards to guide LLMs toward generating explicit reasoning traces for explaining human risky choices.
arXiv Detail & Related papers (2025-05-16T18:22:05Z) - Latent Diffusion Planning for Imitation Learning [78.56207566743154]
Latent Diffusion Planning (LDP) is a modular approach consisting of a planner and inverse dynamics model.
By separating planning from action prediction, LDP can benefit from the denser supervision signals of suboptimal and action-free data.
On simulated visual robotic manipulation tasks, LDP outperforms state-of-the-art imitation learning approaches.
arXiv Detail & Related papers (2025-04-23T17:53:34Z) - A Data-Centric Revisit of Pre-Trained Vision Models for Robot Learning [67.72413262980272]
Pre-trained vision models (PVMs) are fundamental to modern robotics, yet their optimal configuration remains unclear. We develop SlotMIM, a method that induces object-centric representations by introducing a semantic bottleneck. Our approach achieves significant improvements over prior work in image recognition, scene understanding, and robot learning evaluations.
arXiv Detail & Related papers (2025-03-10T06:18:31Z) - Enhancing CBMs Through Binary Distillation with Applications to Test-Time Intervention [6.31833744906105]
We use Fast Interpretable Greedy Sum-Trees (FIGS) to obtain Binary Distillation (BD). FIGS-BD distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model. We demonstrate that adaptive test-time intervention identifies key concepts that significantly improve performance in realistic human-in-the-loop settings.
arXiv Detail & Related papers (2025-03-09T19:03:48Z) - Predicting and Understanding Human Action Decisions: Insights from Large Language Models and Cognitive Instance-Based Learning [0.0]
Large Language Models (LLMs) have demonstrated their capabilities across various tasks.
This paper exploits the reasoning and generative capabilities of LLMs to predict human behavior in two sequential decision-making tasks.
We compare the performance of LLMs with a cognitive instance-based learning model, which imitates human experiential decision-making.
arXiv Detail & Related papers (2024-07-12T14:13:06Z) - Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z) - DistiLLM: Towards Streamlined Distillation for Large Language Models [53.46759297929675]
DistiLLM is a more effective and efficient knowledge distillation (KD) framework for auto-regressive language models.
DistiLLM comprises two components: (1) a novel skew Kullback-Leibler divergence loss, where we unveil and leverage its theoretical properties, and (2) an adaptive off-policy approach designed to enhance the efficiency in utilizing student-generated outputs.
arXiv Detail & Related papers (2024-02-06T11:10:35Z) - Learning to Intervene on Concept Bottlenecks [23.949827380111476]
Concept bottleneck memory models (CB2Ms) leverage a two-fold memory to generalize interventions to appropriate novel situations.
CB2Ms are able to successfully generalize interventions to unseen data and can indeed identify wrongly inferred concepts.
arXiv Detail & Related papers (2023-08-25T15:54:22Z) - Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z) - Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that, for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase than the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Faithful and Plausible Explanations of Medical Code Predictions [12.156363504753244]
Explanations must balance faithfulness to the model's decision-making with their plausibility to a domain expert.
We train a proxy model that mimics the behavior of the trained model and provides fine-grained control over these trade-offs.
We evaluate our approach on the task of assigning ICD codes to clinical notes to demonstrate that explanations from the proxy model are faithful and replicate the trained model behavior.
arXiv Detail & Related papers (2021-04-16T05:13:36Z) - Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.