Selective Concept Models: Permitting Stakeholder Customisation at
Test-Time
- URL: http://arxiv.org/abs/2306.08424v1
- Date: Wed, 14 Jun 2023 10:37:13 GMT
- Title: Selective Concept Models: Permitting Stakeholder Customisation at
Test-Time
- Authors: Matthew Barker, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian
Weller, Umang Bhatt
- Abstract summary: We propose Selective COncept Models (SCOMs) which make predictions using only a subset of concepts.
We show that SCOMs only require a fraction of the total concepts to achieve optimal accuracy on multiple real-world datasets.
Using CUB-Sel, we show that humans have unique individual preferences for the choice of concepts they prefer to reason about.
- Score: 32.138390859351425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept-based models perform prediction using a set of concepts that are
interpretable to stakeholders. However, such models often involve a fixed,
large number of concepts, which may place a substantial cognitive load on
stakeholders. We propose Selective COncept Models (SCOMs) which make
predictions using only a subset of concepts and can be customised by
stakeholders at test-time according to their preferences. We show that SCOMs
only require a fraction of the total concepts to achieve optimal accuracy on
multiple real-world datasets. Further, we collect and release a new dataset,
CUB-Sel, consisting of human concept set selections for 900 bird images from
the popular CUB dataset. Using CUB-Sel, we show that humans have unique
individual preferences for the choice of concepts they prefer to reason about,
and struggle to identify the most theoretically informative concepts. The
customisation and concept selection provided by SCOM improves the efficiency of
interpretation and intervention for stakeholders.
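The core idea in the abstract can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's implementation: all names, the random weights, and the zero-masking scheme are assumptions. A concept predictor scores each concept, and the label predictor reasons only over the subset of concepts a stakeholder selects at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 8 concepts, 3 classes (illustration only).
n_concepts, n_classes = 8, 3
W_label = rng.normal(size=(n_concepts, n_classes))  # concept -> label weights

def predict_concepts(x):
    """Stand-in for a learned concept predictor g(x); returns scores in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def scom_predict(x, selected):
    """Predict a label from only the stakeholder-selected concept subset.

    Unselected concepts are masked to zero, so the label predictor
    reasons over a customised subset chosen at test time.
    """
    c = predict_concepts(x)
    mask = np.zeros(n_concepts)
    mask[list(selected)] = 1.0
    logits = (c * mask) @ W_label
    return int(np.argmax(logits))

x = rng.normal(size=n_concepts)
y_full = scom_predict(x, range(n_concepts))  # use every concept
y_sub = scom_predict(x, [0, 2, 5])           # a stakeholder's chosen subset
```

Changing the `selected` argument between calls is the test-time customisation: no retraining is needed to honour a different stakeholder's concept preferences.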
Related papers
- Interpretable Reward Modeling with Active Concept Bottlenecks [54.00085739303773]
We introduce Concept Bottleneck Reward Models (CB-RM), a reward modeling framework that enables interpretable preference learning. Unlike standard RLHF methods that rely on opaque reward functions, CB-RM decomposes reward prediction into human-interpretable concepts. We formalize an active learning strategy that dynamically acquires the most informative concept labels.
arXiv Detail & Related papers (2025-07-07T06:26:04Z)
- Personalized Interpretability -- Interactive Alignment of Prototypical Parts Networks [16.958657905772846]
Concept-based interpretable neural networks have gained significant attention due to their intuitive and easy-to-understand explanations. A major limitation is that these explanations may not always be comprehensible to users due to concept inconsistency. This inconsistency breaks the alignment between model reasoning and human understanding. We introduce YoursProtoP, a novel interactive strategy that enables the personalization of prototypical parts.
arXiv Detail & Related papers (2025-06-05T19:30:20Z)
- Individualised Counterfactual Examples Using Conformal Prediction Intervals [12.895240620484572]
High-dimensional feature spaces that are typical of machine learning classification models admit many possible counterfactual examples to a decision. We explicitly model the knowledge of the individual, and assess the uncertainty of predictions which the individual makes by the width of a conformal prediction interval. We present a synthetic data set on a hypercube which allows us to fully visualise the decision boundary. In this synthetic data set we explore the impact of a single CPICF on the knowledge of an individual locally around the original query.
arXiv Detail & Related papers (2025-05-28T13:13:52Z)
- Beyond Predictions: A Participatory Framework for Multi-Stakeholder Decision-Making [3.3044728148521623]
We propose a novel participatory framework that redefines decision-making as a multi-stakeholder optimization problem.
Our framework captures each actor's preferences through context-dependent reward functions.
We introduce a synthetic scoring mechanism that exploits user-defined preferences across multiple metrics to rank decision-making strategies.
arXiv Detail & Related papers (2025-02-12T16:27:40Z)
- LLM Pretraining with Continuous Concepts [71.98047075145249]
Next token prediction has been the standard training objective used in large language model pretraining.
We propose Continuous Concept Mixing (CoCoMix), a novel pretraining framework that combines discrete next token prediction with continuous concepts.
arXiv Detail & Related papers (2025-02-12T16:00:11Z)
- Interactive Visualization Recommendation with Hier-SUCB [52.11209329270573]
We propose an interactive personalized visualization recommendation (PVisRec) system that learns on user feedback from previous interactions.
For more interactive and accurate recommendations, we propose Hier-SUCB, a contextual semi-bandit in the PVisRec setting.
arXiv Detail & Related papers (2025-02-05T17:14:45Z)
- Personalized Preference Fine-tuning of Diffusion Models [75.22218338096316]
We introduce PPD, a multi-reward optimization objective that aligns diffusion models with personalized preferences.
With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way.
Our approach achieves an average win rate of 76% over Stable Cascade, generating images that more accurately reflect specific user preferences.
arXiv Detail & Related papers (2025-01-11T22:38:41Z)
- Diverse Concept Proposals for Concept Bottleneck Models [23.395270888378594]
Concept bottleneck models are interpretable predictive models that are often used in domains where model trust is a key priority, such as healthcare.
Our proposed approach identifies a number of predictive concepts that explain the data.
By offering multiple alternative explanations, we allow the human expert to choose the one that best aligns with their expectation.
arXiv Detail & Related papers (2024-12-24T00:12:34Z)
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z)
- Bayesian Concept Bottleneck Models with LLM Priors [9.368695619127084]
Concept Bottleneck Models (CBMs) have been proposed as a compromise between white-box and black-box models, aiming to achieve interpretability without sacrificing accuracy.
This work investigates a novel approach that sidesteps these challenges: BC-LLM iteratively searches over a potentially infinite set of concepts within a Bayesian framework, in which Large Language Models (LLMs) serve as both a concept extraction mechanism and prior.
arXiv Detail & Related papers (2024-10-21T01:00:33Z)
- Visual Data Diagnosis and Debiasing with Concept Graphs [50.84781894621378]
We present ConBias, a framework for diagnosing and mitigating Concept co-occurrence Biases in visual datasets.
We show that by employing a novel clique-based concept balancing strategy, we can mitigate these imbalances, leading to enhanced performance on downstream tasks.
arXiv Detail & Related papers (2024-09-26T16:59:01Z)
- Explain via Any Concept: Concept Bottleneck Model with Open Vocabulary Concepts [8.028021897214238]
"OpenCBM" is the first CBM with concepts of open vocabularies.
Our model significantly outperforms the previous state-of-the-art CBM by 9% in the classification accuracy on the benchmark dataset CUB-200-2011.
arXiv Detail & Related papers (2024-08-05T06:42:00Z)
- DegustaBot: Zero-Shot Visual Preference Estimation for Personalized Multi-Object Rearrangement [53.86523017756224]
We present DegustaBot, an algorithm for visual preference learning that solves household multi-object rearrangement tasks according to personal preference.
We collect a large dataset of naturalistic personal preferences in a simulated table-setting task.
We find that 50% of our model's predictions are likely to be found acceptable by at least 20% of people.
arXiv Detail & Related papers (2024-07-11T21:28:02Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performances.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- Auxiliary Losses for Learning Generalizable Concept-based Models [5.4066453042367435]
Concept Bottleneck Models (CBMs) have gained popularity since their introduction.
CBMs essentially limit the latent space of a model to human-understandable high-level concepts.
We propose cooperative-Concept Bottleneck Model (coop-CBM) to overcome the performance trade-off.
arXiv Detail & Related papers (2023-11-18T15:50:07Z)
- Concept-Centric Transformers: Enhancing Model Interpretability through Object-Centric Concept Learning within a Shared Global Workspace [1.6574413179773757]
Concept-Centric Transformers is a simple yet effective configuration of the shared global workspace for interpretability.
We show that our model achieves better classification accuracy than all baselines across all problems.
arXiv Detail & Related papers (2023-05-25T06:37:39Z)
- Interactive Concept Bottleneck Models [14.240165842615674]
Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task.
We extend CBMs to interactive prediction settings where the model can query a human collaborator for the label to some concepts.
We develop an interaction policy that, at prediction time, chooses which concepts to request a label for so as to maximally improve the final prediction.
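The interaction policy described for Interactive Concept Bottleneck Models can be sketched as a greedy heuristic. This is a hypothetical simplification, not the paper's actual policy: pick the unqueried concept whose answer is expected to most reduce entropy of the final label prediction, weighting each possible answer by the model's predicted concept probability.

```python
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def label_posterior(concepts, W):
    """Softmax over label logits computed from (soft) concept values."""
    logits = concepts @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def choose_concept_to_query(concepts, probs, W, unqueried):
    """Greedy sketch: return the unqueried concept whose answer is expected
    to most reduce entropy of the label prediction.

    `concepts` holds current soft concept values; `probs` holds the model's
    predicted probability that each binary concept is true.
    """
    h0 = entropy(label_posterior(concepts, W))
    best, best_gain = None, -np.inf
    for k in unqueried:
        gain = 0.0
        for value, p_value in ((0.0, 1.0 - probs[k]), (1.0, probs[k])):
            c = concepts.copy()
            c[k] = value  # imagine the human collaborator answers `value`
            gain += p_value * (h0 - entropy(label_posterior(c, W)))
        if gain > best_gain:
            best, best_gain = k, gain
    return best

# Toy example: concept 0 drives the label, concept 1 is uninformative.
W = np.array([[2.0, -2.0], [0.0, 0.0]])
best = choose_concept_to_query(np.array([0.5, 0.5]), np.array([0.9, 0.5]), W, [0, 1])
```

In the toy example the policy asks about concept 0, since resolving an uninformative concept cannot change the prediction.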
arXiv Detail & Related papers (2022-12-14T11:39:18Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Sparse-Interest Network for Sequential Recommendation [78.83064567614656]
We propose a novel Sparse Interest NEtwork (SINE) for sequential recommendation.
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool.
SINE can achieve substantial improvement over state-of-the-art methods.
arXiv Detail & Related papers (2021-02-18T11:03:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.