GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
- URL: http://arxiv.org/abs/2302.14165v2
- Date: Wed, 1 Mar 2023 01:36:37 GMT
- Title: GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
- Authors: Zijie J. Wang, Jennifer Wortman Vaughan, Rich Caruana, Duen Horng Chau
- Abstract summary: We present GAM Coach, a novel open-source system that adapts integer linear programming to generate customizable counterfactual explanations for Generalized Additive Models (GAMs).
A quantitative user study with 41 participants shows our tool is usable and useful, and users prefer personalized recourse plans over generic plans.
- Score: 28.137254018280576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) recourse techniques are increasingly used in
high-stakes domains, providing end users with actions to alter ML predictions,
but they assume ML developers understand what input variables can be changed.
However, a recourse plan's actionability is subjective and unlikely to match
developers' expectations completely. We present GAM Coach, a novel open-source
system that adapts integer linear programming to generate customizable
counterfactual explanations for Generalized Additive Models (GAMs), and
leverages interactive visualizations to enable end users to iteratively
generate recourse plans meeting their needs. A quantitative user study with 41
participants shows our tool is usable and useful, and users prefer personalized
recourse plans over generic plans. Through a log analysis, we explore how users
discover satisfactory recourse plans, and provide empirical evidence that
transparency can lead to more opportunities for everyday users to discover
counterintuitive patterns in ML models. GAM Coach is available at:
https://poloclub.github.io/gam-coach/.
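To make the abstract's core mechanism concrete: a GAM's score is a sum of per-feature shape functions, so after discretizing each feature into bins, finding a minimal-cost recourse plan becomes a bin-selection problem that an integer linear program can solve. The sketch below, a minimal illustration using the `pulp` solver with made-up toy scores, costs, and threshold, shows this idea in miniature; it is not GAM Coach's actual formulation, which handles continuous features, interaction terms, and user-specified constraints.

```python
# Minimal ILP counterfactual sketch for a binary-classification GAM.
# All numbers are toy values for illustration only.
import pulp

# Per-feature shape-function scores for each discretized bin,
# and the user-facing "cost" of moving into that bin (staying = 0).
bins = {
    "income":      {"low": -1.0, "mid": 0.2, "high": 0.9},
    "credit_util": {"low":  0.6, "mid": 0.0, "high": -0.8},
}
cost = {
    "income":      {"low": 0.0, "mid": 1.0, "high": 3.0},
    "credit_util": {"low": 2.0, "mid": 1.0, "high": 0.0},
}
current = {"income": "low", "credit_util": "high"}  # rejected applicant
intercept, threshold = -0.2, 0.0  # score > threshold => approved

prob = pulp.LpProblem("counterfactual", pulp.LpMinimize)
# One binary variable per (feature, bin): 1 if the plan uses that bin.
x = {
    (f, b): pulp.LpVariable(f"x_{f}_{b}", cat="Binary")
    for f in bins for b in bins[f]
}
for f in bins:  # each feature takes exactly one bin
    prob += pulp.lpSum(x[f, b] for b in bins[f]) == 1
# Flip the model's decision: total GAM score must cross the threshold.
prob += (
    intercept + pulp.lpSum(bins[f][b] * x[f, b] for f in bins for b in bins[f])
    >= threshold + 1e-3
)
# Objective: cheapest plan for the user.
prob += pulp.lpSum(cost[f][b] * x[f, b] for f in bins for b in bins[f])
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for f in bins:
    new_bin = next(b for b in bins[f] if x[f, b].value() == 1)
    if new_bin != current[f]:
        print(f"change {f}: {current[f]} -> {new_bin}")
```

Here the solver returns the cheapest set of feature changes that flips the prediction; making the costs user-adjustable is what turns this into interactive, personalized recourse.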
Related papers
- Learning Pluralistic User Preferences through Reinforcement Learning Fine-tuned Summaries [13.187789731783095]
We present a novel framework that learns text-based summaries of each user's preferences, characteristics, and past conversations.
These summaries condition the reward model, enabling it to make personalized predictions about the types of responses valued by each user.
We show that our method is robust to new users and diverse conversation topics.
arXiv Detail & Related papers (2025-07-17T23:48:51Z)
- Interactive Reasoning: Visualizing and Controlling Chain-of-Thought Reasoning in Large Language Models [54.85405423240165]
We introduce Interactive Reasoning, an interaction design that visualizes chain-of-thought outputs as a hierarchy of topics.
We implement interactive reasoning in Hippo, a prototype for AI-assisted decision making in the face of uncertain trade-offs.
arXiv Detail & Related papers (2025-06-30T10:00:43Z)
- Creating General User Models from Computer Use [62.91116265732001]
This paper presents an architecture for a general user model (GUM) that learns about you by observing any interaction you have with your computer.
The GUM takes as input any unstructured observation of a user (e.g., device screenshots) and constructs confidence-weighted propositions that capture user knowledge and preferences.
arXiv Detail & Related papers (2025-05-16T04:00:31Z)
- Know Me, Respond to Me: Benchmarking LLMs for Dynamic User Profiling and Personalized Responses at Scale [51.9706400130481]
Large Language Models (LLMs) have emerged as personalized assistants for users across a wide range of tasks.
PERSONAMEM features curated user profiles with over 180 simulated user-LLM interaction histories.
We evaluate LLM chatbots' ability to identify the most suitable response according to the current state of the user's profile.
arXiv Detail & Related papers (2025-04-19T08:16:10Z)
- Large Language Model Empowered Recommendation Meets All-domain Continual Pre-Training [60.38082979765664]
CPRec is an All-domain Continual Pre-Training framework for Recommendation.
It holistically aligns LLMs with universal user behaviors through a continual pre-training paradigm.
We conduct experiments on five real-world datasets from two distinct platforms.
arXiv Detail & Related papers (2025-04-11T20:01:25Z)
- XRec: Large Language Models for Explainable Recommendation [5.615321475217167]
We introduce a model-agnostic framework called XRec, which enables Large Language Models to provide explanations for user behaviors in recommender systems.
Our experiments demonstrate XRec's ability to generate comprehensive and meaningful explanations that outperform baseline approaches in explainable recommender systems.
arXiv Detail & Related papers (2024-06-04T14:55:14Z)
- RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z)
- Explainable Active Learning for Preference Elicitation [0.0]
We employ Active Learning (AL) to solve this problem with the objective of maximizing information acquisition with minimal user effort.
AL selects informative data points from a large unlabeled set and queries an oracle to label them.
The system harvests user feedback, given in response to its explanations of the presented items, to update the underlying machine learning (ML) model; a sketch of this loop follows below.
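As a rough illustration of that query-label-update loop, here is a generic active-learning sketch with uncertainty sampling; it is a stand-in under assumed toy data, not the paper's actual preference-elicitation method.

```python
# Minimal active-learning loop with uncertainty sampling.
# Synthetic data; the "oracle" is just the ground-truth labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (X @ true_w + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Seed set with both classes, rest is the unlabeled pool.
labeled = list(np.flatnonzero(y == 1)[:5]) + list(np.flatnonzero(y == 0)[:5])
unlabeled = [i for i in range(500) if i not in set(labeled)]

model = LogisticRegression()
for _ in range(20):  # 20 query rounds
    model.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the pool point closest to p = 0.5.
    proba = model.predict_proba(X[unlabeled])[:, 1]
    i = int(np.argmin(np.abs(proba - 0.5)))
    labeled.append(unlabeled.pop(i))  # oracle labels the queried point

print("accuracy on remaining pool:", model.score(X[unlabeled], y[unlabeled]))
```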
arXiv Detail & Related papers (2023-09-01T09:22:33Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
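One plausible way to realize such an alignment, sketched with Python's built-in ast module and a purely hypothetical token confidence: map each token's character offset to the AST nodes whose source span covers it. This is an assumption-laden illustration, not the ASTxplainer implementation.

```python
# Sketch: align a token prediction with AST nodes via source offsets.
import ast

source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

# Character offset of the start of each source line.
starts = [0]
for line in source.splitlines(keepends=True):
    starts.append(starts[-1] + len(line))

def covering_nodes(char_pos: int):
    """All AST nodes whose source span contains the character offset."""
    hits = []
    for node in ast.walk(tree):
        if not hasattr(node, "lineno") or node.end_lineno is None:
            continue  # skip nodes without source positions
        lo = starts[node.lineno - 1] + node.col_offset
        hi = starts[node.end_lineno - 1] + node.end_col_offset
        if lo <= char_pos < hi:
            hits.append(type(node).__name__)
    return hits

# Hypothetical per-token confidence from a code LLM, keyed by the
# character offset where the token starts.
token_offset, token_confidence = source.index("a + b"), 0.91
print(covering_nodes(token_offset), token_confidence)
# -> ['FunctionDef', 'Return', 'BinOp', 'Name'] 0.91
```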
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders, however, lack a higher-level understanding of user intents, which often drive user behavior online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- User Driven Model Adjustment via Boolean Rule Explanations [7.814304432499296]
We present a solution which leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries.
Our interactive overlay approach achieves this goal without requiring model retraining.
We demonstrate that user feedback rules can be layered on top of the ML predictions to provide immediate changes, which in turn supports learning with less data; a sketch of the idea follows below.
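A minimal sketch of that overlay idea, with an illustrative rule format that is not taken from the paper: user rules are checked first, and the frozen model is consulted only when none fire, so no retraining is needed.

```python
# Rule overlay sketch: user-specified boolean rules take precedence
# over a frozen ML model. Rule semantics and names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # boolean test over input features
    label: int                         # label to force when the rule fires

def overlay_predict(x: dict, model_predict, rules: list[Rule]) -> int:
    for rule in rules:                 # first matching rule wins
        if rule.condition(x):
            return rule.label
    return model_predict(x)            # otherwise defer to the model

# Example: the user insists applicants with income above 80k are approved.
rules = [Rule(condition=lambda x: x["income"] > 80_000, label=1)]
frozen_model = lambda x: 0             # stand-in for the trained model
print(overlay_predict({"income": 90_000}, frozen_model, rules))  # -> 1
```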
arXiv Detail & Related papers (2022-03-28T20:27:02Z)
- GAM Changer: Editing Generalized Additive Models with Interactive Visualization [28.77745864749409]
We present GAM Changer, an open-source interactive system that helps data scientists easily and responsibly edit their Generalized Additive Models (GAMs).
With novel visualization techniques, our tool puts interpretability into action -- empowering human users to analyze, validate, and align model behaviors with their knowledge and values.
arXiv Detail & Related papers (2021-12-06T18:51:49Z)
- Hyper Meta-Path Contrastive Learning for Multi-Behavior Recommendation [61.114580368455236]
User purchasing prediction with multi-behavior information remains a challenging problem for current recommendation systems.
We propose the concept of hyper meta-paths and hyper meta-graphs to explicitly capture the dependencies among a user's different behaviors.
Building on the recent success of graph contrastive learning, we use it to learn embeddings of user behavior patterns adaptively, rather than assigning a fixed scheme for modeling the dependencies among behaviors.
arXiv Detail & Related papers (2021-09-07T04:28:09Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
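In standard notation (assumed here, not quoted from the paper), with user knowledge U, explanation E, and prediction Ŷ, that quantity is the conditional mutual information:

```latex
I(E; \hat{Y} \mid U)
  = \mathbb{E}\!\left[\,\log \frac{p(E, \hat{Y} \mid U)}{p(E \mid U)\,p(\hat{Y} \mid U)}\right]
```

Intuitively, an explanation is effective to the extent that, given what the user already knows, it reduces their uncertainty about the prediction.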
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here (including all of its content) and is not responsible for any consequences of its use.