Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions
- URL: http://arxiv.org/abs/2111.01235v1
- Date: Mon, 1 Nov 2021 19:49:35 GMT
- Title: Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions
- Authors: Prateek Yadav, Peter Hase, Mohit Bansal
- Abstract summary: We formalize the notion of user-specific cost functions and introduce a new method for identifying actionable recourses for users.
Our method satisfies up to 25.89 percentage points more users compared to strong baseline methods.
- Score: 74.00030431081751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of identifying algorithmic recourse for people affected by
machine learning model decisions has received much attention recently. Some
recent works model user-incurred cost, which is directly linked to user
satisfaction. But they assume a single global cost function that is shared
across all users. This is an unrealistic assumption when users have dissimilar
preferences about their willingness to act upon a feature and different costs
associated with changing that feature. In this work, we formalize the notion of
user-specific cost functions and introduce a new method for identifying
actionable recourses for users. By default, we assume that users' cost
functions are hidden from the recourse method, though our framework allows
users to partially or completely specify their preferences or cost function. We
propose an objective function, Expected Minimum Cost (EMC), based on two key
ideas: (1) when presenting a set of options to a user, it is vital that there
is at least one low-cost solution the user could adopt; (2) when we do not know
the user's true cost function, we can approximately optimize for user
satisfaction by first sampling plausible cost functions, then finding a set
that achieves a good cost for the user in expectation. We optimize EMC with a
novel discrete optimization algorithm, Cost-Optimized Local Search (COLS),
which is guaranteed to improve the recourse set quality over iterations.
Experimental evaluation on popular real-world datasets with simulated user
costs demonstrates that our method satisfies up to 25.89 percentage points more
users compared to strong baseline methods. Using standard fairness metrics, we
also show that our method can provide more fair solutions across demographic
groups than comparable methods, and we verify that our method is robust to
misspecification of the cost function distribution.
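
The abstract describes EMC and COLS only at a high level. The sketch below is a minimal illustration, not the authors' implementation: it assumes linear cost functions given by sampled per-feature weights, scores a candidate recourse set by the average, over samples, of its cheapest option, and improves the set with a generic swap-based hill climb standing in for COLS.

```python
import numpy as np

def expected_minimum_cost(recourse_set, user_features, cost_samples):
    """Expected Minimum Cost (EMC) of a candidate recourse set.

    recourse_set  : (k, d) array, the k feature vectors offered to the user
    user_features : (d,) array, the user's current features
    cost_samples  : (m, d) array of sampled per-feature cost weights, standing
                    in for plausible (hidden) user cost functions
    Returns the average, over cost samples, of the cheapest option in the set.
    """
    deltas = np.abs(recourse_set - user_features)   # (k, d) required feature changes
    costs = cost_samples @ deltas.T                 # (m, k) cost of each option per sample
    return costs.min(axis=1).mean()                 # expectation of the per-sample minimum

def swap_local_search(candidates, user_features, cost_samples,
                      set_size=3, iters=200, seed=0):
    """Generic hill climb over recourse sets: propose a single-element swap and
    keep it only if EMC improves, so set quality never degrades. This is a
    stand-in for the paper's COLS algorithm, not a reimplementation of it."""
    rng = np.random.default_rng(seed)
    current = list(rng.choice(len(candidates), size=set_size, replace=False))
    best = expected_minimum_cost(candidates[current], user_features, cost_samples)
    for _ in range(iters):
        out_pos = rng.integers(set_size)            # member to swap out
        new_idx = rng.integers(len(candidates))     # candidate to swap in
        if new_idx in current:
            continue
        proposal = current.copy()
        proposal[out_pos] = new_idx
        score = expected_minimum_cost(candidates[proposal], user_features, cost_samples)
        if score < best:                            # accept improving moves only
            current, best = proposal, score
    return candidates[current], best
```

Accepting only improving swaps mirrors the monotone-improvement property claimed for COLS in the abstract; the swap neighborhood, the sampling distribution over cost weights, and the linear cost form are all assumptions made for illustration.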
Related papers
- Quantifying User Coherence: A Unified Framework for Cross-Domain Recommendation Analysis [69.37718774071793]
This paper introduces novel information-theoretic measures for understanding recommender systems.
We evaluate 7 recommendation algorithms across 9 datasets, revealing the relationships between our measures and standard performance metrics.
arXiv Detail & Related papers (2024-10-03T13:02:07Z)
- Learning Recourse Costs from Pairwise Feature Comparisons [22.629956883958076]
This paper presents a novel technique for incorporating user input when learning and inferring user preferences.
We propose the use of the Bradley-Terry model to automatically infer feature-wise costs using non-exhaustive human comparison surveys.
We demonstrate the efficient learning of individual feature costs using MAP estimates, and show that these non-exhaustive human surveys are sufficient to learn an exhaustive set of feature costs.
arXiv Detail & Related papers (2024-09-20T23:04:08Z)
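
The entry above names the Bradley-Terry model but not the fitting procedure. As a rough, hedged illustration (not that paper's code), the sketch below recovers per-feature cost scores from pairwise "feature A is harder to change than feature B" judgments with a MAP-style estimate; the Gaussian prior strength, learning rate, and data format are assumptions.

```python
import numpy as np

def fit_bradley_terry_costs(num_features, comparisons, prior_strength=1.0,
                            lr=0.05, epochs=1000):
    """MAP-style fit of per-feature cost scores from pairwise comparisons.

    comparisons : list of (harder, easier) index pairs, i.e. the respondent
                  judged feature `harder` costlier to change than `easier`.
    Bradley-Terry model: P(harder beats easier) = sigmoid(c_harder - c_easier).
    A Gaussian prior on the scores acts as the MAP regularizer.
    """
    c = np.zeros(num_features)                        # latent log-cost scores
    for _ in range(epochs):
        grad = prior_strength * c                     # gradient of the Gaussian prior
        for h, e in comparisons:
            p = 1.0 / (1.0 + np.exp(-(c[h] - c[e])))  # P(h judged costlier than e)
            grad[h] -= 1.0 - p                        # negative log-likelihood gradient
            grad[e] += 1.0 - p
        c -= lr * grad                                # gradient step toward the MAP estimate
    return np.exp(c)                                  # map log-scores to positive costs

# Example: three features, a few survey answers saying feature 0 is hardest to change.
costs = fit_bradley_terry_costs(3, [(0, 1), (0, 2), (1, 2)])
```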
- User-Level Differential Privacy With Few Examples Per User [73.81862394073308]
We consider the example-scarce regime, where each user has only a few examples, and obtain the following results.
For approximate-DP, we give a generic transformation of any item-level DP algorithm to a user-level DP algorithm.
We present a simple technique for adapting the exponential mechanism [McSherry, Talwar FOCS 2007] to the user-level setting.
arXiv Detail & Related papers (2023-09-21T21:51:55Z)
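
The entry above refers to the exponential mechanism of McSherry and Talwar. For readers who have not seen it, the sketch below shows only the standard item-level mechanism with a placeholder sensitivity argument; the user-level adaptation that the paper contributes is not reproduced here.

```python
import numpy as np

def exponential_mechanism(candidates, scores, epsilon, sensitivity=1.0, seed=0):
    """Standard exponential mechanism: sample a candidate with probability
    proportional to exp(epsilon * score / (2 * sensitivity)).

    `scores` must come from a scoring function over the private data whose
    sensitivity (max change when one record changes) is `sensitivity`.
    User-level privacy instead requires sensitivity with respect to changing a
    user's *entire* contribution, which is the harder part handled in the paper.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()                    # stabilize before exponentiating
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]
```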
- Towards User Guided Actionable Recourse [5.669106489320257]
Actionable Recourse (AR) describes recommendations of cost-efficient changes to a user's actionable features to help them obtain favorable outcomes.
We propose a gradient-based approach to identify User Preferred Actionable Recourse (UP-AR).
arXiv Detail & Related papers (2023-09-05T18:06:09Z)
- Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback [76.7007545844273]
We propose a multi-objective decision making framework that accommodates different user preferences over objectives.
Our model consists of a Markov decision process with a vector-valued reward function, with each user having an unknown preference vector.
We suggest an algorithm that finds a nearly optimal policy for the user using a small number of comparison queries.
arXiv Detail & Related papers (2023-02-07T23:58:19Z)
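
As a loose illustration of comparison-based preference elicitation over vector-valued returns (not the query-efficient algorithm from the paper above), the sketch below keeps sampled candidate preference vectors that remain consistent with a user's answers to pairwise policy comparisons; the linear scalarization, Dirichlet sampling, and random query selection are assumptions.

```python
import numpy as np

def compare(returns_a, returns_b, true_w):
    """Simulated user answer: is policy A's scalarized return at least as high
    as policy B's under the user's hidden preference vector `true_w`?"""
    return 1 if true_w @ returns_a >= true_w @ returns_b else 0

def elicit_preference(policy_returns, true_w, num_queries=10, num_samples=5000, seed=0):
    """Rejection-style elicitation: sample candidate preference vectors on the
    simplex and keep only those consistent with each comparison answer.
    Returns the mean of the surviving samples as a rough preference estimate."""
    rng = np.random.default_rng(seed)
    d = policy_returns.shape[1]
    candidates = rng.dirichlet(np.ones(d), size=num_samples)   # candidate preference vectors
    for _ in range(num_queries):
        a, b = rng.choice(len(policy_returns), size=2, replace=False)
        answer = compare(policy_returns[a], policy_returns[b], true_w)
        diff = policy_returns[a] - policy_returns[b]
        consistent = (candidates @ diff >= 0) == bool(answer)  # keep consistent candidates
        if consistent.any():                                   # avoid filtering out everything
            candidates = candidates[consistent]
    return candidates.mean(axis=0)                             # point estimate of the preference vector
```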
- Personalized Algorithmic Recourse with Preference Elicitation [20.78332455864586]
We introduce PEAR, the first human-in-the-loop approach capable of providing personalized algorithmic recourse tailored to the needs of any end-user.
PEAR builds on insights from Bayesian Preference Elicitation to iteratively refine an estimate of the costs of actions by asking choice set queries to the target user.
Our empirical evaluation on real-world datasets highlights how PEAR produces high-quality personalized recourse in only a handful of iterations.
arXiv Detail & Related papers (2022-05-27T03:12:18Z)
- Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs [28.254408148839644]
We propose a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous evaluation costs.
Our acquisition function outperforms existing methods in a variety of synthetic and real problems.
arXiv Detail & Related papers (2021-11-12T02:18:26Z)
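
The acquisition function in the paper above is non-myopic and is not reproduced here. To make "heterogeneous evaluation costs" concrete, the sketch below shows the simpler, commonly used myopic baseline of expected improvement divided by predicted cost (for minimization), with the posterior mean, standard deviation, and cost predictions assumed to come from a surrogate model.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement_per_cost(mu, sigma, cost, best_observed):
    """Myopic cost-aware acquisition: classical expected improvement divided by
    the predicted evaluation cost of each candidate (minimization setting).
    This is the EI-per-unit-cost baseline, not the non-myopic acquisition
    proposed in the paper above.

    mu, sigma : posterior mean / std of the objective at each candidate
    cost      : predicted evaluation cost of each candidate
    """
    mu, sigma, cost = map(np.asarray, (mu, sigma, cost))
    improvement = best_observed - mu                      # improvement over the incumbent
    z = improvement / np.maximum(sigma, 1e-12)
    ei = improvement * norm.cdf(z) + sigma * norm.pdf(z)  # classical EI for minimization
    return np.maximum(ei, 0.0) / np.maximum(cost, 1e-12)  # discount by evaluation cost
```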
- Linear Speedup in Personalized Collaborative Learning [69.45124829480106]
Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias.
We formalize the personalized collaborative learning problem as optimization of a user's objective.
We explore conditions under which we can optimally trade off their bias for a reduction in variance.
arXiv Detail & Related papers (2021-11-10T22:12:52Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Active Preference Learning using Maximum Regret [10.317601896290467]
We study active preference learning as a framework for intuitively specifying the behaviour of autonomous robots.
In active preference learning, a user chooses the preferred behaviour from a set of alternatives, from which the robot learns the user's preferences.
arXiv Detail & Related papers (2020-05-08T14:31:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.