Improving Fairness in Adaptive Social Exergames via Shapley Bandits
- URL: http://arxiv.org/abs/2302.09298v2
- Date: Tue, 21 Feb 2023 14:36:14 GMT
- Title: Improving Fairness in Adaptive Social Exergames via Shapley Bandits
- Authors: Robert C. Gray, Jennifer Villareale, Thomas B. Fox, Diane H. Dallal,
Santiago Ontañón, Danielle Arigo, Shahin Jabbari, Jichen Zhu
- Abstract summary: We propose a new type of fairness-aware multi-armed bandit, Shapley Bandits.
It uses the Shapley Value to increase overall player participation and intervention adherence rather than total group output.
Our results indicate that our Shapley Bandits approach effectively mitigates the Greedy Bandit Problem and achieves better user retention and motivation across participants.
- Score: 7.215807283769683
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Algorithmic fairness is an essential requirement as AI becomes integrated in
society. In the case of social applications where AI distributes resources,
algorithms often must make decisions that will benefit a subset of users,
sometimes repeatedly or exclusively, while attempting to maximize specific
outcomes. How should we design such systems to serve users more fairly? This
paper explores this question in the case where a group of users works toward a
shared goal in a social exergame called Step Heroes. We identify adverse
outcomes in traditional multi-armed bandits (MABs) and formalize the Greedy
Bandit Problem. We then propose a solution based on a new type of
fairness-aware multi-armed bandit, Shapley Bandits. It uses the Shapley Value
to increase overall player participation and intervention adherence rather than
to maximize total group output, which traditional approaches achieve by
favoring only high-performing participants. We evaluate our approach via a user
study (n=46). Our results indicate that our Shapley Bandits approach effectively
mitigates the Greedy Bandit Problem and achieves better user retention and
motivation across participants.
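To make the allocation idea concrete, below is a minimal, hypothetical Python sketch of a Shapley-value-weighted bandit. It is not the paper's implementation: the player names, the capped step-count coalition value, and the deficit-based selection rule are illustrative assumptions, and the Shapley Bandits used in Step Heroes may estimate and allocate differently.

```python
import itertools
import math


def shapley_values(players, coalition_value):
    # Exact Shapley values by averaging each player's marginal contribution
    # over every ordering of the group (only tractable for small groups).
    values = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition, prev = [], coalition_value([])
        for p in order:
            coalition.append(p)
            curr = coalition_value(coalition)
            values[p] += curr - prev
            prev = curr
    n_orderings = math.factorial(len(players))
    return {p: v / n_orderings for p, v in values.items()}


class ShapleyWeightedBandit:
    # Hypothetical fairness-aware bandit: each round it targets the player
    # whose share of past interventions lags furthest behind the share of
    # the group goal that the Shapley value attributes to them.

    def __init__(self, players):
        self.players = list(players)
        self.pulls = {p: 0 for p in self.players}

    def select(self, coalition_value):
        shap = shapley_values(self.players, coalition_value)
        total_shap = sum(shap.values()) or 1.0
        total_pulls = sum(self.pulls.values()) or 1
        deficit = {
            p: shap[p] / total_shap - self.pulls[p] / total_pulls
            for p in self.players
        }
        chosen = max(deficit, key=deficit.get)
        self.pulls[chosen] += 1
        return chosen


# Toy usage: the group goal is a capped daily step total, and a coalition's
# value is how much of that goal its members' steps cover.
steps = {"A": 9000, "B": 4000, "C": 2000}


def value(coalition):
    return min(sum(steps[p] for p in coalition), 12000)


bandit = ShapleyWeightedBandit(steps)
print([bandit.select(value) for _ in range(6)])
```

Under this toy rule a high performer is not selected repeatedly: once a player's share of interventions catches up with the share of the group goal the Shapley value attributes to them, the bandit turns to under-served players, avoiding the repeated favoring of high performers that characterizes the Greedy Bandit Problem.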
Related papers
- Neural Dueling Bandits [58.90189511247936]
We use a neural network to estimate the reward function using preference feedback for the previously selected arms.
We then extend our theoretical results to contextual bandit problems with binary feedback, which is in itself a non-trivial contribution.
arXiv Detail & Related papers (2024-07-24T09:23:22Z) - $\alpha$-Fair Contextual Bandits [10.74025233418392]
Contextual bandit algorithms are at the core of many applications, including recommender systems, clinical trials, and optimal portfolio selection.
One of the most popular problems studied in the contextual bandit literature is to maximize the sum of the rewards in each round.
In this paper, we consider the $\alpha$-Fair Contextual Bandits problem, where the objective is to maximize the global $\alpha$-fair utility function.
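For reference, the $\alpha$-fair utility standard in the fairness literature (the paper's exact formulation may differ in detail) is

```latex
u_\alpha(x) =
  \begin{cases}
    \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1,\\
    \log x,                         & \alpha = 1,
  \end{cases}
```

so $\alpha = 0$ recovers the plain sum of rewards, $\alpha = 1$ gives proportional fairness, and $\alpha \to \infty$ approaches max-min fairness.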
arXiv Detail & Related papers (2023-10-22T03:42:59Z) - Equitable Restless Multi-Armed Bandits: A General Framework Inspired By
Digital Health [23.762981395335217]
Restless multi-armed bandits (RMABs) are a popular framework for algorithmic decision making in sequential settings with limited resources.
RMABs are increasingly being used for sensitive decisions such as in public health, treatment scheduling, anti-poaching, and -- the motivation for this work -- digital health.
We study equitable objectives for RMABs for the first time. We consider two equity-aligned objectives from the fairness literature, minimax reward and max Nash welfare.
We develop efficient algorithms for solving each -- a water filling algorithm for the former, and a greedy algorithm with theoretically motivated nuance to balance disparate group sizes for the latter.
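As a rough, generic illustration of water filling for a minimax objective (not the paper's RMAB-specific algorithm; all names and parameters below are assumptions), the budget is repeatedly given to whichever group currently sits at the lowest level:

```python
def water_fill(baseline, gain_per_unit, budget, step=1.0):
    # Greedy water filling: give one unit of resource at a time to the group
    # with the lowest current level, so the minimum is raised first.
    # baseline[i] is group i's reward with no resource; gain_per_unit[i] is
    # how much one unit of resource raises it. All names are illustrative.
    levels = list(baseline)
    allocation = [0.0] * len(baseline)
    remaining = budget
    while remaining >= step:
        worst = min(range(len(levels)), key=lambda i: levels[i])
        allocation[worst] += step
        levels[worst] += gain_per_unit[worst] * step
        remaining -= step
    return allocation, levels


# Toy usage: three groups, ten units of intervention budget.
print(water_fill(baseline=[0.2, 0.5, 0.8],
                 gain_per_unit=[0.05, 0.04, 0.03],
                 budget=10))
```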
arXiv Detail & Related papers (2023-08-17T13:00:27Z) - Bandit Social Learning: Exploration under Myopic Behavior [58.75758600464338]
We study social learning dynamics motivated by reviews on online platforms.
Agents collectively follow a simple multi-armed bandit protocol, but each agent acts myopically, without regard for exploration.
We derive stark learning failures for any such behavior, and provide matching positive results.
arXiv Detail & Related papers (2023-02-15T01:57:57Z) - Incentivizing Combinatorial Bandit Exploration [87.08827496301839]
Consider a bandit algorithm that recommends actions to self-interested users in a recommendation system.
Users are free to choose other actions and need to be incentivized to follow the algorithm's recommendations.
While the users prefer to exploit, the algorithm can incentivize them to explore by leveraging the information collected from the previous users.
arXiv Detail & Related papers (2022-06-01T13:46:25Z) - Learning Equilibria in Matching Markets from Bandit Feedback [139.29934476625488]
We develop a framework and algorithms for learning stable market outcomes under uncertainty.
Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
arXiv Detail & Related papers (2021-08-19T17:59:28Z) - Adaptive Algorithms for Multi-armed Bandit with Composite and Anonymous
Feedback [32.62857394584907]
We study the multi-armed bandit (MAB) problem with composite and anonymous feedback.
We propose adaptive algorithms for both the adversarial and non-adversarial cases.
arXiv Detail & Related papers (2020-12-13T12:25:41Z) - Bandits Under The Influence (Extended Version) [14.829802725813868]
We present online recommendation algorithms rooted in the linear multi-armed bandit literature.
Our bandit algorithms are tailored precisely to recommendation scenarios where user interests evolve under social influence.
arXiv Detail & Related papers (2020-09-21T19:02:00Z) - Partial Bandit and Semi-Bandit: Making the Most Out of Scarce Users'
Feedback [62.997667081978825]
We present a novel approach for considering user feedback and evaluate it using three distinct strategies.
Despite the limited amount of feedback returned by users (as low as 20% of the total), our approach obtains results similar to those of state-of-the-art approaches.
arXiv Detail & Related papers (2020-09-16T07:32:51Z) - Competing Bandits: The Perils of Exploration Under Competition [99.68537519404727]
We study the interplay between exploration and competition on online platforms.
We find that stark competition induces firms to commit to a "greedy" bandit algorithm that leads to low welfare.
We investigate two channels for weakening the competition: relaxing the rationality of users and giving one firm a first-mover advantage.
arXiv Detail & Related papers (2020-07-20T14:19:08Z) - Selfish Robustness and Equilibria in Multi-Player Bandits [25.67398941667429]
In a game, several players simultaneously pull arms and encounter a collision - with 0 reward - if some of them pull the same arm at the same time.
While the cooperative case where players maximize the collective reward has been mostly considered, robustness to malicious players is a crucial and challenging concern.
We shall consider instead the more natural class of selfish players whose incentives are to maximize their individual rewards, potentially at the expense of the social welfare.
arXiv Detail & Related papers (2020-02-04T09:50:28Z)