Achieving User-Side Fairness in Contextual Bandits
- URL: http://arxiv.org/abs/2010.12102v1
- Date: Thu, 22 Oct 2020 22:58:25 GMT
- Title: Achieving User-Side Fairness in Contextual Bandits
- Authors: Wen Huang and Kevin Labille and Xintao Wu and Dongwon Lee and Neil
Heffernan
- Abstract summary: We study how to achieve user-side fairness in personalized recommendation.
We formulate our fair personalized recommendation as a modified contextual bandit.
We develop a fair contextual bandit algorithm, Fair-LinUCB, that improves upon the traditional LinUCB algorithm.
- Score: 17.947543703195738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized recommendation based on multi-armed bandit (MAB)
algorithms has been shown to lead to high utility and efficiency, as it can
dynamically adapt the recommendation strategy based on feedback. However,
unfairness can arise in personalized recommendation. In this paper, we study
how to achieve user-side
fairness in personalized recommendation. We formulate our fair personalized
recommendation as a modified contextual bandit and focus on achieving fairness
on the individual who is being recommended an item, as opposed to achieving
fairness on the items that are being recommended. We introduce and define a
metric that captures the fairness in terms of rewards received for both the
privileged and protected groups. We develop a fair contextual bandit algorithm,
Fair-LinUCB, that improves upon the traditional LinUCB algorithm to achieve
group-level fairness of users. Our algorithm detects and monitors unfairness
while it learns to recommend personalized videos to students to achieve high
efficiency. We provide a theoretical regret analysis and show that our
algorithm has a slightly higher regret bound than LinUCB. We conduct numerous
experimental evaluations to compare the performance of our fair contextual
bandit with that of LinUCB, and show that our approach achieves group-level
fairness while maintaining high utility.
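The abstract names two pieces that lend themselves to illustration: a group-level metric on received rewards and a LinUCB variant that reacts to it. Below is a minimal, hypothetical Python sketch of disjoint LinUCB extended with a running reward-gap monitor. The class name `FairLinUCBSketch`, the weight `gamma`, and the specific adjustment (inflating the exploration coefficient for the disadvantaged group) are assumptions for illustration, not the paper's actual Fair-LinUCB update rule.

```python
import numpy as np

class FairLinUCBSketch:
    """Disjoint LinUCB plus a group reward-gap monitor (illustrative only).

    The fairness adjustment is a hypothetical stand-in for the paper's
    Fair-LinUCB mechanism: when the privileged group (1) has been receiving
    more reward than the protected group (0), exploration is boosted for
    protected-group users.
    """

    def __init__(self, n_arms, dim, alpha=1.0, gamma=0.5):
        self.alpha = alpha  # standard LinUCB exploration strength
        self.gamma = gamma  # hypothetical fairness weight (assumption)
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors
        self.group_sum = np.zeros(2)  # cumulative reward per group
        self.group_cnt = np.zeros(2)  # number of pulls per group

    def reward_gap(self):
        """Mean reward of the privileged group minus the protected group."""
        means = self.group_sum / np.maximum(self.group_cnt, 1.0)
        return means[1] - means[0]

    def select(self, x, group):
        """Choose an arm for context vector x (shape (dim,)) and group in {0, 1}."""
        alpha = self.alpha
        gap = self.reward_gap()
        if group == 0 and gap > 0:
            # Explore harder for the disadvantaged group (illustrative rule);
            # scaling alpha changes the ranking because the exploration bonus
            # differs across arms.
            alpha += self.gamma * gap
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # ridge-regression estimate of the arm's weights
            scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward, group):
        """Standard LinUCB update plus group-level reward bookkeeping."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
        self.group_sum[group] += reward
        self.group_cnt[group] += 1
```

A typical loop would call `select(x, group)`, observe a reward, then call `update(arm, x, reward, group)`; the `reward_gap` monitor plays the role of the abstract's group-level metric on rewards received by the privileged and protected groups.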
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Is ChatGPT Fair for Recommendation? Evaluating Fairness in Large Language Model Recommendation [52.62492168507781]
We propose a novel benchmark called Fairness of Recommendation via LLM (FaiRLLM).
This benchmark comprises carefully crafted metrics and a dataset that accounts for eight sensitive attributes.
By utilizing our FaiRLLM benchmark, we conducted an evaluation of ChatGPT and discovered that it still exhibits unfairness to some sensitive attributes when generating recommendations.
arXiv Detail & Related papers (2023-05-12T16:54:36Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Equal Experience in Recommender Systems [21.298427869586686]
We introduce a novel fairness notion (that we call equal experience) to regulate unfairness in the presence of biased data.
We propose an optimization framework that incorporates the fairness notion as a regularization term, and introduce computationally efficient algorithms that solve the optimization.
arXiv Detail & Related papers (2022-10-12T05:53:05Z)
- Experiments on Generalizability of User-Oriented Fairness in Recommender Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework from both user- (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics.
arXiv Detail & Related papers (2022-05-17T12:36:30Z)
- Achieving Counterfactual Fairness for Causal Bandit [18.077963117600785]
We study how to recommend an item at each step to maximize the expected reward.
We then propose the fair causal bandit (F-UCB) for achieving counterfactual individual fairness.
arXiv Detail & Related papers (2021-09-21T23:44:48Z)
- Balancing Accuracy and Fairness for Interactive Recommendation with Reinforcement Learning [68.25805655688876]
Fairness in recommendation has attracted increasing attention due to bias and discrimination possibly caused by traditional recommenders.
We propose a reinforcement learning based framework, FairRec, to dynamically maintain a long-term balance between accuracy and fairness in interactive recommender systems (IRS).
Extensive experiments validate that FairRec can improve fairness, while preserving good recommendation quality.
arXiv Detail & Related papers (2021-06-25T02:02:51Z)
- Towards Personalized Fairness based on Causal Notion [18.5897206797918]
We introduce a framework for achieving counterfactually fair recommendations through adversarial learning.
Our method generates fairer recommendations for users while maintaining desirable recommendation performance.
arXiv Detail & Related papers (2021-05-20T15:24:34Z)
- DeepFair: Deep Learning for Improving Fairness in Recommender Systems [63.732639864601914]
The lack of bias management in Recommender Systems leads to minority groups receiving unfair recommendations.
We propose a Deep Learning based Collaborative Filtering algorithm that provides recommendations with an optimum balance between fairness and accuracy without knowing demographic information about the users.
arXiv Detail & Related papers (2020-06-09T13:39:38Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations due to insufficient training data for these users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.