Producer-Fairness in Sequential Bundle Recommendation
- URL: http://arxiv.org/abs/2506.20329v1
- Date: Wed, 25 Jun 2025 11:24:52 GMT
- Title: Producer-Fairness in Sequential Bundle Recommendation
- Authors: Alexandre Rio, Marta Soare, Sihem Amer-Yahia
- Abstract summary: We formalize producer-fairness, which seeks to achieve desired exposure of different item groups across users in a recommendation session. We propose an exact solution that caters to small instances of our problem. We then examine two heuristics, quality-first and fairness-first, and an adaptive variant that determines on-the-fly the right balance between bundle fairness and quality.
- Score: 62.22091013241362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address fairness in the context of sequential bundle recommendation, where users are served in turn with sets of relevant and compatible items. Motivated by real-world scenarios, we formalize producer-fairness, which seeks to achieve desired exposure of different item groups across users in a recommendation session. Our formulation combines naturally with building high-quality bundles. Our problem is solved in real time as users arrive. We propose an exact solution that caters to small instances of our problem. We then examine two heuristics, quality-first and fairness-first, and an adaptive variant that determines on-the-fly the right balance between bundle fairness and quality. Our experiments on three real-world datasets underscore the strengths and limitations of each solution and demonstrate their efficacy in providing fair bundle recommendations without compromising bundle quality.
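The abstract's quality-first/fairness-first trade-off can be illustrated with a small sketch. This is not the authors' algorithm: the scoring rule, variable names, and the single weight `alpha` blending bundle relevance with group-exposure deficit are all illustrative assumptions; the paper's exact formulation and adaptive variant differ.

```python
from itertools import combinations

def pick_bundle(items, scores, groups, exposure, targets, k, alpha):
    """Hypothetical sketch: choose a bundle of k items for one arriving user.

    items     -- candidate item ids
    scores    -- scores[i]: relevance of item i (assumed given)
    groups    -- groups[i]: producer group of item i
    exposure  -- exposure[g]: exposures granted to group g so far
    targets   -- targets[g]: desired exposure share for group g
    alpha     -- 0.0 = quality-first, 1.0 = fairness-first (assumption)
    """
    total = sum(exposure.values()) or 1  # avoid division by zero at session start

    def bundle_value(bundle):
        quality = sum(scores[i] for i in bundle)
        # Reward items whose group is below its target exposure share.
        deficit = sum(max(0.0, targets[groups[i]] - exposure[groups[i]] / total)
                      for i in bundle)
        return (1 - alpha) * quality + alpha * deficit

    best = max(combinations(items, k), key=bundle_value)
    for i in best:                      # update running exposure counts
        exposure[groups[i]] += 1
    return set(best)
```

With `alpha=0` the sketch simply takes the k highest-scoring items; with `alpha=1` it favors items from under-exposed groups regardless of score, mirroring the two extremes the paper's adaptive variant interpolates between.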
Related papers
- A Reproducibility Study of Product-side Fairness in Bundle Recommendation [50.09508982179837]
We study product-side fairness in bundle recommendation (BR) across three real-world datasets. Our results show that exposure patterns differ notably between bundles and items, revealing the need for fairness interventions. We also find that fairness assessments vary considerably depending on the metric used, reinforcing the need for multi-faceted evaluation.
arXiv Detail & Related papers (2025-07-18T20:06:39Z) - FairSort: Learning to Fair Rank for Personalized Recommendations in Two-Sided Platforms [14.423710571021433]
This paper proposes a re-ranking model, FairSort, to find a trade-off solution among user-side fairness, provider-side fairness, and personalized recommendation utility. We show that FairSort can ensure more reliable personalized recommendations while considering fairness for both the provider and user.
arXiv Detail & Related papers (2024-11-30T10:30:49Z) - DifFaiRec: Generative Fair Recommender with Conditional Diffusion Model [22.653890395053207]
We propose a novel recommendation algorithm named Diffusion-based Fair Recommender (DifFaiRec) to provide fair recommendations.
DifFaiRec is built upon the conditional diffusion model and hence has a strong ability to learn the distribution of user preferences from their ratings on items.
To guarantee fairness, we design a counterfactual module to reduce the model sensitivity to protected attributes and provide mathematical explanations.
arXiv Detail & Related papers (2024-09-18T07:39:33Z) - Emulating Full Participation: An Effective and Fair Client Selection Strategy for Federated Learning [50.060154488277036]
In federated learning, client selection is a critical problem that significantly impacts both model performance and fairness. We propose two guiding principles that tackle the inherent conflict between the two metrics while reinforcing each other. Our approach adaptively enhances this diversity by selecting clients based on their data distributions, thereby improving both model performance and fairness.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Individual Fairness under Varied Notions of Group Fairness in Bipartite Matching - One Framework to Approximate Them All [1.9963683296786414]
We study the assignment of items to platforms that satisfies both group and individual fairness constraints.
Our approach explores a "best of both worlds" fairness solution to obtain a randomized matching.
We present two additional approximation algorithms that users can choose from to balance group fairness and individual fairness trade-offs.
arXiv Detail & Related papers (2022-08-21T19:33:36Z) - A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems [7.3129791870997085]
We introduce FairMatch, a graph-based algorithm that improves exposure fairness for items and suppliers.
A comprehensive set of experiments on two datasets and comparison with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of relevance in the recommendations.
arXiv Detail & Related papers (2021-07-07T18:01:26Z) - Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of the implicit feedback and propose Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender system.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z) - Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.