Recommending With, Not For: Co-Designing Recommender Systems for Social Good
- URL: http://arxiv.org/abs/2508.03792v1
- Date: Tue, 05 Aug 2025 17:50:39 GMT
- Title: Recommending With, Not For: Co-Designing Recommender Systems for Social Good
- Authors: Michael D. Ekstrand, Afsaneh Razi, Aleksandra Sarcevic, Maria Soledad Pera, Robin Burke, Katherine Landau Wright
- Abstract summary: We argue that recommender systems aimed at improving social good should be designed *by* and *with*, not just *for*, the people who will experience their benefits and harms. Recommender systems should be designed in collaboration with their users, creators, and other stakeholders as full co-designers.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recommender systems are usually designed by engineers, researchers, designers, and other members of development teams. These systems are then evaluated based on goals set by the aforementioned teams and other business units of the platforms operating the recommender systems. This design approach emphasizes the designers' vision for how the system can best serve the interests of users, providers, businesses, and other stakeholders. Although designers may be well-informed about user needs through user experience and market research, they are still the arbiters of the system's design and evaluation, with other stakeholders' interests less emphasized in user-centered design and evaluation. When extended to recommender systems for social good, this approach results in systems that reflect the social objectives as envisioned by the designers and evaluated as the designers understand them. Instead, social goals and operationalizations should be developed through participatory and democratic processes that are accountable to their stakeholders. We argue that recommender systems aimed at improving social good should be designed *by* and *with*, not just *for*, the people who will experience their benefits and harms. That is, they should be designed in collaboration with their users, creators, and other stakeholders as full co-designers, not only as user study participants.
Related papers
- RecGPT Technical Report [57.84251629878726]
We propose RecGPT, a next-generation framework that places user intent at the center of the recommendation pipeline. RecGPT integrates large language models into key stages of user interest mining, item retrieval, and explanation generation. Online experiments demonstrate that RecGPT achieves consistent performance gains across stakeholders.
arXiv Detail & Related papers (2025-07-30T17:55:06Z) - De-centering the (Traditional) User: Multistakeholder Evaluation of Recommender Systems [10.731079374109596]
We focus our discussion on the challenges of multistakeholder evaluation of recommender systems. We discuss how to move from theoretical principles to practical implementation. We aim to provide guidance to researchers and practitioners about incorporating these complex and domain-dependent issues of evaluation.
arXiv Detail & Related papers (2025-01-09T11:44:49Z) - The 1st Workshop on Human-Centered Recommender Systems [27.23807230278776]
This workshop aims to provide a platform for researchers to explore the development of Human-Centered Recommender Systems.
HCRS refers to the creation of recommender systems that prioritize human needs, values, and capabilities at the core of their design and operation.
In this workshop, topics will include, but are not limited to, robustness, privacy, transparency, fairness, diversity, accountability, ethical considerations, and user-friendly design.
arXiv Detail & Related papers (2024-11-22T06:46:41Z) - Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of a turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z) - A Comprehensive Survey of Evaluation Techniques for Recommendation Systems [0.0]
This paper introduces a comprehensive suite of metrics, each tailored to capture a distinct aspect of system performance.
We identify the strengths and limitations of current evaluation practices and highlight the nuanced trade-offs that emerge when optimizing recommendation systems across different metrics.
arXiv Detail & Related papers (2023-12-26T11:57:01Z) - A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z) - Fairness and Transparency in Recommendation: The Users' Perspective [14.830700792215849]
We discuss user perspectives of fairness-aware recommender systems.
We propose three features that could improve user understanding of and trust in fairness-aware recommender systems.
arXiv Detail & Related papers (2021-03-16T00:42:09Z) - Designing Disaggregated Evaluations of AI Systems: Choices, Considerations, and Tradeoffs [42.401239658653914]
We argue that a deeper understanding of the choices, considerations, and tradeoffs involved in designing disaggregated evaluations will better enable researchers, practitioners, and the public to understand the ways in which AI systems may be underperforming for particular groups of people.
arXiv Detail & Related papers (2021-03-10T14:26:14Z) - MARS-Gym: A Gym framework to model, train, and evaluate Recommender Systems for Marketplaces [51.123916699062384]
MARS-Gym is an open-source framework to build and evaluate Reinforcement Learning agents for recommendations in marketplaces.
We provide the implementation of a diverse set of baseline agents, with a metrics-driven analysis of them in the Trivago marketplace dataset.
We expect to bridge the gap between academic research and production systems, as well as to facilitate the design of new algorithms and applications.
arXiv Detail & Related papers (2020-09-30T16:39:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.