Transparency, Privacy, and Fairness in Recommender Systems
- URL: http://arxiv.org/abs/2406.11323v2
- Date: Fri, 28 Jun 2024 10:43:01 GMT
- Title: Transparency, Privacy, and Fairness in Recommender Systems
- Authors: Dominik Kowald
- Abstract summary: This habilitation elaborates on aspects related to (i) transparency and cognitive models, (ii) privacy and limited preference information, and (iii) fairness and popularity bias in recommender systems.
- Score: 0.19036571490366497
- Abstract: Recommender systems have become a pervasive part of our daily online experience, and are one of the most widely used applications of artificial intelligence and machine learning. Therefore, regulations and requirements for trustworthy artificial intelligence, for example, the European AI Act, which includes notions such as transparency, privacy, and fairness, are also highly relevant for the design of recommender systems in practice. This habilitation elaborates on aspects related to these three notions in the light of recommender systems, namely: (i) transparency and cognitive models, (ii) privacy and limited preference information, and (iii) fairness and popularity bias in recommender systems. Specifically, with respect to aspect (i), we highlight the usefulness of incorporating psychological theories for a transparent design process of recommender systems. We term this type of system psychology-informed recommender systems. In aspect (ii), we study and address the trade-off between accuracy and privacy in differentially-private recommendations. We design a novel recommendation approach for collaborative filtering based on an efficient neighborhood reuse concept, which reduces the number of users that need to be protected with differential privacy. Furthermore, we address the related issue of limited availability of user preference information, e.g., click data, in the settings of session-based and cold-start recommendations. With respect to aspect (iii), we analyze popularity bias in recommender systems. We find that the recommendation frequency of an item is positively correlated with this item's popularity. This also leads to the unfair treatment of users with little interest in popular content. Finally, we study long-term fairness dynamics in algorithmic decision support in the labor market using agent-based modeling techniques.
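The popularity-bias finding mentioned in the abstract can be illustrated with a small sketch: for each item, compare its popularity (interaction count in the training data) against how often it appears in users' top-N recommendation lists, and test whether the two are positively correlated. The snippet below is a minimal illustration of that idea, not the habilitation's actual evaluation code; the input structures `train_interactions` and `top_n_lists`, and the choice of Spearman correlation, are assumptions made for the example.

```python
# Minimal sketch (not the author's exact pipeline): checking whether an item's
# recommendation frequency correlates with its popularity in the training data.
from collections import Counter
from scipy.stats import spearmanr

def popularity_bias_correlation(train_interactions, top_n_lists):
    """train_interactions: iterable of (user_id, item_id) pairs from the training set.
    top_n_lists: dict mapping user_id -> list of recommended item_ids (top-N)."""
    # Item popularity = number of interactions in the training data.
    popularity = Counter(item for _, item in train_interactions)
    # Recommendation frequency = how often each item appears across all top-N lists.
    rec_frequency = Counter(item for recs in top_n_lists.values() for item in recs)
    items = list(popularity)
    pop = [popularity[i] for i in items]
    freq = [rec_frequency.get(i, 0) for i in items]
    # A positive coefficient indicates that popular items are recommended more often.
    rho, p_value = spearmanr(pop, freq)
    return rho, p_value
```

A positive and significant rho under this kind of check would reflect the abstract's observation that recommendation frequency tracks item popularity, which in turn disadvantages users with little interest in popular content.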
Related papers
- A Deep Dive into Fairness, Bias, Threats, and Privacy in Recommender Systems: Insights and Future Research [45.86892639035389]
This study explores fairness, bias, threats, and privacy in recommender systems.
It examines how algorithmic decisions can unintentionally reinforce biases or marginalize specific user and item groups.
The study suggests future research directions to improve recommender systems' robustness, fairness, and privacy.
arXiv Detail & Related papers (2024-09-19T11:00:35Z) - The Fault in Our Recommendations: On the Perils of Optimizing the Measurable [2.6217304977339473]
We show that optimizing for engagement can lead to significant utility losses.
We propose a utility-aware policy that initially recommends a mix of popular and niche content.
arXiv Detail & Related papers (2024-05-07T02:12:17Z) - User Consented Federated Recommender System Against Personalized Attribute Inference Attack [55.24441467292359]
We propose a user-consented federated recommendation system (UC-FedRec) to flexibly satisfy the different privacy needs of users.
UC-FedRec allows users to self-define their privacy preferences to meet various demands and makes recommendations with user consent.
arXiv Detail & Related papers (2023-12-23T09:44:57Z) - Neural Contextual Bandits for Personalized Recommendation [49.85090929163639]
This tutorial investigates the contextual bandits as a powerful framework for personalized recommendations.
We focus on the exploration perspective of contextual bandits to alleviate the "Matthew Effect" in recommender systems.
In addition to the conventional linear contextual bandits, we also cover neural contextual bandits.
arXiv Detail & Related papers (2023-12-21T17:03:26Z) - A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z) - A Comprehensive Survey on Trustworthy Recommender Systems [32.523177842969915]
We provide a comprehensive overview of Trustworthy Recommender Systems (TRec) with a specific focus on six of the most important aspects.
For each aspect, we summarize the recent related technologies and discuss potential research directions to help achieve trustworthy recommender systems.
arXiv Detail & Related papers (2022-09-21T04:34:17Z) - Fairness and Transparency in Recommendation: The Users' Perspective [14.830700792215849]
We discuss user perspectives of fairness-aware recommender systems.
We propose three features that could improve user understanding of and trust in fairness-aware recommender systems.
arXiv Detail & Related papers (2021-03-16T00:42:09Z) - Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - Survey for Trust-aware Recommender Systems: A Deep Learning Perspective [48.2733163413522]
It becomes critical to build trustworthy recommender systems.
This survey provides a systematic summary of three categories of trust-aware recommender systems.
arXiv Detail & Related papers (2020-04-08T02:11:55Z) - Exploring User Opinions of Fairness in Recommender Systems [13.749884072907163]
We ask users what their ideas of fair treatment in recommendation might be.
We analyze what might cause discrepancies or changes in users' opinions towards fairness.
arXiv Detail & Related papers (2020-03-13T19:44:26Z)