To Recommend or Not? A Model-Based Comparison of Item-Matching Processes
- URL: http://arxiv.org/abs/2110.11468v1
- Date: Thu, 21 Oct 2021 20:37:56 GMT
- Title: To Recommend or Not? A Model-Based Comparison of Item-Matching Processes
- Authors: Serina Chang and Johan Ugander
- Abstract summary: Recommender systems are central to modern online platforms, but a popular concern is that they may be pulling society in dangerous directions.
We take a model-based approach to this challenge, introducing a dichotomy of process models that we can compare.
Our key finding is that the recommender and organic models result in dramatically different outcomes at both the individual and societal level.
- Score: 7.636113901205644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems are central to modern online platforms, but a popular
concern is that they may be pulling society in dangerous directions (e.g.,
towards filter bubbles). However, a challenge with measuring the effects of
recommender systems is how to compare user outcomes under these systems to
outcomes under a credible counterfactual world without such systems. We take a
model-based approach to this challenge, introducing a dichotomy of process
models that we can compare: (1) a "recommender" model describing a generic
item-matching process under a personalized recommender system and (2) an
"organic" model describing a baseline counterfactual where users search for
items without the mediation of any system. Our key finding is that the
recommender and organic models result in dramatically different outcomes at
both the individual and societal level, as supported by theorems and simulation
experiments with real data. The two process models also induce different
trade-offs during inference, where standard performance-improving techniques
such as regularization/shrinkage have divergent effects. Shrinkage improves the
mean squared error of matches in both settings, as expected, but at the cost of
less diverse (less radical) items chosen in the recommender model but more
diverse (more radical) items chosen in the organic model. These findings
provide a formal language for how recommender systems may be fundamentally
altering how we search for and interact with content, in a world increasingly
mediated by such systems.
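The shrinkage trade-off described in the abstract can be illustrated with a minimal simulation. This is not the paper's actual model; the distributions, the shrinkage weight, and the argmax selection rule below are all hypothetical assumptions chosen only to show the mechanism: shrinking noisy value estimates toward their mean lowers estimation error, but it also compresses the spread of scores, so an argmax-style (recommender-like) selection over shrunk scores favors less extreme items.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 1000
true_values = rng.normal(0.0, 1.0, n_items)        # latent item qualities (assumed Gaussian)
noisy = true_values + rng.normal(0.0, 1.0, n_items)  # noisy observed estimates

lam = 0.5  # shrinkage weight (hypothetical choice)
shrunk = lam * noisy + (1 - lam) * noisy.mean()    # shrink estimates toward the mean

# Shrinkage reduces mean squared estimation error...
mse_raw = np.mean((noisy - true_values) ** 2)
mse_shrunk = np.mean((shrunk - true_values) ** 2)
print(f"MSE raw:    {mse_raw:.3f}")
print(f"MSE shrunk: {mse_shrunk:.3f}")

# ...but it also compresses the score distribution, so an argmax selection
# over shrunk scores systematically favors less extreme (less "radical") items.
print(f"score spread raw:    {noisy.std():.3f}")
print(f"score spread shrunk: {shrunk.std():.3f}")
```

Under these assumptions the shrunk estimates have roughly half the MSE of the raw ones while their standard deviation is also halved, which is the recommender-side direction of the divergence the paper formalizes; the organic model's opposite effect is not captured by this toy sketch.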
Related papers
- Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z)
- Break Out of a Pigeonhole: A Unified Framework for Examining Miscalibration, Bias, and Stereotype in Recommender Systems [6.209548319476692]
This study aims to characterize the systematic errors of a recommendation system and how they manifest in various accountability issues.
We propose a unified framework that distinguishes the sources of prediction errors into a set of key measures that quantify the various types of system-induced effects.
Our research is the first systematic examination of not only system-induced effects and miscalibration but also the stereotyping issue in recommender systems.
arXiv Detail & Related papers (2023-12-29T02:32:12Z)
- Managing multi-facet bias in collaborative filtering recommender systems [0.0]
Biased recommendations across groups of items can endanger the interests of item providers along with causing user dissatisfaction with the system.
This study aims to manage a new type of intersectional bias regarding the geographical origin and popularity of items in the output of state-of-the-art collaborative filtering recommender algorithms.
Extensive experiments on two real-world datasets of movies and books, enriched with the items' continents of production, show that the proposed algorithm strikes a reasonable balance between accuracy and both types of the mentioned biases.
arXiv Detail & Related papers (2023-02-21T10:06:01Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- A Recommendation Approach based on Similarity-Popularity Models of Complex Networks [1.385805101975528]
This work proposes a novel recommendation method based on complex networks generated by a similarity-popularity model to predict unseen ratings.
We first construct a model of a network having users and items as nodes from observed ratings and then use it to predict unseen ratings.
The proposed approach is implemented and experimentally compared against baseline and state-of-the-art recommendation methods on 21 datasets from various domains.
arXiv Detail & Related papers (2022-09-29T11:00:06Z)
- What are the best systems? New perspectives on NLP Benchmarking [10.27421161397197]
We propose a new procedure to rank systems based on their performance across different tasks.
Motivated by the social choice theory, the final system ordering is obtained through aggregating the rankings induced by each task.
We show that our method yields different conclusions on state-of-the-art systems than the mean-aggregation procedure.
arXiv Detail & Related papers (2022-02-08T11:44:20Z)
- Utilizing Textual Reviews in Latent Factor Models for Recommender Systems [1.7361353199214251]
We propose a recommender algorithm that combines a rating modelling technique with a topic modelling method based on textual reviews.
We evaluate the performance of the algorithm using Amazon.com datasets with different sizes, corresponding to 23 product categories.
arXiv Detail & Related papers (2021-11-16T15:07:51Z)
- Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- PONE: A Novel Automatic Evaluation Metric for Open-Domain Generative Dialogue Systems [48.99561874529323]
There are three kinds of automatic methods to evaluate the open-domain generative dialogue systems.
Due to the lack of systematic comparison, it is not clear which kind of metrics are more effective.
We propose a novel and feasible learning-based metric that can significantly improve the correlation with human judgments.
arXiv Detail & Related papers (2020-04-06T04:36:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.