Content-Agnostic Moderation for Stance-Neutral Recommendation
- URL: http://arxiv.org/abs/2405.18941v1
- Date: Wed, 29 May 2024 09:50:39 GMT
- Title: Content-Agnostic Moderation for Stance-Neutral Recommendation
- Authors: Nan Li, Bo Kang, Tijl De Bie
- Abstract summary: Content-agnostic moderation does not rely on the actual content being moderated, arguably making it less prone to forms of censorship.
We introduce two novel content-agnostic moderation methods that modify the recommendations from the content recommender to disperse user-item co-clusters without relying on content features.
Our results indicate that achieving stance neutrality without direct content information is not only feasible but can also help in developing more balanced and informative recommendation systems without substantially degrading user engagement.
- Score: 13.210645250173997
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Personalized recommendation systems often drive users towards more extreme content, exacerbating opinion polarization. While (content-aware) moderation has been proposed to mitigate these effects, such approaches risk curtailing the freedom of speech and of information. To address this concern, we propose and explore the feasibility of \emph{content-agnostic} moderation as an alternative approach for reducing polarization. Content-agnostic moderation does not rely on the actual content being moderated, arguably making it less prone to forms of censorship. We establish theoretically that content-agnostic moderation cannot be guaranteed to work in a fully generic setting. However, we show that it can often be effectively achieved in practice with plausible assumptions. We introduce two novel content-agnostic moderation methods that modify the recommendations from the content recommender to disperse user-item co-clusters without relying on content features. To evaluate the potential of content-agnostic moderation in controlled experiments, we built a simulation environment to analyze the closed-loop behavior of a system with a given set of users, recommendation system, and moderation approach. Through comprehensive experiments in this environment, we show that our proposed moderation methods significantly enhance stance neutrality and maintain high recommendation quality across various data scenarios. Our results indicate that achieving stance neutrality without direct content information is not only feasible but can also help in developing more balanced and informative recommendation systems without substantially degrading user engagement.
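To make the core idea concrete, below is a minimal sketch, assuming spectral co-clustering and a simple score penalty, of how a moderator might disperse user-item co-clusters from interaction data alone. It is an illustration of the general approach, not the paper's exact method.

```python
# Illustrative sketch only: one way a content-agnostic moderator could
# disperse user-item co-clusters by re-ranking a black-box recommender's
# scores. Spectral co-clustering and the penalty weight lam are
# assumptions for this example, not the paper's exact method.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
interactions = (rng.random((50, 40)) < 0.15).astype(float)  # users x items

# Co-clusters are found from interaction structure alone -- no content features.
cocluster = SpectralCoclustering(n_clusters=4, random_state=0)
cocluster.fit(interactions + 1e-9)  # keep the matrix strictly positive
user_of, item_of = cocluster.row_labels_, cocluster.column_labels_

def moderate(user, base_scores, lam=0.5):
    """Penalize items in the user's own co-cluster, nudging recommendations
    out of homogeneous (potentially stance-aligned) user-item clusters."""
    same_cluster = (item_of == user_of[user]).astype(float)
    return np.argsort(-(base_scores - lam * same_cluster))

base_scores = rng.random(40)  # scores from any black-box recommender
print(moderate(user=3, base_scores=base_scores)[:10])
```

Note that the moderator only sees the interaction matrix and the recommender's scores; it never inspects what an item says, which is the sense in which the approach is content-agnostic.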
Related papers
- You Don't Bring Me Flowers: Mitigating Unwanted Recommendations Through Conformal Risk Control [17.919072158085754]
This paper introduces an intuitive, model-agnostic, and distribution-free method that uses conformal risk control to provably bound unwanted content in personalized recommendations.
Our approach ensures an effective and controllable reduction of unwanted recommendations with minimal effort.
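As a rough illustration of how conformal risk control can bound unwanted content, the sketch below calibrates a keep-threshold so that the expected share of unwanted items surviving the filter stays below a target level; the 0..1 score scale and the particular risk function are assumptions, not the paper's exact construction.

```python
# Hedged sketch of conformal risk control for recommendation filtering:
# calibrate a keep-threshold t so that the expected share of a user's
# unwanted items surviving the filter stays below alpha.
import numpy as np

def calibrate_threshold(cal_lists, alpha=0.1):
    """cal_lists: per-user (wantedness_scores, unwanted_mask) pairs.
    Returns the smallest t whose conformally corrected risk is <= alpha."""
    n = len(cal_lists)
    for t in np.linspace(0.0, 1.0, 201):  # risk shrinks as t grows
        risks = [
            unwanted[scores >= t].sum() / max(1, unwanted.sum())
            for scores, unwanted in cal_lists
        ]
        # Conformal correction: a new user's risk is at most 1 in the worst case.
        if (np.sum(risks) + 1.0) / (n + 1) <= alpha:
            return t
    return 1.0  # degenerate fallback: filter everything

rng = np.random.default_rng(1)
cal = [(rng.random(30), rng.random(30) < 0.2) for _ in range(200)]
print(f"keep items scoring >= {calibrate_threshold(cal):.2f}")
```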
arXiv Detail & Related papers (2025-07-09T21:27:35Z)
- The Feedback Loop Between Recommendation Systems and Reactive Users [6.660458629649826]
We model the feedback loop between users' opinion dynamics and a recommendation system.
We show how reactive policies can help users effectively prevent or restrict undesirable opinion shifts.
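A toy version of such a closed loop, with an amplifying recommender and a reactive user policy that vetoes large opinion shifts, might look like the following; the paper's actual dynamics and policies are richer than this sketch.

```python
# A toy closed loop, not the paper's model: the recommender amplifies the
# user's current stance slightly, the opinion drifts toward consumed items,
# and a reactive policy vetoes items that would leave a trusted radius.
import numpy as np

rng = np.random.default_rng(2)
items = rng.uniform(-1, 1, 500)   # item stances on a 1-D opinion axis
opinion, anchor, eta, radius = 0.1, 0.1, 0.2, 0.4

for _ in range(200):
    target = 1.1 * opinion        # engagement-greedy amplification
    rec = items[np.argmin(np.abs(items - target))]
    proposed = (1 - eta) * opinion + eta * rec
    if abs(proposed - anchor) <= radius:  # reactive veto of large shifts
        opinion = proposed

print(f"final opinion: {opinion:+.3f}")  # capped near anchor + radius = 0.5
```

Without the veto, the same loop drifts toward the extreme end of the item axis.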
arXiv Detail & Related papers (2025-03-14T19:45:57Z)
- A Collaborative Content Moderation Framework for Toxicity Detection based on Conformalized Estimates of Annotation Disagreement [7.345136916791223]
We introduce a novel content moderation framework that emphasizes the importance of capturing annotation disagreement.
We leverage uncertainty estimation techniques, specifically Conformal Prediction, to account for both the ambiguity in comment annotations and the model's inherent uncertainty in predicting toxicity and disagreement.
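For intuition, a split-conformal wrapper around any disagreement regressor could look like this; the stand-in regressor, the noise model, and the 0.5 escalation rule are illustrative assumptions.

```python
# Split-conformal sketch for disagreement-aware moderation: wrap any
# disagreement regressor with calibrated intervals and escalate
# ambiguous comments to human moderators.
import numpy as np

def conformal_half_width(abs_residuals, alpha=0.1):
    """Calibrated interval half-width from |y - y_hat| on held-out comments."""
    n = len(abs_residuals)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(abs_residuals, level, method="higher")

rng = np.random.default_rng(3)
y_cal = rng.random(500)                                     # true disagreement rates
y_hat_cal = np.clip(y_cal + rng.normal(0, 0.1, 500), 0, 1)  # stand-in predictions
half = conformal_half_width(np.abs(y_cal - y_hat_cal))

y_hat_new = 0.35  # model's prediction for a fresh comment
lo, hi = max(0.0, y_hat_new - half), min(1.0, y_hat_new + half)
print(f"disagreement in [{lo:.2f}, {hi:.2f}]; route to human: {hi >= 0.5}")
```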
arXiv Detail & Related papers (2024-11-06T18:08:57Z)
- Improving Recommendation System Serendipity Through Lexicase Selection [53.57498970940369]
We propose a new serendipity metric to measure the presence of echo chambers and homophily in recommendation systems.
We then attempt to improve the diversity-preservation qualities of well known recommendation techniques by adopting a parent selection algorithm known as lexicase selection.
Our results show that lexicase selection, or a mixture of lexicase selection and ranking, outperforms its purely ranked counterparts in terms of personalization, coverage and our specifically designed serendipity benchmark.
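Lexicase selection itself is easy to state: candidates are filtered through randomly ordered test cases, keeping only those tied for the best error on each case. A compact sketch, treating each user as a test case and assuming per-user ranking errors are available:

```python
# Compact lexicase parent selection over candidate recommenders.
import numpy as np

def lexicase_select(errors, rng, eps=1e-9):
    """errors: (n_candidates, n_cases). Visit cases in random order and keep
    only candidates tied for the best error on each, until one survives."""
    pool = np.arange(errors.shape[0])
    for case in rng.permutation(errors.shape[1]):
        best = errors[pool, case].min()
        pool = pool[errors[pool, case] <= best + eps]
        if len(pool) == 1:
            break
    return rng.choice(pool)

rng = np.random.default_rng(4)
errs = rng.random((20, 50))  # 20 candidate recommenders x 50 user cases
print(f"selected parent: {lexicase_select(errs, rng)}")
```

Because each selection event stresses a different random ordering of users, specialists that serve niche users well can survive, which is what preserves diversity.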
arXiv Detail & Related papers (2023-05-18T15:37:38Z)
- The Amplification Paradox in Recommender Systems [12.723777984461693]
We show through simulations that the collaborative-filtering nature of recommender systems and the nicheness of extreme content can resolve the apparent paradox.
Our results call for a nuanced interpretation of "algorithmic amplification" and highlight the importance of modeling the utility of content to users when auditing recommender systems.
arXiv Detail & Related papers (2023-02-22T09:12:48Z)
- Recommending to Strategic Users [10.079698681921673]
We show that users strategically choose content to influence the types of content they get recommended in the future.
We propose three interventions that may improve recommendation quality when taking into account strategic consumption.
arXiv Detail & Related papers (2023-02-13T17:57:30Z)
- Debiasing Recommendation by Learning Identifiable Latent Confounders [49.16119112336605]
Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the aforementioned non-identification issue.
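The sketch below is not iDCF itself (which learns identifiable latent confounders from proxy variables); it only shows the simpler intuition the method builds on: if a confounder stand-in z were observed, plain backdoor adjustment would remove the exposure bias.

```python
# Backdoor-adjustment intuition behind deconfounded recommendation, with a
# synthetic observed confounder z that drives both exposure and feedback.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
z = rng.integers(0, 2, n)                              # synthetic confounder
exposure = rng.random(n) < np.where(z == 1, 0.8, 0.2)  # z drives exposure
feedback = rng.random(n) < np.where(z == 1, 0.7, 0.3)  # ... and feedback

naive = feedback[exposure].mean()  # confounded: overstates the effect
adjusted = sum(                    # E_z[ E[feedback | exposed, z] ]
    feedback[exposure & (z == v)].mean() * (z == v).mean()
    for v in (0, 1)
)
print(f"naive {naive:.3f} vs adjusted {adjusted:.3f} (ground truth 0.5)")
```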
arXiv Detail & Related papers (2023-02-10T05:10:26Z)
- CausPref: Causal Preference Learning for Out-of-Distribution Recommendation [36.22965012642248]
Current recommender systems remain vulnerable to distribution shifts of users and items in realistic scenarios.
We propose to incorporate the recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref.
Our approach surpasses the benchmark models significantly under various types of out-of-distribution settings.
arXiv Detail & Related papers (2022-02-08T16:42:03Z)
- FEBR: Expert-Based Recommendation Framework for beneficial and personalized content [77.86290991564829]
We propose FEBR (Expert-Based Recommendation Framework), an apprenticeship learning framework to assess the quality of the recommended content.
The framework exploits the demonstrated trajectories of an expert (assumed to be reliable) in a recommendation evaluation environment, to recover an unknown utility function.
We evaluate the performance of our solution through a user interest simulation environment (using RecSim).
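FEBR's evaluation environment and algorithm are richer than this, but the apprenticeship-learning core, matching the expert's discounted feature expectations, can be sketched as follows; the features, trajectories, and single projection step are illustrative stand-ins.

```python
# One projection step of apprenticeship learning (Abbeel & Ng, 2004):
# recover a reward direction that separates expert behavior from ours.
import numpy as np

def feature_expectations(trajs, gamma=0.95):
    """trajs: list of trajectories, each a list of feature vectors phi(s, a)."""
    mu = np.zeros(len(trajs[0][0]))
    for traj in trajs:
        for t, phi in enumerate(traj):
            mu += (gamma ** t) * np.asarray(phi)
    return mu / len(trajs)

rng = np.random.default_rng(6)
expert = [[rng.random(4) + [0.5, 0, 0, 0] for _ in range(10)] for _ in range(20)]
learner = [[rng.random(4) for _ in range(10)] for _ in range(20)]

w = feature_expectations(expert) - feature_expectations(learner)
w /= np.linalg.norm(w)  # recovered reward direction; a full loop iterates
print(np.round(w, 2))   # weight concentrates on the expert-preferred feature
```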
arXiv Detail & Related papers (2021-07-17T18:21:31Z)
- Towards Fair Personalization by Avoiding Feedback Loops [3.180077164673223]
Self-reinforcing feedback loops are both a cause and an effect of over- and/or under-presentation of some content in interactive recommender systems.
We consider two models that either explicitly incorporate or ignore the systematic and limited exposure to alternatives.
arXiv Detail & Related papers (2020-12-20T19:28:57Z)
- Reliable Off-policy Evaluation for Reinforcement Learning [53.486680020852724]
In a sequential decision-making problem, off-policy evaluation estimates the expected cumulative reward of a target policy.
We propose a novel framework that provides robust and optimistic cumulative reward estimates using one or multiple logged datasets.
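For reference, the classical per-trajectory importance-sampling estimator that robust OPE methods build on looks like this; it is a baseline sketch, not the paper's estimator.

```python
# Baseline per-trajectory importance-sampling OPE: reweight logged returns
# by the target-vs-logging action likelihood ratio.
import numpy as np

def is_estimate(trajectories, pi_target, pi_logging, gamma=0.99):
    """trajectories: lists of (state, action, reward) tuples."""
    values = []
    for traj in trajectories:
        ratio, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            ratio *= pi_target(a, s) / pi_logging(a, s)
            ret += (gamma ** t) * r
        values.append(ratio * ret)
    return float(np.mean(values))

# Toy bandit check: target prefers action 1, logging was uniform.
logged = [[(0, a, float(a))] for a in (0, 1, 0, 1, 1, 0)]
pi_t = lambda a, s: 0.8 if a == 1 else 0.2
pi_b = lambda a, s: 0.5
print(is_estimate(logged, pi_t, pi_b))  # ~0.8, the target policy's value
```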
arXiv Detail & Related papers (2020-11-08T23:16:19Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
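The paper works over knowledge-graph paths; the re-ranking step alone can be illustrated with a simple quota-constrained greedy pass, where the item grouping and min_quota are assumptions for the example.

```python
# Simplified fairness-constrained re-ranking: fill top_n by score while
# reserving enough remaining slots for min_quota protected-group items.
def fair_rerank(scores, groups, top_n=10, min_quota=3, protected=0):
    """Greedy pass down the score ranking with a protected-group quota."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    chosen, n_prot = [], 0
    for i in order:
        if len(chosen) >= top_n:
            break
        need = max(0, min_quota - n_prot)
        if groups[i] != protected and top_n - len(chosen) <= need:
            continue  # this slot must be saved for the quota
        chosen.append(i)
        n_prot += groups[i] == protected
    return chosen

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
groups = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 0 = disadvantaged group
print(fair_rerank(scores, groups, top_n=5, min_quota=2))  # [0, 1, 2, 8, 9]
```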
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)