BHEISR: Nudging from Bias to Balance -- Promoting Belief Harmony by
Eliminating Ideological Segregation in Knowledge-based Recommendations
- URL: http://arxiv.org/abs/2307.02797v1
- Date: Thu, 6 Jul 2023 06:12:37 GMT
- Title: BHEISR: Nudging from Bias to Balance -- Promoting Belief Harmony by
Eliminating Ideological Segregation in Knowledge-based Recommendations
- Authors: Mengyan Wang, Yuxuan Hu, Zihan Yuan, Chenting Jiang, Weihua Li,
Shiqing Wu and Quan Bai
- Abstract summary: The main objective is to strike a belief balance for users while minimizing the detrimental influence caused by filter bubbles.
The BHEISR model combines principles from nudge theory while upholding democratic and transparent values.
- Score: 5.795636579831129
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the realm of personalized recommendation systems, a growing concern
is the amplification of belief imbalance and user biases, a phenomenon
primarily attributed to the filter bubble. Addressing this critical issue, we
introduce an innovative intermediate agency (BHEISR) between users and existing
recommendation systems to attenuate the negative repercussions of the filter
bubble effect in extant recommendation systems. The main objective is to strike
a belief balance for users while minimizing the detrimental influence caused by
filter bubbles. The BHEISR model combines principles from nudge theory while
upholding democratic and transparent values. It harnesses user-specific
category information to stimulate curiosity, even in areas users might
initially deem uninteresting. By progressively stimulating interest in novel
categories, the model encourages users to broaden their belief horizons and
explore the information they typically overlook. Our model is time-sensitive
and operates on a user feedback loop. It builds on the underlying system's
existing recommendation algorithm and incorporates user feedback from the prior time
frame. This approach endeavors to transcend the constraints of the filter
bubble, enrich recommendation diversity, and strike a belief balance among
users while also catering to user preferences and system-specific business
requirements. To validate the effectiveness and reliability of the BHEISR
model, we conducted a series of comprehensive experiments with real-world
datasets. These experiments compared the performance of the BHEISR model
against several baseline models using nearly 200 filter-bubble-impacted users
as test subjects. Our experimental results conclusively illustrate the superior
performance of the BHEISR model in mitigating filter bubbles and balancing user
perspectives.
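The time-sensitive feedback loop described in the abstract can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the function names (`nudge_rerank`, `update_interest`), the slot-based blending, and the interest-update rule are all assumptions, sketched under the premise that the underlying recommender exposes a ranked list of (item, category) pairs and that per-category feedback signals are available from the prior time frame.

```python
def nudge_rerank(base_ranking, interest, k=10, nudge_slots=2):
    """Blend a base ranking with items from low-interest categories.

    base_ranking: list of (item_id, category) from the underlying recommender.
    interest: dict mapping category -> current interest score in [0, 1].
    Hypothetical sketch: reserve a few top slots for the user's
    least-explored categories, gradually stimulating curiosity in
    areas the user might initially overlook.
    """
    # Categories the user engages with least come first.
    cold = sorted(interest, key=interest.get)
    nudges = [(i, c) for (i, c) in base_ranking if c in cold[:nudge_slots]]
    familiar = [(i, c) for (i, c) in base_ranking if (i, c) not in nudges]
    return (nudges[:nudge_slots] + familiar)[:k]


def update_interest(interest, feedback, lr=0.1):
    """Move interest scores toward feedback observed in the prior time frame."""
    for cat, signal in feedback.items():
        interest[cat] = (1 - lr) * interest.get(cat, 0.0) + lr * signal
    return interest
```

Run per time frame, this couples the existing recommender's output with last round's feedback: positive engagement with a nudged category raises its interest score, so nudges shift toward genuinely novel categories over time.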
Related papers
- Source Echo Chamber: Exploring the Escalation of Source Bias in User, Data, and Recommender System Feedback Loop [65.23044868332693]
We investigate the impact of source bias on the realm of recommender systems.
We show the prevalence of source bias and reveal a potential digital echo chamber with source bias amplification.
We introduce a black-box debiasing method that maintains model impartiality towards both HGC and AIGC.
arXiv Detail & Related papers (2024-05-28T09:34:50Z) - GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language
Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z) - DPR: An Algorithm Mitigate Bias Accumulation in Recommendation feedback
loops [41.21024436158042]
We study the negative impact of feedback loops and unknown exposure mechanisms on recommendation quality and user experience.
We propose Dynamic Personalized Ranking (DPR), an unbiased algorithm that uses dynamic re-weighting to mitigate the cross-effects.
We show theoretically that our approach mitigates the negative effects of feedback loops and unknown exposure mechanisms.
arXiv Detail & Related papers (2023-11-10T04:36:00Z) - Learning from Negative User Feedback and Measuring Responsiveness for
Sequential Recommenders [13.762960304406016]
We introduce explicit and implicit negative user feedback into the training objective of sequential recommenders.
We demonstrate the effectiveness of this approach using live experiments on a large-scale industrial recommender system.
arXiv Detail & Related papers (2023-08-23T17:16:07Z) - Bilateral Self-unbiased Learning from Biased Implicit Feedback [10.690479112143658]
We propose a novel unbiased recommender learning model, namely BIlateral SElf-unbiased Recommender (BISER).
BISER consists of two key components: (i) self-inverse propensity weighting (SIPW) to gradually mitigate the bias of items without incurring high computational costs; and (ii) bilateral unbiased learning (BU) to bridge the gap between two complementary models in model predictions.
Extensive experiments show that BISER consistently outperforms state-of-the-art unbiased recommender models over several datasets.
arXiv Detail & Related papers (2022-07-26T05:17:42Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z) - PURS: Personalized Unexpected Recommender System for Improving User
Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z) - Adversarial Filters of Dataset Biases [96.090959788952]
Large neural models have demonstrated human-level performance on language and vision benchmarks.
Their performance degrades considerably on adversarial or out-of-distribution samples.
We propose AFLite, which adversarially filters such dataset biases.
arXiv Detail & Related papers (2020-02-10T21:59:21Z)
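Several of the related papers above (e.g. BISER, DPR, CPR) address exposure bias in implicit feedback, and a common building block in this line of work is inverse propensity weighting. The sketch below is a generic IPW-weighted pointwise log loss, not any one paper's method; the propensity estimates are assumed to be given, whereas, for example, BISER's contribution is precisely to estimate them self-inversely rather than take them as input.

```python
import math


def ipw_loss(clicks, scores, propensities, eps=1e-6):
    """Inverse-propensity-weighted pointwise log loss for implicit feedback.

    clicks[i] in {0, 1}: observed click on item i.
    scores[i] in (0, 1): model's predicted relevance probability.
    propensities[i] in (0, 1]: estimated probability item i was exposed.
    Weighting positives by 1/propensity corrects for the fact that
    relevant-but-unexposed items can never be observed as clicks.
    """
    total = 0.0
    for y, p, prop in zip(clicks, scores, propensities):
        w = y / max(prop, eps)  # upweight positives that were rarely exposed
        total += -(w * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(clicks)
```

Intuitively, a click on a rarely exposed item is strong evidence of relevance, so it contributes more to the loss than a click on an item the system shows everyone.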
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.