A two-level solution to fight against dishonest opinions in
recommendation-based trust systems
- URL: http://arxiv.org/abs/2006.04803v1
- Date: Tue, 9 Jun 2020 00:34:11 GMT
- Title: A two-level solution to fight against dishonest opinions in
recommendation-based trust systems
- Authors: Omar Abdel Wahab, Jamal Bentahar, Robin Cohen, Hadi Otrok, Azzam
Mourad
- Abstract summary: We consider a scenario in which an agent requests recommendations from multiple parties to build trust toward another agent.
At the collection level, we propose to allow agents to self-assess the accuracy of their recommendations.
At the processing level, we propose a recommendations aggregation technique that is resilient to collusion attacks.
- Score: 13.356755375091456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a mechanism to deal with dishonest opinions in
recommendation-based trust models, at both the collection and processing
levels. We consider a scenario in which an agent requests recommendations from
multiple parties to build trust toward another agent. At the collection level,
we propose to allow agents to self-assess the accuracy of their recommendations
and autonomously decide on whether they would participate in the recommendation
process or not. At the processing level, we propose a recommendations
aggregation technique that is resilient to collusion attacks, followed by a
credibility update mechanism for the participating agents. The originality of
our work stems from its consideration of dishonest opinions at both the
collection and processing levels, which allows for better and more persistent
protection against dishonest recommenders. Experiments conducted on the
Epinions dataset show that our solution yields better performance in protecting
the recommendation process against Sybil attacks, in comparison with a
competing model that derives the optimal network of advisors based on the
agents' trust values.
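To make the two-level idea concrete, below is a minimal Python sketch. It is not the paper's implementation: the Recommender class, the self-assessment score, the participation threshold, the credibility-weighted median, and the credibility-update rule are all illustrative assumptions chosen for readability, since the abstract does not give the exact formulas.

```python
# Minimal sketch of a two-level defense against dishonest recommendations, assuming:
#   - each agent tracks how far its past recommendations about the target were
#     from later-observed outcomes (all values in [0, 1]),
#   - aggregation uses a credibility-weighted median (one common collusion-resilient
#     rule; not necessarily the paper's aggregation formula),
#   - credibility rises for agents close to the aggregate and falls for outliers.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Recommender:
    name: str
    credibility: float = 0.5                                  # requester's trust in this recommender
    past_errors: List[float] = field(default_factory=list)    # |recommendation - observed outcome|

    def self_assessed_accuracy(self) -> float:
        """Collection level: the agent estimates its own accuracy about the target."""
        if not self.past_errors:
            return 0.0                                         # no evidence -> abstain
        return 1.0 - sum(self.past_errors) / len(self.past_errors)

    def will_participate(self, threshold: float = 0.6) -> bool:
        """The agent autonomously opts in only if it judges itself accurate enough."""
        return self.self_assessed_accuracy() >= threshold


def aggregate(recs: Dict[str, float], agents: Dict[str, Recommender]) -> float:
    """Processing level: credibility-weighted median of the submitted recommendations."""
    if not recs:
        raise ValueError("no recommendations to aggregate")
    items = sorted(recs.items(), key=lambda kv: kv[1])
    half = sum(agents[a].credibility for a, _ in items) / 2.0
    acc = 0.0
    for agent_name, value in items:
        acc += agents[agent_name].credibility
        if acc >= half:
            return value
    return items[-1][1]


def update_credibility(recs: Dict[str, float], agents: Dict[str, Recommender],
                       trust_estimate: float, rate: float = 0.1) -> None:
    """Credibility update: reward opinions close to the aggregate, penalise outliers."""
    for agent_name, value in recs.items():
        error = abs(value - trust_estimate)                    # both assumed to lie in [0, 1]
        delta = rate * (1.0 - 2.0 * error)                     # positive if error < 0.5
        agents[agent_name].credibility = min(1.0, max(0.0,
            agents[agent_name].credibility + delta))


# Usage: only self-confident agents submit; the rest of the pipeline runs unchanged.
agents = {a.name: a for a in (Recommender("alice", past_errors=[0.1]),
                              Recommender("bob", past_errors=[0.7]),
                              Recommender("carol", past_errors=[0.2]))}
recs = {n: r for n, r in {"alice": 0.8, "bob": 0.1, "carol": 0.75}.items()
        if agents[n].will_participate()}                      # collection level
trust = aggregate(recs, agents)                                # processing level
update_credibility(recs, agents, trust)
```

The weighted-median rule is used here only to illustrate collusion resilience: unlike a weighted mean, a small coalition cannot pull the aggregate arbitrarily far without first accumulating a majority of the total credibility.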
Related papers
- FLOW: A Feedback LOop FrameWork for Simultaneously Enhancing Recommendation and User Agents [28.25107058257086]
We propose a novel framework named FLOW, which achieves collaboration between the recommendation agent and the user agent by introducing a feedback loop.
Specifically, the recommendation agent refines its understanding of the user's preferences by analyzing the user agent's feedback on previously suggested items.
This iterative refinement process enhances the reasoning capabilities of both the recommendation agent and the user agent, enabling more precise recommendations.
arXiv Detail & Related papers (2024-10-26T00:51:39Z)
- A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns [40.793466500324904]
We view recommender system auditing through a causal lens and provide a general recipe for defining auditing metrics.
Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them.
We propose two classes of such metrics: future- and past-reachability and stability, which measure the ability of a user to influence their own and other users' recommendations.
arXiv Detail & Related papers (2024-09-20T04:37:36Z)
- Revisiting Reciprocal Recommender Systems: Metrics, Formulation, and Method [60.364834418531366]
We propose five new evaluation metrics that comprehensively and accurately assess the performance of RRS.
We formulate the RRS from a causal perspective, treating recommendations as bilateral interventions.
We introduce a reranking strategy to maximize matching outcomes, as measured by the proposed metrics.
arXiv Detail & Related papers (2024-08-19T07:21:02Z)
- Pure Exploration under Mediators' Feedback [63.56002444692792]
Multi-armed bandits are a sequential decision-making framework in which, at each interaction step, the learner selects an arm and observes a reward.
We consider the scenario in which the learner has access to a set of mediators, each of which selects the arms on the agent's behalf according to a possibly unknown policy.
We propose a sequential decision-making strategy for discovering the best arm under the assumption that the mediators' policies are known to the learner.
arXiv Detail & Related papers (2023-08-29T18:18:21Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- User Tampering in Reinforcement Learning Recommender Systems [2.28438857884398]
We highlight a unique safety concern prevalent in reinforcement learning (RL)-based recommendation algorithms -- 'user tampering'.
User tampering is a situation where an RL-based recommender system may manipulate a media user's opinions through its suggestions as part of a policy to maximize long-term user engagement.
arXiv Detail & Related papers (2021-09-09T07:53:23Z)
- Recommendation Fairness: From Static to Dynamic [12.080824433982993]
We discuss how fairness could be baked into reinforcement learning techniques for recommendation.
We argue that in order to make further progress in recommendation fairness, we may want to consider multi-agent (game-theoretic) optimization and multi-objective (Pareto) optimization.
arXiv Detail & Related papers (2021-09-05T21:38:05Z)
- Peer Selection with Noisy Assessments [43.307040330622186]
We extend PeerNomination, the most accurate peer reviewing algorithm to date, into WeightedPeerNomination.
We show analytically that a weighting scheme can improve the overall accuracy of the selection significantly.
arXiv Detail & Related papers (2021-07-21T14:47:11Z)
- Self-Supervised Reinforcement Learning for Recommender Systems [77.38665506495553]
We propose self-supervised reinforcement learning for sequential recommendation tasks.
Our approach augments standard recommendation models with two output layers: one for self-supervised learning and the other for RL.
Based on this approach, we propose two frameworks, namely Self-Supervised Q-learning (SQN) and Self-Supervised Actor-Critic (SAC).
arXiv Detail & Related papers (2020-06-10T11:18:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.