Shilling Black-box Review-based Recommender Systems through Fake Review
Generation
- URL: http://arxiv.org/abs/2306.16526v1
- Date: Tue, 27 Jun 2023 12:32:36 GMT
- Title: Shilling Black-box Review-based Recommender Systems through Fake Review
Generation
- Authors: Hung-Yun Chiang, Yi-Syuan Chen, Yun-Zhu Song, Hong-Han Shuai and Jason
S. Chang
- Abstract summary: Review-Based Recommender Systems (RBRS) have attracted increasing research interest due to their ability to alleviate well-known cold-start problems.
We argue that such a reliance on reviews may instead expose systems to the risk of being shilled.
We propose the first generation-based model for shilling attacks against RBRSs.
- Score: 20.162253355141893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Review-Based Recommender Systems (RBRS) have attracted increasing research
interest due to their ability to alleviate well-known cold-start problems. RBRSs
utilize reviews to construct user and item representations. However, in
this paper, we argue that such a reliance on reviews may instead expose systems
to the risk of being shilled. To explore this possibility, we propose the
first generation-based model for shilling attacks against RBRSs.
Specifically, we learn a fake review generator through reinforcement learning,
which maliciously promotes items by forcing prediction shifts after adding
generated reviews to the system. By introducing auxiliary rewards to
increase text fluency and diversity with the aid of pre-trained language models
and aspect predictors, the generated reviews can be effective for shilling with
high fidelity. Experimental results demonstrate that the proposed framework can
successfully attack three different kinds of RBRSs on the Amazon corpus across
three domains and on the Yelp corpus. Furthermore, human studies show that the
generated reviews are fluent and informative. Finally, adversarial training
with the proposed Attack Review Generators (ARGs) makes RBRSs much more
robust to malicious reviews.
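The abstract describes an RL objective that combines a shilling reward (a forced prediction shift) with auxiliary fluency and diversity rewards. As a loose illustration only, the sketch below shows how such a composite reward could be assembled; the function names, weights, and the distinct-token diversity proxy are assumptions for illustration, not the paper's actual formulation.

```python
import math

def prediction_shift(rating_before, rating_after):
    """Shilling reward: rise in the system's predicted rating for the
    target item after the generated review is injected."""
    return rating_after - rating_before

def fluency_reward(avg_log_prob):
    """Map an average per-token log-probability (as a pretrained LM
    would supply) into (0, 1]; higher means more fluent text."""
    return math.exp(avg_log_prob)

def diversity_reward(tokens):
    """Distinct-token proxy: fraction of unique tokens in the review."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def total_reward(shift, avg_log_prob, tokens,
                 w_shift=1.0, w_fluency=0.5, w_diversity=0.5):
    """Weighted sum of the shilling reward and the auxiliary rewards,
    which a policy-gradient generator could maximize."""
    return (w_shift * shift
            + w_fluency * fluency_reward(avg_log_prob)
            + w_diversity * diversity_reward(tokens))
```

In a real attack pipeline, `shift` would come from querying the victim RBRS before and after injecting the generated review, and `avg_log_prob` from scoring the review with a pretrained language model.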
Related papers
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- Verifying the Robustness of Automatic Credibility Assessment [79.08422736721764]
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Combat AI With AI: Counteract Machine-Generated Fake Restaurant Reviews on Social Media [77.34726150561087]
We propose to leverage the high-quality elite Yelp reviews to generate fake reviews from the OpenAI GPT review creator.
We apply the model to predict non-elite reviews and identify the patterns across several dimensions.
We show that social media platforms are continuously challenged by machine-generated fake reviews.
arXiv Detail & Related papers (2023-02-10T19:40:10Z)
- Mitigating Human and Computer Opinion Fraud via Contrastive Learning [0.0]
We introduce a novel approach to detecting fake text reviews in collaborative filtering recommender systems.
Existing algorithms concentrate on detecting fake reviews generated by language models and ignore texts written by dishonest users.
We propose a contrastive learning-based architecture that utilizes user demographic characteristics, along with the text reviews, as additional evidence against fakes.
arXiv Detail & Related papers (2023-01-08T12:02:28Z)
- On Faithfulness and Coherence of Language Explanations for Recommendation Systems [8.143715142450876]
This work probes state-of-the-art models and their review generation component.
We show that the generated explanations are brittle and need further evaluation before being taken as literal rationales for the estimated ratings.
arXiv Detail & Related papers (2022-09-12T17:00:31Z)
- Factual and Informative Review Generation for Explainable Recommendation [41.403493319602816]
Previous models' generated content often contains factual hallucinations.
Inspired by recent success in using retrieved content in addition to parametric knowledge for generation, we propose to augment the generator with a personalized retriever.
Experiments on the Yelp, TripAdvisor, and Amazon Movie Reviews datasets show our model can generate explanations that more reliably entail existing reviews, are more diverse, and are rated more informative by human evaluators.
arXiv Detail & Related papers (2022-09-12T16:46:47Z)
- Learning Opinion Summarizers by Selecting Informative Reviews [81.47506952645564]
We collect a large dataset of summaries paired with user reviews for over 31,000 products, enabling supervised training.
The content of many reviews is not reflected in the human-written summaries, and, thus, the summarizer trained on random review subsets hallucinates.
We formulate the task as jointly learning to select informative subsets of reviews and summarizing the opinions expressed in these subsets.
arXiv Detail & Related papers (2021-09-09T15:01:43Z)
- Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack [8.591490818966882]
Primitive attacks are highly feasible but less effective due to their simplistic
handcrafted rules. Upgraded attacks are more powerful but costly and difficult
to deploy because they require more knowledge of the recommender system.
In this paper, we explore a novel shilling attack called Graph cOnvolution-based generative shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness.
arXiv Detail & Related papers (2021-07-22T05:02:59Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Automating App Review Response Generation [67.58267006314415]
We propose a novel approach RRGen that automatically generates review responses by learning knowledge relations between reviews and their responses.
Experiments on 58 apps and 309,246 review-response pairs highlight that RRGen outperforms the baselines by at least 67.4% in terms of BLEU-4.
arXiv Detail & Related papers (2020-02-10T05:23:38Z)
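The "Mitigating Human and Computer Opinion Fraud" entry above describes combining user demographic characteristics with review text in a contrastive learning architecture. A minimal, hypothetical sketch of that idea follows; the feature concatenation, cosine similarity, and hinge-style margin objective are illustrative assumptions, not that paper's actual model.

```python
import math

def concat_features(text_embedding, demographic_features):
    """Join a review's text embedding with the reviewer's demographic
    features into a single representation."""
    return list(text_embedding) + list(demographic_features)

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def contrastive_loss(anchor, positive, negative, margin=0.5):
    """Hinge-style contrastive objective: the genuine (anchor, positive)
    pair should score higher than the fake (anchor, negative) pair by at
    least `margin`."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))
```

When the anchor already matches the genuine review far better than the fake one, the loss is zero; otherwise the margin violation is penalized, pushing fake-review representations away from the user's combined profile.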
This list is automatically generated from the titles and abstracts of the papers in this site.