A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks
- URL: http://arxiv.org/abs/2005.10322v2
- Date: Tue, 10 Nov 2020 10:48:34 GMT
- Title: A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks
- Authors: Yashar Deldjoo and Tommaso Di Noia and Felice Antonio Merra
- Abstract summary: Latent-factor models (LFM) based on collaborative filtering (CF) are widely used in recommender systems (RS)
Many applications of machine learning (ML) are adversarial in nature.
- Score: 17.48549434869898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent-factor models (LFM) based on collaborative filtering (CF), such as
matrix factorization (MF) and deep CF methods, are widely used in modern
recommender systems (RS) due to their excellent performance and recommendation
accuracy. However, this success has been accompanied by a significant new
challenge: many applications of machine learning (ML) are adversarial in
nature. In recent years, it has been shown that these methods are vulnerable to
adversarial examples, i.e., subtle but non-random perturbations designed to
force recommendation models to produce erroneous outputs.
The goal of this survey is two-fold: (i) to present recent advances on
adversarial machine learning (AML) for the security of RS (i.e., attacking and
defense recommendation models), (ii) to show another successful application of
AML in generative adversarial networks (GANs) for generative applications,
thanks to their ability to learn (high-dimensional) data distributions. In
this survey, we provide an exhaustive literature review of 74 articles
published in major RS and ML journals and conferences. This review serves as a
reference for researchers in the RS community working on the security of RS or
on GAN-based generative models for improving recommendation quality.
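The abstract's notion of an adversarial example, a subtle but non-random perturbation that forces a recommendation model to err, can be made concrete for a latent-factor model. Below is a minimal, hypothetical NumPy sketch (toy dimensions and names; not code from the surveyed papers): the attacker nudges an item's embedding a small step in the gradient direction that lowers its predicted score for a target user, in the style of FGSM-type perturbations on MF parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrix-factorization model: score(u, i) = p_u . q_i
# (hypothetical sizes, purely for illustration)
n_users, n_items, k = 4, 6, 3
P = rng.normal(size=(n_users, k))   # user latent factors
Q = rng.normal(size=(n_items, k))   # item latent factors

def score(u, i, Q_mat):
    return P[u] @ Q_mat[i]

# FGSM-style perturbation of one item embedding: move Q[i] a small,
# norm-bounded step along the direction that *decreases* the predicted
# score, i.e. the direction an attacker would use to demote item i
# for user u. The gradient of the score w.r.t. Q[i] is simply p_u.
u, i, eps = 0, 2, 0.5
grad = P[u]
delta = -eps * grad / (np.linalg.norm(grad) + 1e-12)

Q_adv = Q.copy()
Q_adv[i] = Q[i] + delta

clean = score(u, i, Q)
attacked = score(u, i, Q_adv)
print(clean, attacked)   # the attacked score is strictly lower
```

The perturbation is "subtle" in the survey's sense: its norm is bounded by eps, yet it deterministically shifts the model's output, which is the property attack and defense methods in the survey reason about.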
Related papers
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [85.51252685938564]
Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML)
As with other ML models, large language models (LLMs) are prone to make incorrect predictions, "hallucinate" by fabricating claims, or simply generate low-quality output for a given input.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel techniques.
arXiv Detail & Related papers (2024-06-21T20:06:31Z) - RAGSys: Item-Cold-Start Recommender as RAG System [0.0]
Large Language Models (LLM) hold immense promise for real-world applications, but their generic knowledge often falls short of domain-specific needs.
In-Context Learning (ICL) offers an alternative, which can leverage Retrieval-Augmented Generation (RAG) to provide LLMs with relevant demonstrations for few-shot learning tasks.
We argue that ICL retrieval in this context resembles item-cold-start recommender systems, prioritizing discovery and maximizing information gain over strict relevance.
arXiv Detail & Related papers (2024-05-27T18:40:49Z) - A Review of Modern Recommender Systems Using Generative Models (Gen-RecSys) [57.30228361181045]
This survey connects key advancements in recommender systems using Generative Models (Gen-RecSys)
It covers: interaction-driven generative models; the use of large language models (LLM) and textual data for natural language recommendation; and the integration of multimodal models for generating and processing images/videos in RS.
Our work highlights necessary paradigms for evaluating the impact and harm of Gen-RecSys and identifies open challenges.
arXiv Detail & Related papers (2024-03-31T06:57:57Z) - Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems [24.60223863559958]
We propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it.
Experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings.
arXiv Detail & Related papers (2024-02-14T08:56:41Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly [79.07074710460012]
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention.
An increasing number of transfer-based methods have been developed to fool black-box DNN models.
We establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods.
arXiv Detail & Related papers (2023-11-02T15:35:58Z) - Enhancing ML-Based DoS Attack Detection Through Combinatorial Fusion Analysis [2.7973964073307265]
Mitigating Denial-of-Service (DoS) attacks is vital for online service security and availability.
We propose a fusion method that combines multiple ML models using combinatorial fusion analysis.
Our findings emphasize the potential of this approach to improve DoS attack detection and contribute to stronger defense mechanisms.
arXiv Detail & Related papers (2023-10-02T02:21:48Z) - Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) sees an increasing prevalence of being used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z) - Machine Learning Security against Data Poisoning: Are We There Yet? [23.809841593870757]
This article reviews data poisoning attacks that compromise the training data used to learn machine learning models.
We discuss how to mitigate these attacks using basic security principles, or by deploying ML-oriented defensive mechanisms.
arXiv Detail & Related papers (2022-04-12T17:52:09Z) - Supervised Advantage Actor-Critic for Recommender Systems [76.7066594130961]
We propose a negative sampling strategy for training the RL component and combine it with supervised sequential learning.
Based on sampled (negative) actions (items), we can calculate the "advantage" of a positive action over the average case.
We instantiate the resulting methods, SNQN and SA2C, with four state-of-the-art sequential recommendation models and conduct experiments on two real-world datasets.
arXiv Detail & Related papers (2021-11-05T12:51:15Z) - ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems [7.351477761427584]
We study the potential vulnerabilities of machine learning applied in cyber-physical systems.
We propose Constrained Adversarial Machine Learning (ConAML) which generates adversarial examples that satisfy the intrinsic constraints of the physical systems.
arXiv Detail & Related papers (2020-03-12T05:59:56Z)
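The ConAML entry above hinges on one idea: adversarial examples for cyber-physical systems must satisfy the intrinsic constraints of the physical process, not just a norm bound. A minimal, hypothetical sketch of that projection step (the constraint set here, a conserved sum plus a per-sensor magnitude budget, is illustrative and not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Constraint-aware perturbation in the spirit of ConAML: after taking a
# gradient step, project the perturbation back onto the feasible set of
# the physical system. Hypothetical constraints: the perturbed readings
# must keep the same total (a linear invariant, e.g. a conservation law)
# and each sensor's distortion must stay within a magnitude budget eps.
def project_constraints(delta, eps):
    delta = delta - delta.mean()      # linear invariant: sum(delta) == 0
    m = np.abs(delta).max()
    if m > eps:                       # L-inf budget; uniform scaling
        delta = delta * (eps / m)     # preserves the zero-sum property
    return delta

x = rng.uniform(1.0, 2.0, size=8)     # clean sensor measurements
grad = rng.normal(size=8)             # stand-in for a loss gradient
eps = 0.1

delta = project_constraints(0.5 * grad, eps)  # one projected attack step
x_adv = x + delta                             # feasible adversarial input
```

Because the projection enforces both constraints simultaneously, the resulting x_adv would pass a naive sanity check on the conserved total, which is exactly why such attacks are harder to detect than unconstrained perturbations.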
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.