A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks
- URL: http://arxiv.org/abs/2005.10322v2
- Date: Tue, 10 Nov 2020 10:48:34 GMT
- Title: A survey on Adversarial Recommender Systems: from Attack/Defense strategies to Generative Adversarial Networks
- Authors: Yashar Deldjoo and Tommaso Di Noia and Felice Antonio Merra
- Abstract summary: Latent-factor models (LFM) based on collaborative filtering (CF) are widely used in recommender systems (RS). Many applications of machine learning (ML) are adversarial in nature.
- Score: 17.48549434869898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. However, this success has been accompanied by a major new challenge: many applications of machine learning (ML) are adversarial in nature. In recent years, it has been shown that these methods are vulnerable to adversarial examples, i.e., subtle but non-random perturbations designed to force recommendation models to produce erroneous outputs (a toy example of such a perturbation follows the abstract).
The goal of this survey is twofold: (i) to present recent advances in adversarial machine learning (AML) for the security of RS (i.e., attacking and defending recommendation models), and (ii) to show another successful application of AML in generative adversarial networks (GANs) for generative applications, thanks to their ability to learn (high-dimensional) data distributions. In this survey, we provide an exhaustive literature review of 74 articles published in major RS and ML journals and conferences. This review serves as a reference for the RS community working on the security of RS or on generative models that use GANs to improve recommendation quality.
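To make the abstract's threat model concrete, the following is a minimal, illustrative sketch of an FGSM-style adversarial perturbation applied to the item embeddings of a matrix-factorization model trained with the BPR loss. This is an assumption-laden toy, not code from the survey: the model, sizes, and variable names are all invented for the example.

```python
import numpy as np

# Toy setup: random user/item embeddings for a matrix-factorization model.
rng = np.random.default_rng(0)
n_users, n_items, k = 100, 500, 16
U = rng.normal(scale=0.1, size=(n_users, k))  # user embeddings (illustrative)
V = rng.normal(scale=0.1, size=(n_items, k))  # item embeddings (illustrative)

def bpr_grad_pos_item(u, i, j):
    """Gradient of the BPR loss -log(sigmoid(x_uij)) w.r.t. V[i],
    where x_uij = U[u] @ (V[i] - V[j])."""
    x_uij = U[u] @ (V[i] - V[j])
    return -(1.0 / (1.0 + np.exp(x_uij))) * U[u]  # = -sigmoid(-x_uij) * U[u]

# FGSM-style perturbation: one signed-gradient step of budget epsilon in the
# direction that *increases* the ranking loss, i.e., a subtle but non-random
# change that degrades the model's output.
u, pos, neg, eps = 0, 10, 42, 0.05
V[pos] += eps * np.sign(bpr_grad_pos_item(u, pos, neg))
```

Defense methods surveyed under AML, such as adversarial personalized ranking, train the model against exactly this kind of worst-case perturbation. For goal (ii), the GAN framework referenced above optimizes the standard minimax objective of Goodfellow et al. (2014), in which a generator G learns to approximate the data distribution while a discriminator D learns to distinguish real from generated samples:

```latex
\min_G \max_D \, V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)} \bigl[ \log D(x) \bigr]
  + \mathbb{E}_{z \sim p_z(z)} \bigl[ \log \bigl( 1 - D(G(z)) \bigr) \bigr]
```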
Related papers
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large language models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses an LLM to create item embeddings that bolster the performance of sequential recommender systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- RAGEval: Scenario Specific RAG Evaluation Dataset Generation Framework [69.4501863547618]
This paper introduces RAGEval, a framework designed to assess RAG systems across diverse scenarios.
With a focus on factual accuracy, we propose three novel metrics: Completeness, Hallucination, and Irrelevance.
Experimental results show that RAGEval outperforms zero-shot and one-shot methods in terms of clarity, safety, conformity, and richness of generated samples.
arXiv Detail & Related papers (2024-08-02T13:35:11Z)
- Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems [24.60223863559958]
We propose a novel attack framework named R-Trojan, which formulates the attack objectives as an optimization problem and adopts a tailored transformer-based generative adversarial network (GAN) to solve it.
Experiments on real-world datasets demonstrate that R-Trojan greatly outperforms state-of-the-art attack methods on various victim RSs under black-box settings.
arXiv Detail & Related papers (2024-02-14T08:56:41Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations (a toy estimator is sketched below).
Our analysis empirically demonstrates how adversarial inputs can compromise the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
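One plausible reading of the Adversarial Rate above (an assumption for illustration, not the paper's formal definition) is the fraction of inputs whose predicted label flips under a bounded perturbation; the victim model and attack below are toys.

```python
import numpy as np

def adversarial_rate(model, inputs, perturb, budget):
    """Fraction of inputs whose decision flips under a budgeted perturbation."""
    flipped = sum(model(perturb(x, budget)) != model(x) for x in inputs)
    return flipped / len(inputs)

# Toy victim: thresholded linear score. Toy attack: a signed step that pushes
# the score toward the opposite class.
w = np.array([0.5, -1.0, 0.25])
model = lambda x: int(x @ w > 0)
perturb = lambda x, eps: x - eps * np.sign(w) * (2 * model(x) - 1)

rng = np.random.default_rng(3)
xs = rng.normal(size=(200, 3))
print(f"adversarial rate at eps=0.3: {adversarial_rate(model, xs, perturb, 0.3):.2f}")
```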
- Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly [79.07074710460012]
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention.
An increasing number of transfer-based methods have been developed to fool black-box DNN models.
We establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods.
arXiv Detail & Related papers (2023-11-02T15:35:58Z)
- Enhancing ML-Based DoS Attack Detection Through Combinatorial Fusion Analysis [2.7973964073307265]
Mitigating Denial-of-Service (DoS) attacks is vital for online service security and availability.
We suggest an innovative method, combinatorial fusion, which combines multiple ML models using advanced algorithms.
Our findings emphasize the potential of this approach to improve DoS attack detection and contribute to stronger defense mechanisms.
arXiv Detail & Related papers (2023-10-02T02:21:48Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in the internet-of-things (IoT)-based smart grid.
Adversarial distortion injected into the power signal can greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessments for ML-based smart grid applications (MLsgAPPs) in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Machine Learning Security against Data Poisoning: Are We There Yet? [23.809841593870757]
This article reviews data poisoning attacks that compromise the training data used to learn machine learning models.
We discuss how to mitigate these attacks using basic security principles, or by deploying ML-oriented defensive mechanisms.
arXiv Detail & Related papers (2022-04-12T17:52:09Z)
- Supervised Advantage Actor-Critic for Recommender Systems [76.7066594130961]
We propose a negative sampling strategy for training the RL component and combine it with supervised sequential learning.
Based on the sampled (negative) actions (items), we can calculate the "advantage" of a positive action over the average case (one plausible formulation is sketched below).
We instantiate the resulting methods, SNQN and SA2C, with four state-of-the-art sequential recommendation models and conduct experiments on two real-world datasets.
arXiv Detail & Related papers (2021-11-05T12:51:15Z)
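One plausible formulation of the "advantage" described above (an assumption, not necessarily the paper's exact estimator): the Q-value of the observed positive action minus the mean Q-value over the sampled negative actions.

```python
import numpy as np

rng = np.random.default_rng(2)
q_values = rng.normal(size=1000)       # toy Q(s, a) over all candidate items
pos_item = 7                           # the observed (positive) action
neg_items = rng.choice(1000, size=32, replace=False)  # sampled negatives

# Advantage of the positive action over the average sampled case; a positive
# value means the observed item outscores the negatives on average.
advantage = q_values[pos_item] - q_values[neg_items].mean()
```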
- ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems [7.351477761427584]
We study the potential vulnerabilities of machine learning applied in cyber-physical systems.
We propose Constrained Adversarial Machine Learning (ConAML), which generates adversarial examples that satisfy the intrinsic constraints of the physical systems (a toy sketch of the idea follows this list).
arXiv Detail & Related papers (2020-03-12T05:59:56Z)
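As flagged in the ConAML entry above, here is a toy, projection-based sketch of constrained adversarial example generation. It is not ConAML's actual algorithm: the loss, victim model, and zero-sum constraint below are illustrative assumptions standing in for the intrinsic constraints of a physical system.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss_grad(x, w):
    """Gradient of a toy squared loss (x @ w)**2 with respect to x."""
    return 2.0 * (x @ w) * w

def project_zero_sum(x_adv, x_orig):
    """Keep the total of the measurements unchanged (a stand-in for a
    physical conservation law the perturbed signal must still satisfy)."""
    delta = x_adv - x_orig
    return x_orig + (delta - delta.mean())

x = rng.normal(size=8)       # original sensor readings (toy data)
w = rng.normal(size=8)       # weights of a toy linear victim model
x_adv, step = x.copy(), 0.01
for _ in range(20):
    x_adv = x_adv + step * np.sign(loss_grad(x_adv, w))  # ascent on the loss
    x_adv = project_zero_sum(x_adv, x)                   # re-impose constraint

assert np.isclose(x_adv.sum(), x.sum())  # the constraint is preserved
```

Unlike unconstrained attacks on images, a perturbation that violates the system's physical laws would be trivially detectable, which is why the projection step matters.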