Securing Visually-Aware Recommender Systems: An Adversarial Image
Reconstruction and Detection Framework
- URL: http://arxiv.org/abs/2306.07992v1
- Date: Sun, 11 Jun 2023 19:59:35 GMT
- Title: Securing Visually-Aware Recommender Systems: An Adversarial Image
Reconstruction and Detection Framework
- Authors: Minglei Yin, Bin Liu, Neil Zhenqiang Gong, Xin Li
- Abstract summary: Visually-aware recommendation systems (VARS) are vulnerable to item-image adversarial attacks.
In this paper, we propose an adversarial image reconstruction and detection framework to secure VARS.
Our proposed method can simultaneously (1) secure VARS from adversarial attacks characterized by local perturbations by image reconstruction based on global vision transformers; and (2) accurately detect adversarial examples using a novel contrastive learning approach.
- Score: 41.680028677031316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With rich visual data, such as images, becoming readily associated with
items, visually-aware recommendation systems (VARS) have been widely used in
different applications. Recent studies have shown that VARS are vulnerable to
item-image adversarial attacks, which add human-imperceptible perturbations to
the clean images associated with those items. Attacks on VARS pose new security
challenges to a wide range of applications such as e-Commerce and social
networks where VARS are widely used. How to secure VARS from such adversarial
attacks becomes a critical problem. Currently, there is still a lack of
systematic study on how to design secure defense strategies against visual
attacks on VARS. In this paper, we attempt to fill this gap by proposing an
adversarial image reconstruction and detection framework to secure VARS. Our
proposed method can simultaneously (1) secure VARS from adversarial attacks
characterized by local perturbations by image reconstruction based on global
vision transformers; and (2) accurately detect adversarial examples using a
novel contrastive learning approach. Meanwhile, our framework is designed to
serve as both a filter and a detector, so that the two components can be
jointly trained to improve the flexibility of our defense strategy against a
variety of attacks and VARS models. We have conducted extensive experimental
studies with two popular
attack methods (FGSM and PGD). Our experimental results on two real-world
datasets show that our defense strategy against visual attacks is effective and
outperforms existing methods on different attacks. Moreover, our method can
detect adversarial examples with high accuracy.
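The abstract names two defense components but gives no architectural detail: a reconstruction filter built on global vision transformers and a contrastive-learning detector. The following PyTorch sketch illustrates how such components could be wired together; it is not the authors' implementation, and the transformer-autoencoder layout, patch size, embedding width, and margin-based contrastive loss are all assumptions made for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ViTReconstructionFilter(nn.Module):
    """Hypothetical global-attention autoencoder: every patch attends to every
    other patch, so a locally concentrated perturbation can be smoothed out
    using global image context."""
    def __init__(self, img_size=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.to_pixels = nn.Linear(dim, 3 * patch * patch)

    def forward(self, x):
        b, _, h, w = x.shape
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        tokens = self.encoder(tokens)                 # global self-attention
        patches = self.to_pixels(tokens)              # (b, n_tokens, 3*p*p)
        return F.fold(patches.transpose(1, 2), (h, w),
                      kernel_size=self.patch, stride=self.patch)

def contrastive_detector_loss(z_clean, z_adv, margin=1.0):
    """Margin-based contrastive objective (an assumed stand-in for the paper's
    loss): clean embeddings are pulled together, adversarial embeddings are
    pushed at least `margin` away from clean ones."""
    z_clean = F.normalize(z_clean, dim=1)
    z_adv = F.normalize(z_adv, dim=1)
    pos = (z_clean.unsqueeze(1) - z_clean.unsqueeze(0)).pow(2).sum(-1).mean()
    neg = (z_clean.unsqueeze(1) - z_adv.unsqueeze(0)).pow(2).sum(-1).add(1e-8).sqrt()
    return pos + F.relu(margin - neg).pow(2).mean()

At inference time, a detector trained with such a loss could embed an incoming item image and flag it as adversarial when it falls outside the clean cluster, while the filter passes the reconstructed image on to the downstream VARS model.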
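The experiments use FGSM and PGD, two standard gradient-based attacks. A minimal PyTorch sketch of both is given below; model, image, and label are hypothetical stand-ins for a victim recommender's image pathway and its supervision signal, and the epsilon and step settings are illustrative defaults rather than values from the paper.

import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=8 / 255):
    """Single-step Fast Gradient Sign Method: one signed-gradient step."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # stand-in attack objective
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

def pgd(model, image, label, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected Gradient Descent: iterated FGSM projected onto an L-inf ball."""
    adv = image.clone().detach()
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = torch.min(torch.max(adv, image - eps), image + eps).clamp(0, 1)  # project
    return adv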
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z) - Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z) - BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive
Learning [85.2564206440109]
This paper reveals a practical threat: backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z) - Recent improvements of ASR models in the face of adversarial attacks [28.934863462633636]
Speech Recognition models are vulnerable to adversarial attacks.
We show that the relative strengths of different attack algorithms vary considerably when changing the model architecture.
We release our source code as a package to help future research evaluate attacks and defenses.
arXiv Detail & Related papers (2022-03-29T22:40:37Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Deep Bayesian Image Set Classification: A Defence Approach against
Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) can be fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification framework as a defence against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z) - Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden the robustness of models against adversarial attacks.
We propose a novel method that adds extra noise and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z) - AdvFoolGen: Creating Persistent Troubles for Deep Classifiers [17.709146615433458]
We present a new black-box attack termed AdvFoolGen, which can generate attack images from the same feature space as that of natural images.
We demonstrate the effectiveness and robustness of our attack in the face of state-of-the-art defense techniques.
arXiv Detail & Related papers (2020-07-20T21:27:41Z) - Adversarial Attacks and Detection on Reinforcement Learning-Based
Interactive Recommender Systems [47.70973322193384]
Adversarial attacks pose significant challenges, as detecting them at an early stage is difficult.
We propose attack-agnostic detection for reinforcement learning-based interactive recommendation systems.
We first craft adversarial examples to show their diverse distributions and then augment recommendation systems by detecting potential attacks.
arXiv Detail & Related papers (2020-06-14T15:41:47Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.