Toward Robust Recommendation via Real-time Vicinal Defense
- URL: http://arxiv.org/abs/2309.17278v1
- Date: Fri, 29 Sep 2023 14:30:05 GMT
- Title: Toward Robust Recommendation via Real-time Vicinal Defense
- Authors: Yichang Xu, Chenwang Wu and Defu Lian
- Abstract summary: We propose a general method, Real-time Vicinal Defense (RVD), which leverages neighboring training data to fine-tune the model before making a recommendation for each user.
RVD effectively mitigates targeted poisoning attacks across various models without sacrificing accuracy.
- Score: 32.69838472574848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems have been shown to be vulnerable to poisoning attacks,
where malicious data is injected into the dataset to cause the recommender
system to provide biased recommendations. To defend against such attacks,
various robust learning methods have been proposed. However, most of these methods are
model-specific or attack-specific, which limits their generality, while other
methods, such as adversarial training, are oriented toward evasion attacks and
therefore offer only weak defense against poisoning attacks.
In this paper, we propose a general method, Real-time Vicinal Defense (RVD),
which leverages neighboring training data to fine-tune the model before making
a recommendation for each user. RVD operates in the inference phase to ensure the
robustness of each specific sample in real time, so there is no need to change
the model structure or training process, making it more practical. Extensive
experimental results demonstrate that RVD effectively mitigates targeted
poisoning attacks across various models without sacrificing accuracy. Moreover,
the defensive effect can be further amplified when our method is combined with
other strategies.
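The abstract describes RVD only at a high level. As a rough illustration of the inference-time idea (briefly fine-tune a copy of the trained recommender on the interactions of the target user's neighbors before scoring items for that user), here is a minimal PyTorch-style sketch. The helper names (find_vicinal_users, batch_for, score_all_items), the neighborhood definition, and all hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F

def recommend_with_rvd(model, interactions, user_id, top_k=10,
                       n_neighbors=20, steps=5, lr=1e-3):
    """Sketch of Real-time Vicinal Defense (RVD) at inference time.

    Before scoring items for `user_id`, a copy of the trained recommender
    is briefly fine-tuned on the interactions of that user's nearest
    neighbors (the "vicinal" training data), so a globally poisoned model
    is pulled back toward the local, presumably cleaner, neighborhood.
    All helpers and hyperparameters here are illustrative guesses.
    """
    # Work on a copy so per-user fine-tuning never touches the base model.
    local_model = copy.deepcopy(model)
    optimizer = torch.optim.Adam(local_model.parameters(), lr=lr)

    # Hypothetical helper: neighbors by interaction similarity
    # (e.g., cosine over rating vectors); the paper's definition may differ.
    neighbor_ids = find_vicinal_users(interactions, user_id, n_neighbors)
    # Hypothetical helper: (user, item, label) tensors for the neighbors,
    # with labels as float implicit/explicit feedback.
    users, items, labels = interactions.batch_for(neighbor_ids)

    local_model.train()
    for _ in range(steps):
        optimizer.zero_grad()
        scores = local_model(users, items)
        loss = F.binary_cross_entropy_with_logits(scores, labels)
        loss.backward()
        optimizer.step()

    # Recommend for the target user with the locally adapted model.
    local_model.eval()
    with torch.no_grad():
        all_scores = local_model.score_all_items(user_id)  # hypothetical method
    return torch.topk(all_scores, top_k).indices
```

Since only a copy of the model is adapted and then discarded, the base model's structure and training pipeline are left unchanged, which is the practicality argument the abstract makes.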
Related papers
- Optimal Zero-Shot Detector for Multi-Armed Attacks [30.906457338347447]
This paper explores a scenario in which a malicious actor employs a multi-armed attack strategy to manipulate data samples.
Our central objective is to protect the data by detecting any alterations to the input with a set of detectors.
We derive an innovative information-theoretic defense approach that optimally aggregates the decisions made by these detectors.
arXiv Detail & Related papers (2024-02-24T13:08:39Z) - Unsupervised Adversarial Detection without Extra Model: Training Loss
Should Change [24.76524262635603]
Traditional approaches to adversarial training and supervised detection rely on prior knowledge of attack types and access to labeled training data.
We propose new training losses to reduce useless features and the corresponding detection method without prior knowledge of adversarial attacks.
The proposed method performs well on all tested attack types, and its false positive rates are even lower than those of methods specialized for particular attack types.
arXiv Detail & Related papers (2023-08-07T01:41:21Z) - AdvDiff: Generating Unrestricted Adversarial Examples using Diffusion Models [7.406040859734522]
Unrestricted adversarial attacks present a serious threat to deep learning models and adversarial defense techniques.
Previous attack methods often directly inject Projected Gradient Descent (PGD) gradients into the sampling of generative models.
We propose a new method, called AdvDiff, to generate unrestricted adversarial examples with diffusion models.
arXiv Detail & Related papers (2023-07-24T03:10:02Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - MIXPGD: Hybrid Adversarial Training for Speech Recognition Systems [18.01556863687433]
We propose mixPGD adversarial training method to improve robustness of the model for ASR systems.
In standard adversarial training, adversarial samples are generated by leveraging supervised or unsupervised methods.
We merge the capabilities of both supervised and unsupervised approaches in our method to generate new adversarial samples which aid in improving model robustness.
arXiv Detail & Related papers (2023-03-10T07:52:28Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features across arbitrary attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Guided Adversarial Attack for Evaluating and Enhancing Adversarial
Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks (a generic single-attack AT loop is sketched after this list).
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
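For context on the single-attack adversarial training that the ADT entry above (and the RVD abstract) contrasts against, here is a generic, minimal PGD-based adversarial training loop. It is a textbook sketch with placeholder model, data loader, and hyperparameters, not the ADT method or any of the listed papers' implementations.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples for a batch (x, y)."""
    # Random start inside the eps-ball, clipped to the valid input range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of standard single-attack (PGD) adversarial training."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # fixed attack: plain PGD
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # train on the adversarial batch
        loss.backward()
        optimizer.step()
```

Because the loop commits to one attack, robustness against unseen attacks is not guaranteed, which is the limitation ADT targets and one reason such training transfers poorly to poisoning settings like the one RVD addresses.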
This list is automatically generated from the titles and abstracts of the papers on this site.