Towards Robust Recommender Systems via Triple Cooperative Defense
- URL: http://arxiv.org/abs/2210.13762v1
- Date: Tue, 25 Oct 2022 04:45:43 GMT
- Title: Towards Robust Recommender Systems via Triple Cooperative Defense
- Authors: Qingyang Wang, Defu Lian, Chenwang Wu, and Enhong Chen
- Abstract summary: Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations.
We propose a general framework, Triple Cooperative Defense (TCD), which improves model robustness through the co-training of three models.
Results show that TCD's robustness improvement significantly outperforms the baselines.
- Score: 63.64651805384898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are often susceptible to well-crafted fake profiles,
leading to biased recommendations. The wide application of recommender systems
makes studying defenses against such attacks necessary. Among existing defense
methods, data-processing-based methods inevitably exclude normal samples, while
model-based methods struggle to achieve both generalization and robustness.
Considering these limitations, we suggest integrating data processing with
robust model design and propose a general framework, Triple Cooperative Defense (TCD),
which improves model robustness through the co-training of three
models. Specifically, in each round of training, we sequentially use the
high-confidence prediction ratings (consistent ratings) of any two models as
auxiliary training data for the remaining model, and the three models
cooperatively improve recommendation robustness. Notably, TCD adds pseudo-label
data instead of deleting abnormal data, so normal data are never discarded,
and the cooperative training of the three models also benefits
model generalization. Through extensive experiments with five poisoning attacks
on three real-world datasets, we show that TCD's robustness improvement
significantly outperforms that of the baselines. It is worth mentioning that TCD
also benefits model generalization.
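
The co-training round described in the abstract can be illustrated with a minimal sketch. This is only an illustration under assumptions not taken from the paper: the plain matrix-factorization recommender, the agreement threshold `tol`, and the synthetic data are hypothetical stand-ins, and the paper's actual models, rating-consistency criterion, and sampling of candidate user-item pairs may differ.

```python
# Minimal sketch of a TCD-style co-training round (assumptions noted above).
import numpy as np

class MF:
    """Toy matrix-factorization recommender trained with SGD."""
    def __init__(self, n_users, n_items, k=16, lr=0.01, reg=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.P = 0.1 * rng.standard_normal((n_users, k))
        self.Q = 0.1 * rng.standard_normal((n_items, k))
        self.lr, self.reg = lr, reg

    def predict(self, u, i):
        return self.P[u] @ self.Q[i]

    def fit(self, triples, epochs=1):
        """triples: iterable of (user, item, rating)."""
        for _ in range(epochs):
            for u, i, r in triples:
                pu, qi = self.P[u].copy(), self.Q[i].copy()
                err = r - pu @ qi
                self.P[u] += self.lr * (err * qi - self.reg * pu)
                self.Q[i] += self.lr * (err * pu - self.reg * qi)

def consistent_pseudo_labels(m1, m2, candidates, tol=0.1):
    """Keep (user, item) pairs where two models agree within `tol`;
    their averaged prediction becomes a pseudo rating for the third model."""
    pseudo = []
    for u, i in candidates:
        p1, p2 = m1.predict(u, i), m2.predict(u, i)
        if abs(p1 - p2) <= tol:  # high-confidence "consistent rating"
            pseudo.append((u, i, (p1 + p2) / 2.0))
    return pseudo

def tcd_round(models, train_data, unlabeled_pairs):
    """One cooperative round: each model is refined on the observed ratings
    plus pseudo ratings on which the other two models agree."""
    for idx, model in enumerate(models):
        peers = [m for j, m in enumerate(models) if j != idx]
        extra = consistent_pseudo_labels(peers[0], peers[1], unlabeled_pairs)
        model.fit(train_data + extra, epochs=1)

if __name__ == "__main__":
    # Synthetic data, for illustration only.
    n_users, n_items = 50, 40
    rng = np.random.default_rng(1)
    train = [(rng.integers(n_users), rng.integers(n_items), rng.integers(1, 6))
             for _ in range(500)]
    unlabeled = [(rng.integers(n_users), rng.integers(n_items)) for _ in range(200)]
    models = [MF(n_users, n_items, seed=s) for s in range(3)]
    for m in models:
        m.fit(train, epochs=3)      # warm up each model on observed ratings
    for _ in range(5):
        tcd_round(models, train, unlabeled)
```

Because pseudo labels are only added where two independently initialized models agree, each model sees extra clean-looking supervision rather than having suspect profiles deleted, which is the behavior the abstract attributes to TCD.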
Related papers
- Few-shot Model Extraction Attacks against Sequential Recommender Systems [2.372285091200233]
This study introduces a novel few-shot model extraction framework against sequential recommenders.
It is designed to construct a superior surrogate model with the utilization of few-shot data.
Experiments on three datasets show that the proposed few-shot model extraction framework yields superior surrogate models.
arXiv Detail & Related papers (2024-11-18T15:57:14Z)
- Adversarial Robustness of Distilled and Pruned Deep Learning-based Wireless Classifiers [0.8348593305367524]
Deep learning techniques for automatic modulation classification (AMC) of wireless signals are vulnerable to adversarial attacks.
This poses a severe security threat to the DL-based wireless systems, specifically for edge applications of AMC.
We address the joint problem of developing optimized DL models that are also robust against adversarial attacks.
arXiv Detail & Related papers (2024-04-11T06:15:01Z)
- Securing Recommender System via Cooperative Training [78.97620275467733]
We propose a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance data.
Considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems.
We put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process.
arXiv Detail & Related papers (2024-01-23T12:07:20Z)
- Training-based Model Refinement and Representation Disagreement for Semi-Supervised Object Detection [8.096382537967637]
Semi-supervised object detection (SSOD) aims to improve the performance and generalization of existing object detectors.
Recent SSOD methods are still challenged by inadequate model refinement using the classical exponential moving average (EMA) strategy.
This paper proposes a novel training-based model refinement stage and a simple yet effective representation disagreement (RD) strategy.
arXiv Detail & Related papers (2023-07-25T18:26:22Z)
- Towards More Robust and Accurate Sequential Recommendation with Cascade-guided Adversarial Training [54.56998723843911]
Two properties unique to the nature of sequential recommendation models may impair their robustness.
We propose Cascade-guided Adversarial training, a new adversarial training procedure that is specifically designed for sequential recommendation models.
arXiv Detail & Related papers (2023-04-11T20:55:02Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Are Sample-Efficient NLP Models More Robust? [90.54786862811183]
We investigate the relationship between sample efficiency (the amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation).
We find that higher sample efficiency is only correlated with better average OOD robustness on some modeling interventions and tasks, but not others.
These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent.
arXiv Detail & Related papers (2022-10-12T17:54:59Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)