Towards More Robust and Accurate Sequential Recommendation with
Cascade-guided Adversarial Training
- URL: http://arxiv.org/abs/2304.05492v2
- Date: Tue, 16 Jan 2024 18:37:59 GMT
- Title: Towards More Robust and Accurate Sequential Recommendation with
Cascade-guided Adversarial Training
- Authors: Juntao Tan, Shelby Heinecke, Zhiwei Liu, Yongjun Chen, Yongfeng Zhang,
Huan Wang
- Abstract summary: Two properties unique to the nature of sequential recommendation models may impair their robustness.
We propose Cascade-guided Adversarial training, a new adversarial training procedure that is specifically designed for sequential recommendation models.
- Score: 54.56998723843911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential recommendation models, which learn from chronological
user-item interactions, outperform traditional recommendation models in many
settings. Despite the success of sequential recommendation models, their
robustness has recently come into question. Two properties unique to the nature
of sequential recommendation models may impair their robustness - the cascade
effects induced during training and the model's tendency to rely too heavily on
temporal information. To address these vulnerabilities, we propose
Cascade-guided Adversarial training, a new adversarial training procedure that
is specifically designed for sequential recommendation models. Our approach
harnesses the intrinsic cascade effects present in sequential modeling to
produce strategic adversarial perturbations to item embeddings during training.
Experiments on training state-of-the-art sequential models on four public
datasets from different domains show that our training approach produces
superior model ranking accuracy and superior model robustness to real item
replacement perturbations when compared to both standard model training and
generic adversarial training.
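The abstract above gives only the high-level idea; the following is a minimal sketch of how a cascade-weighted adversarial perturbation of item embeddings might look. The linear cascade weighting and the function name are assumptions for illustration (the paper's exact scheme may differ); the perturbation direction follows the standard sign-of-gradient (FGSM-style) recipe.

```python
import numpy as np

def cascade_guided_perturbation(item_embeddings, grad, epsilon=0.01):
    """Perturb the embeddings of a user's interaction sequence in the
    direction of the loss gradient, scaled by a cascade weight.

    Earlier items in the sequence influence more downstream predictions
    (the cascade effect), so they receive larger perturbations. The
    linear weight schedule here is a hypothetical choice.

    item_embeddings: (seq_len, dim) embeddings of the input sequence
    grad:            (seq_len, dim) gradient of the training loss w.r.t. them
    """
    seq_len = item_embeddings.shape[0]
    positions = np.arange(seq_len)                    # 0 = oldest item
    cascade_weight = (seq_len - positions) / seq_len  # oldest -> 1.0, newest -> 1/seq_len
    direction = np.sign(grad)                         # FGSM-style direction
    return item_embeddings + epsilon * cascade_weight[:, None] * direction
```

During adversarial training, the perturbed embeddings would replace the clean ones for an extra forward/backward pass, so the model learns to rank correctly even when early, high-influence items are slightly corrupted.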
Related papers
- Sustainable Self-evolution Adversarial Training [51.25767996364584]
We propose a Sustainable Self-Evolution Adversarial Training (SSEAT) framework for adversarial training defense models.
We introduce a continual adversarial defense pipeline to realize learning from various kinds of adversarial examples.
We also propose an adversarial data replay module to better select more diverse and key relearning data.
arXiv Detail & Related papers (2024-12-03T08:41:11Z)
- Towards Efficient Modelling of String Dynamics: A Comparison of State Space and Koopman based Deep Learning Methods [8.654571696634825]
We compare State Space Models (SSMs) and Koopman-based deep learning methods for modelling the dynamics of both linear and non-linear stiff strings.
Our findings indicate that our proposed Koopman-based model performs as well as or better than other existing approaches in non-linear cases for long-sequence modelling.
This research contributes insights into the physical modelling of dynamical systems by offering a comparative overview of these and previous methods and introducing innovative strategies for model improvement.
arXiv Detail & Related papers (2024-08-29T15:55:27Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormaliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- SADT: Combining Sharpness-Aware Minimization with Self-Distillation for Improved Model Generalization [4.365720395124051]
Methods for improving deep neural network training times and model generalizability consist of various data augmentation, regularization, and optimization approaches.
This work jointly considers two recent training strategies that address model generalizability: sharpness-aware minimization and self-distillation.
The experimental section of this work shows that SADT consistently outperforms previously published training strategies in model convergence time, test-time performance, and model generalizability.
arXiv Detail & Related papers (2022-11-01T07:30:53Z)
- Effective and Efficient Training for Sequential Recommendation using Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that models enhanced with our method can achieve performance exceeding or very close to that of the state-of-the-art BERT4Rec.
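The blurb above describes the training objective only abstractly; a minimal sketch of recency-based target sampling is given below. The geometric (exponential-decay) weighting and the function name are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def sample_target_position(seq_len, alpha=0.8, rng=None):
    """Sample a target position from a user's interaction sequence,
    favouring recent interactions.

    Position i (0 = oldest) is drawn with probability proportional to
    alpha ** (seq_len - 1 - i), so the most recent item gets weight 1
    and older items are exponentially down-weighted. The geometric
    schedule is a hypothetical choice.
    """
    rng = rng if rng is not None else np.random.default_rng()
    weights = alpha ** np.arange(seq_len - 1, -1, -1)  # oldest -> smallest
    probs = weights / weights.sum()
    return int(rng.choice(seq_len, p=probs))
```

The sampled position is used as the prediction target for that training example, so recent behaviour dominates the loss without discarding older interactions entirely.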
arXiv Detail & Related papers (2022-07-06T13:06:31Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage the substitute model to learn better and faster from the target model.
We introduce a task-driven, graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Self-Ensemble Adversarial Training for Improved Robustness [14.244311026737666]
Among all defense methods, adversarial training is the strongest strategy against various adversarial attacks.
Recent works mainly focus on developing new loss functions or regularizers, attempting to find the unique optimal point in the weight space.
We devise a simple but powerful Self-Ensemble Adversarial Training (SEAT) method that yields a robust classifier by averaging the weights of history models.
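Averaging the weights of history models can be sketched as follows. This uses an exponential moving average for concreteness; the abstract only says weights of history models are averaged, so the decay-based scheme and class name here are illustrative assumptions.

```python
import numpy as np

class SelfEnsemble:
    """Maintain a running average of model weights over training history;
    the averaged weights define the ensembled (more robust) classifier.

    Weights are represented as a dict mapping parameter names to arrays.
    """

    def __init__(self, decay=0.99):
        self.decay = decay
        self.avg = None  # averaged weights, initialised on first update

    def update(self, weights):
        """Fold the current model weights into the running average."""
        if self.avg is None:
            self.avg = {k: v.copy() for k, v in weights.items()}
        else:
            for k, v in weights.items():
                self.avg[k] = self.decay * self.avg[k] + (1 - self.decay) * v
```

Calling `update` once per training step (or epoch) keeps the averaged weights cheap to maintain; at evaluation time the averaged weights are loaded into the model instead of the latest ones.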
arXiv Detail & Related papers (2022-03-18T01:12:18Z)
- Regularizers for Single-step Adversarial Training [49.65499307547198]
We propose three types of regularizers that help to learn robust models using single-step adversarial training methods.
The regularizers mitigate the effect of gradient masking by harnessing properties that differentiate a robust model from a pseudo-robust model.
arXiv Detail & Related papers (2020-02-03T09:21:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.