Investigating the Robustness of Sequential Recommender Systems Against
Training Data Perturbations
- URL: http://arxiv.org/abs/2307.13165v2
- Date: Wed, 27 Dec 2023 13:41:16 GMT
- Title: Investigating the Robustness of Sequential Recommender Systems Against
Training Data Perturbations
- Authors: Filippo Betello, Federico Siciliano, Pushkar Mishra, Fabrizio
Silvestri
- Abstract summary: We introduce Finite Rank-Biased Overlap (FRBO), an enhanced similarity measure tailored explicitly to finite rankings.
We empirically investigate the impact of removing items at different positions within a temporally ordered sequence.
Our results demonstrate that removing items at the end of the sequence has a statistically significant impact on performance.
- Score: 9.463133630647569
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential Recommender Systems (SRSs) are widely employed to model user
behavior over time. However, their robustness in the face of perturbations in
training data remains a largely understudied yet critical issue. A fundamental
challenge emerges in previous studies aimed at assessing the robustness of
SRSs: the Rank-Biased Overlap (RBO) similarity measure is not well suited to
this task, as it is designed for infinite rankings of items and therefore shows
limitations in real-world scenarios. For instance, it fails to achieve a
perfect score of 1 for two identical finite-length rankings. To address this
challenge, we introduce a novel contribution: Finite Rank-Biased Overlap
(FRBO), an enhanced similarity measure tailored explicitly to finite rankings. This
innovation facilitates a more intuitive evaluation in practical settings. In
pursuit of our goal, we empirically investigate the impact of removing items at
different positions within a temporally ordered sequence. We evaluate two
distinct SRS models across multiple datasets, measuring their performance using
metrics such as Normalized Discounted Cumulative Gain (NDCG) and Rank List
Sensitivity. Our results demonstrate that removing items at the end of the
sequence has a statistically significant impact on performance, with NDCG
decreasing by up to 60%. Conversely, removing items from the beginning or middle
has no significant effect. These findings underscore the criticality of the
position of perturbed items in the training data. By spotlighting the
vulnerabilities inherent in current SRSs, we advocate for intensified research
efforts to fortify their robustness against adversarial perturbations.
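To make the RBO limitation concrete, the sketch below computes a truncated RBO over finite rankings and rescales it by its maximum attainable value so that identical lists score exactly 1. This is a minimal illustration, not the authors' reference implementation: the normalization by 1 - p^k is an assumption in the spirit of FRBO, and the helper names are hypothetical.

```python
# Minimal sketch of truncated RBO and a finite-length normalization (FRBO-style).
# Assumes rankings contain no duplicate items, as is typical for top-k lists.

def truncated_rbo(ranking_a, ranking_b, p=0.9):
    """Rank-Biased Overlap with the geometric sum truncated at the list length."""
    k = min(len(ranking_a), len(ranking_b))
    seen_a, seen_b = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        seen_a.add(ranking_a[d - 1])
        seen_b.add(ranking_b[d - 1])
        agreement = len(seen_a & seen_b) / d   # overlap of the two top-d prefixes
        score += (1 - p) * (p ** (d - 1)) * agreement
    return score

def finite_rbo(ranking_a, ranking_b, p=0.9):
    """Hypothetical finite-ranking normalization: divide by the maximum value
    the truncated sum can reach, so two identical rankings score exactly 1."""
    k = min(len(ranking_a), len(ranking_b))
    return truncated_rbo(ranking_a, ranking_b, p) / (1 - p ** k)

if __name__ == "__main__":
    top10 = list(range(10))               # two identical top-10 recommendation lists
    print(truncated_rbo(top10, top10))    # ~0.651: falls short of 1 despite identity
    print(finite_rbo(top10, top10))       # 1.0 after finite-length normalization
```

For two identical top-10 lists with p = 0.9, the truncated sum caps out at 1 - 0.9^10 ≈ 0.65, which is exactly the failure mode the abstract describes; the normalized variant returns 1.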
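The perturbation protocol can be sketched similarly: drop a contiguous block of interactions from the beginning, middle, or end of each user's temporally ordered history before training the SRS. The function below is an illustrative assumption (its name and the 20% drop fraction are not taken from the paper), not the authors' experimental code.

```python
from typing import List, Sequence

def perturb_sequence(seq: Sequence, position: str, fraction: float = 0.2) -> List:
    """Remove a fraction of items from the given position of a time-ordered sequence."""
    n_drop = max(1, int(len(seq) * fraction))
    items = list(seq)
    if position == "beginning":
        return items[n_drop:]
    if position == "middle":
        start = (len(items) - n_drop) // 2
        return items[:start] + items[start + n_drop:]
    if position == "end":
        return items[:-n_drop]           # the case the paper finds most damaging
    raise ValueError(f"unknown position: {position}")

# Example: one user's interaction history, ordered oldest -> newest.
history = [101, 102, 103, 104, 105, 106, 107, 108, 109, 110]
print(perturb_sequence(history, "end"))        # drops the most recent interactions
print(perturb_sequence(history, "beginning"))  # drops the oldest interactions
```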
Related papers
- Dissecting Deep RL with High Update Ratios: Combatting Value Divergence [21.282292112642747]
We show that deep reinforcement learning algorithms can retain their ability to learn without resetting network parameters.
We employ a simple unit-ball normalization that enables learning under large update ratios.
arXiv Detail & Related papers (2024-03-09T19:56:40Z)
- REValueD: Regularised Ensemble Value-Decomposition for Factorisable Markov Decision Processes [7.2129390689756185]
Discrete-action reinforcement learning algorithms often falter in tasks with high-dimensional discrete action spaces.
This study delves deep into the effects of value-decomposition, revealing that it amplifies target variance.
We introduce a regularisation loss that helps to mitigate the effects that exploratory actions in one dimension can have on the value of optimal actions in other dimensions.
Our novel algorithm, REValueD, tested on discretised versions of the DeepMind Control Suite tasks, showcases superior performance.
arXiv Detail & Related papers (2024-01-16T21:47:23Z)
- Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off [107.35833747750446]
Adversarial examples can be crafted by adding imperceptible perturbations to legitimate documents.
This vulnerability raises significant concerns about the reliability of neural ranking models (NRMs) and hinders their widespread deployment.
In this study, we establish theoretical guarantees regarding the effectiveness-robustness trade-off in NRMs.
arXiv Detail & Related papers (2023-12-16T05:38:39Z)
- Re-Evaluating LiDAR Scene Flow for Autonomous Driving [80.37947791534985]
Popular benchmarks for self-supervised LiDAR scene flow have unrealistic rates of dynamic motion, unrealistic correspondences, and unrealistic sampling patterns.
We evaluate a suite of top methods on a suite of real-world datasets.
We show that despite the emphasis placed on learning, most performance gains are caused by pre- and post-processing steps.
arXiv Detail & Related papers (2023-04-04T22:45:50Z)
- Locality-aware Attention Network with Discriminative Dynamics Learning for Weakly Supervised Anomaly Detection [0.8883733362171035]
We propose a Discriminative Dynamics Learning (DDL) method with two objective functions, i.e., dynamics ranking loss and dynamics alignment loss.
A Locality-aware Attention Network (LA-Net) is constructed to capture global correlations and re-calibrate the location preference across snippets, followed by a multilayer perceptron with causal convolution to obtain anomaly scores.
arXiv Detail & Related papers (2022-08-11T04:27:33Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- AdAUC: End-to-end Adversarial AUC Optimization Against Long-tail Problems [102.95119281306893]
We present an early trial to explore adversarial training methods to optimize AUC.
We reformulate the AUC optimization problem as a saddle point problem, where the objective becomes an instance-wise function.
Our analysis differs from existing studies in that the algorithm is asked to generate adversarial examples by calculating the gradient of a min-max problem.
arXiv Detail & Related papers (2022-06-24T09:13:39Z)
- Scale-Equivalent Distillation for Semi-Supervised Object Detection [57.59525453301374]
Recent Semi-Supervised Object Detection (SS-OD) methods are mainly based on self-training, generating hard pseudo-labels by a teacher model on unlabeled data as supervisory signals.
We analyze the challenges these methods meet with the empirical experiment results.
We introduce a novel approach, Scale-Equivalent Distillation (SED), which is a simple yet effective end-to-end knowledge distillation framework robust to large object size variance and class imbalance.
arXiv Detail & Related papers (2022-03-23T07:33:37Z)
- WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation Models [24.455665093145818]
We propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-$k$ mining, and fine-tuning.
WSLRec resolves the incompleteness problem by pre-training models on extra weak supervisions from model-free methods like BR and ItemCF, while resolving the inaccuracy problem by leveraging the top-$k$ mining to screen out reliable user-item relevance from weak supervisions for fine-tuning.
arXiv Detail & Related papers (2022-02-28T08:55:12Z)
- Enhancing Counterfactual Classification via Self-Training [9.484178349784264]
We propose a self-training algorithm which imputes outcomes with categorical values for finite unseen actions in observational data to simulate a randomized trial through pseudolabeling.
We demonstrate the effectiveness of the proposed algorithms on both synthetic and real datasets.
arXiv Detail & Related papers (2021-12-08T18:42:58Z)
- Cascaded Regression Tracking: Towards Online Hard Distractor Discrimination [202.2562153608092]
We propose a cascaded regression tracker with two sequential stages.
In the first stage, we filter out abundant easily-identified negative candidates.
In the second stage, a discrete sampling based ridge regression is designed to double-check the remaining ambiguous hard samples.
arXiv Detail & Related papers (2020-06-18T07:48:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.