RGRecSys: A Toolkit for Robustness Evaluation of Recommender Systems
- URL: http://arxiv.org/abs/2201.04399v1
- Date: Wed, 12 Jan 2022 10:32:53 GMT
- Title: RGRecSys: A Toolkit for Robustness Evaluation of Recommender Systems
- Authors: Zohreh Ovaisi, Shelby Heinecke, Jia Li, Yongfeng Zhang, Elena Zheleva,
Caiming Xiong
- Abstract summary: We propose a more holistic view of robustness for recommender systems that encompasses multiple dimensions.
We present a robustness evaluation toolkit, Robustness Gym for RecSys, that allows us to quickly and uniformly evaluate the robustness of recommender system models.
- Score: 100.54655931138444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robust machine learning is an increasingly important topic that focuses on
developing models resilient to various forms of imperfect data. Due to the
pervasiveness of recommender systems in online technologies, researchers have
carried out several robustness studies focusing on data sparsity and profile
injection attacks. Instead, we propose a more holistic view of robustness for
recommender systems that encompasses multiple dimensions - robustness with
respect to sub-populations, transformations, distributional disparity, attack,
and data sparsity. While there are several libraries that allow users to
compare different recommender system models, there is no software library for
comprehensive robustness evaluation of recommender system models under
different scenarios. As our main contribution, we present a robustness
evaluation toolkit, Robustness Gym for RecSys (RGRecSys --
https://www.github.com/salesforce/RGRecSys), that allows us to quickly and
uniformly evaluate the robustness of recommender system models.
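The abstract above enumerates the robustness dimensions RGRecSys covers; the repository linked there documents the toolkit's actual interface. Purely as an illustrative sketch of slice-based robustness evaluation (this is not the RGRecSys API; the data is simulated and every column, variable, and function name below is hypothetical), one can compare a recommender's error on the full test set, on demographic sub-populations, and on users with sparse training histories:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000

# Simulated evaluation table standing in for a trained recommender's test-time
# output: one row per (user, item) pair with the true rating and the model's
# prediction.  All column names are made up for illustration only.
test = pd.DataFrame({
    "user_age_group": rng.choice(["18-25", "26-40", "40+"], size=n),
    "user_num_train_interactions": rng.integers(1, 200, size=n),
    "rating": rng.integers(1, 6, size=n).astype(float),
})
# Pretend the model is slightly worse for users with little training history.
noise_scale = 0.8 + 2.0 / np.sqrt(test["user_num_train_interactions"])
test["prediction"] = test["rating"] + rng.normal(0.0, noise_scale)

def rmse(df: pd.DataFrame) -> float:
    """Root-mean-square error of the predictions in a data slice."""
    return float(np.sqrt(np.mean((df["rating"] - df["prediction"]) ** 2)))

# Overall accuracy on the full test set.
print(f"overall RMSE: {rmse(test):.3f}")

# Sub-population robustness: the same metric per demographic slice.
for group, df in test.groupby("user_age_group"):
    print(f"RMSE for age group {group}: {rmse(df):.3f}")

# Sparsity robustness: the same metric restricted to cold-start-like users.
cold = test[test["user_num_train_interactions"] < 10]
print(f"RMSE for users with <10 training interactions: {rmse(cold):.3f}")
```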
Related papers
- Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective [111.58315434849047]
The robustness of neural information retrieval (IR) models has garnered significant attention.
We view the robustness of IR as a multifaceted concept, emphasizing its necessity against adversarial attacks, out-of-distribution (OOD) scenarios, and performance variance.
We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models.
arXiv Detail & Related papers (2024-07-09T16:07:01Z)
- Towards Robust Recommendation: A Review and an Adversarial Robustness Evaluation Library [27.50051402580845]
We provide a comprehensive overview of the robustness of recommender systems.
In this survey, we categorize the robustness of recommender systems into adversarial robustness and non-adversarial robustness.
We discuss the current challenges in the field of recommender system robustness and potential future research directions.
arXiv Detail & Related papers (2024-04-27T09:44:56Z)
- Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima [54.06000767038741]
We analyze multimodal recommender systems from the novel perspective of flat local minima.
We propose a concise yet effective gradient strategy called Mirror Gradient (MG).
We find that the proposed MG can complement existing robust training methods and be easily extended to diverse advanced recommendation models.
arXiv Detail & Related papers (2024-02-17T12:27:30Z)
- EASRec: Elastic Architecture Search for Efficient Long-term Sequential Recommender Systems [82.76483989905961]
Current Sequential Recommender Systems (SRSs) suffer from computational and resource inefficiencies.
We develop the Elastic Architecture Search for Efficient Long-term Sequential Recommender Systems (EASRec).
EASRec introduces data-aware gates that leverage historical information from the input data batch to improve the performance of the recommendation network.
arXiv Detail & Related papers (2024-02-01T07:22:52Z)
- Model-free Reinforcement Learning with Stochastic Reward Stabilization for Recommender Systems [20.395091290715502]
The same user's feedback on the same item at different times can be random.
We design two reward stabilization frameworks that replace the direct feedback with a reward learned by a supervised model.
arXiv Detail & Related papers (2023-08-25T08:42:45Z)
- Improving Training Stability for Multitask Ranking Models in Recommender Systems [21.410278930639617]
We show how to improve the training stability of a real-world multitask ranking model for YouTube recommendations.
We propose a new algorithm to mitigate the limitations of existing solutions.
arXiv Detail & Related papers (2023-02-17T23:04:56Z)
- Recommendation Systems with Distribution-Free Reliability Guarantees [83.80644194980042]
We show how to return a set of items rigorously guaranteed to contain mostly good items.
Our procedure endows any ranking model with rigorous finite-sample control of the false discovery rate.
We evaluate our methods on the Yahoo! Learning to Rank and MSMarco datasets.
arXiv Detail & Related papers (2022-07-04T17:49:25Z)
- RobustBench: a standardized adversarial robustness benchmark [84.50044645539305]
A key challenge in benchmarking robustness is that its evaluation is often error-prone, leading to robustness overestimation.
We evaluate adversarial robustness with AutoAttack, an ensemble of white- and black-box attacks.
We analyze the impact of robustness on the performance on distribution shifts, calibration, out-of-distribution detection, fairness, privacy leakage, smoothness, and transferability.
arXiv Detail & Related papers (2020-10-19T17:06:18Z)
- How to compare adversarial robustness of classifiers from a global perspective [0.0]
Adversarial attacks undermine the reliability of and trust in machine learning models.
Point-wise measures for specific threat models are currently the most popular tool for comparing the robustness of classifiers.
In this work, we use recently proposed robustness curves to show that point-wise measures fail to capture important global properties (a toy sketch of such a curve follows this list).
arXiv Detail & Related papers (2020-04-22T22:07:49Z)
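The robustness-curve comparison in the final entry above lends itself to a small worked example. The sketch below is not code from that paper; it only relies on the standard fact that, for a linear classifier, the smallest L2 perturbation that flips a prediction equals the point's distance to the decision hyperplane, |w·x + b| / ||w||. Sweeping a budget eps then traces robust accuracy as a function of eps, the global view that a single point-wise measure at one fixed eps cannot capture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class Gaussian data and a fixed linear classifier f(x) = sign(w.x + b).
X = np.vstack([rng.normal(-1.0, 1.0, size=(500, 2)),
               rng.normal(+1.0, 1.0, size=(500, 2))])
y = np.concatenate([-np.ones(500), np.ones(500)])
w, b = np.array([1.0, 1.0]), 0.0

# For a linear classifier, the smallest L2 perturbation that changes the
# prediction of x is its distance to the hyperplane: |w.x + b| / ||w||.
margins = X @ w + b
dist_to_boundary = np.abs(margins) / np.linalg.norm(w)
correct = np.sign(margins) == y

def robust_accuracy(eps: float) -> float:
    """Fraction of points still classified correctly when an adversary may
    move each point by at most eps in L2 norm."""
    return float(np.mean(correct & (dist_to_boundary > eps)))

# Sampling several budgets traces the curve; a single eps would hide its shape.
for eps in [0.0, 0.25, 0.5, 1.0, 2.0]:
    print(f"eps={eps:4.2f}  robust accuracy={robust_accuracy(eps):.3f}")
```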