Adversarial Collaborative Filtering for Free
- URL: http://arxiv.org/abs/2308.13541v1
- Date: Sun, 20 Aug 2023 19:25:38 GMT
- Title: Adversarial Collaborative Filtering for Free
- Authors: Huiyuan Chen, Xiaoting Li, Vivian Lai, Chin-Chia Michael Yeh, Yujie
Fan, Yan Zheng, Mahashweta Das, Hao Yang
- Abstract summary: Collaborative Filtering (CF) has been successfully used to help users discover items of interest.
Existing methods suffer from noisy data, which negatively impacts the quality of recommendation.
We present Sharpness-aware Collaborative Filtering (SharpCF), a simple yet effective method that conducts adversarial training without extra computational cost over the base optimizer.
- Score: 27.949683060138064
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative Filtering (CF) has been successfully used to help users
discover items of interest. Nevertheless, existing CF methods suffer from noisy
data, which negatively impacts the quality of recommendation. To tackle this
problem, many prior studies leverage adversarial learning to regularize the
representations of users/items, which improves both generalizability and
robustness. These methods typically learn adversarial perturbations and model
parameters under a min-max optimization framework. However, two major drawbacks
remain: 1) existing methods lack theoretical guarantees for why adding
perturbations improves generalizability and robustness; 2) solving the min-max
optimization problem is time-consuming. Beyond updating the model parameters,
each iteration requires additional computations to update the perturbations,
making these methods hard to scale to industry-scale datasets.
In this paper, we present Sharpness-aware Collaborative Filtering (SharpCF), a
simple yet effective method that conducts adversarial training without extra
computational cost over the base optimizer. To achieve this goal, we first
revisit existing adversarial collaborative filtering and discuss its connection
with the recent Sharpness-aware Minimization. This analysis shows that
adversarial training in fact seeks model parameters lying in neighborhoods
around the optimal parameters that have uniformly low loss values, which yields
better generalizability. To reduce the computational overhead, SharpCF
introduces a novel trajectory loss that measures the alignment between current
weights and past weights. Experimental results on real-world datasets
demonstrate that SharpCF achieves superior performance with almost zero
additional computational cost compared to adversarial training.
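For context on the analysis described above, adversarial collaborative filtering and Sharpness-aware Minimization solve min-max problems of the same shape. The two objectives below restate the standard APR-style and SAM objectives in generic notation (\epsilon, \rho, and \lambda denote the perturbation radii and the adversarial weight); they are reproduced here only to illustrate the connection and are not equations quoted from this paper:

  Adversarial CF (APR-style):   \min_{\Theta} \; \mathcal{L}_{\mathrm{BPR}}(\Theta) \;+\; \lambda \max_{\|\Delta\|_2 \le \epsilon} \mathcal{L}_{\mathrm{BPR}}(\Theta + \Delta)

  Sharpness-aware Minimization: \min_{\theta} \; \max_{\|\delta\|_2 \le \rho} \mathcal{L}(\theta + \delta)

In both cases the inner maximization is what requires an extra gradient computation per training step; SharpCF's trajectory loss is intended to stand in for that inner step by comparing the current weights with weights from earlier in training. A minimal Python sketch of the corresponding training steps is given at the end of this page.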
Related papers
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., the margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness and, unlike existing data pruning strategies, is able to significantly improve model performance.
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
- Gradient constrained sharpness-aware prompt learning for vision-language models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs).
By analyzing the loss landscapes of the state-of-the-art method and vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates to both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted as Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp).
arXiv Detail & Related papers (2023-09-14T17:13:54Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Recommendation Unlearning via Influence Function [42.4931807753579]
We propose a new Influence Function-based Recommendation Unlearning (IFRU) framework, which efficiently updates the model without retraining.
IFRU achieves more than 250 times acceleration compared to retraining-based methods with recommendation performance comparable to full retraining.
arXiv Detail & Related papers (2023-07-05T09:42:51Z)
- Support Vector Machines with the Hard-Margin Loss: Optimal Training via Combinatorial Benders' Cuts [8.281391209717105]
We show how to train the hard-margin SVM model to global optimality.
We introduce an iterative sampling and decomposition algorithm that solves the problem.
arXiv Detail & Related papers (2022-07-15T18:21:51Z)
- RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model [29.057300578765663]
RoCourseNet is a training framework that jointly optimizes predictions and recourses that are robust to future data shifts.
We show that RoCourseNet consistently achieves more than 96% robust validity and outperforms state-of-the-art baselines by at least 10% in generating robust explanations.
arXiv Detail & Related papers (2022-06-01T18:18:18Z)
- DualCF: Efficient Model Extraction Attack from Counterfactual Explanations [57.46134660974256]
Cloud service providers have launched Machine-Learning-as-a-Service platforms to allow users to access large-scale cloud-based models via APIs.
The extra information exposed by counterfactual explanations inevitably makes the cloud models more vulnerable to extraction attacks.
We propose a novel simple yet efficient querying strategy to greatly enhance the querying efficiency to steal a classification model.
arXiv Detail & Related papers (2022-05-13T08:24:43Z)
- Sharpness-Aware Minimization for Efficiently Improving Generalization [36.87818971067698]
We introduce a novel, effective procedure for simultaneously minimizing loss value and loss sharpness.
Sharpness-Aware Minimization (SAM) seeks parameters that lie in neighborhoods having uniformly low loss.
We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets.
arXiv Detail & Related papers (2020-10-03T19:02:10Z)
- Neural Model-based Optimization with Right-Censored Observations [42.530925002607376]
Neural networks (NNs) have been demonstrated to work well at the core of model-based optimization procedures.
We show that our trained regression models achieve a better predictive quality than several baselines.
arXiv Detail & Related papers (2020-09-29T07:32:30Z)
- Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face the challenge of reduced generalization on unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
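To make the computational-cost argument in the abstract concrete, the sketch below contrasts a standard SAM update, which needs two forward/backward passes per step, with a single-pass update that adds a weight-trajectory alignment penalty. The sam_step function follows the published SAM algorithm; trajectory_step and its alignment term are only an illustrative guess at the idea sketched in the SharpCF abstract (the function names, the lam coefficient, and the snapshot schedule are assumptions, not the paper's exact loss).

import torch

def sam_step(model, loss_fn, batch, optimizer, rho=0.05):
    # Pass 1: gradient at the current weights.
    loss_fn(model, batch).backward()
    with torch.no_grad():
        params = [p for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
        eps = [rho * p.grad / (grad_norm + 1e-12) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)  # move to the worst-case point inside the rho-ball
    optimizer.zero_grad()
    # Pass 2: gradient at the perturbed weights -- this is the extra cost.
    loss_fn(model, batch).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)  # undo the perturbation before updating
    optimizer.step()
    optimizer.zero_grad()

def trajectory_step(model, loss_fn, batch, optimizer, past_weights, lam=0.1):
    # Single pass: base loss plus an alignment penalty between the current
    # weights and a stored snapshot of past weights (an illustrative stand-in
    # for SharpCF's trajectory loss, not the paper's exact formulation).
    loss = loss_fn(model, batch)
    align = sum(((p - q) ** 2).sum()
                for p, q in zip(model.parameters(), past_weights))
    (loss + lam * align).backward()
    optimizer.step()
    optimizer.zero_grad()

Here past_weights would be a detached copy of model.parameters() refreshed periodically (for example, once every few epochs), so the only overhead over the base optimizer is storing one extra set of weights.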