An Evaluation Study of Generative Adversarial Networks for Collaborative
Filtering
- URL: http://arxiv.org/abs/2201.01815v1
- Date: Wed, 5 Jan 2022 20:53:27 GMT
- Title: An Evaluation Study of Generative Adversarial Networks for Collaborative
Filtering
- Authors: Fernando Benjamín Pérez Maurera, Maurizio Ferrari Dacrema, Paolo Cremonesi
- Abstract summary: This work successfully replicates the results published in the original paper and discusses the impact of certain differences between the CFGAN framework and the model used in the original evaluation.
The work further expands the experimental analysis by comparing CFGAN against a selection of simple, well-known, and properly optimized baselines, observing that CFGAN is not consistently competitive against them despite its high computational cost.
- Score: 75.83628561622287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work explores the reproducibility of CFGAN. CFGAN and its family of
models (TagRec, MTPR, and CRGAN) learn to generate personalized and
fake-but-realistic rankings of preferences for top-N recommendations by using
previous interactions. This work successfully replicates the results published
in the original paper and discusses the impact of certain differences between
the CFGAN framework and the model used in the original evaluation. The absence
of random noise and the use of real user profiles as condition vectors leave
the generator prone to learning a degenerate solution in which the output
vector is identical to the input vector, thereby behaving essentially as a
simple autoencoder. The work further expands the experimental analysis by
comparing CFGAN against a selection of simple, well-known, and properly
optimized baselines, observing that CFGAN is not consistently competitive
against them despite its
high computational cost. To ensure the reproducibility of these analyses, this
work describes the experimental methodology and publishes all datasets and
source code.
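As a concrete illustration of the degenerate solution described in the abstract, the following minimal PyTorch sketch (hypothetical, not the authors' implementation or the exact CFGAN architecture) shows a generator whose only input is the real user profile used as the condition vector, with no random noise; reproducing that profile is then a trivial solution, which is the autoencoder-like behaviour mentioned above.

```python
# Hypothetical sketch (NOT the authors' code or the exact CFGAN architecture):
# a generator conditioned on the real user profile with no random noise input.
# Reproducing the condition vector is a trivial way to imitate real profiles,
# so training can collapse the model into a simple autoencoder.
import torch
import torch.nn as nn

class ConditionOnlyGenerator(nn.Module):
    def __init__(self, n_items: int, hidden: int = 256):
        super().__init__()
        # The only input is the user profile (the condition); there is no z.
        self.net = nn.Sequential(
            nn.Linear(n_items, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_items),
            nn.Sigmoid(),
        )

    def forward(self, user_profile: torch.Tensor) -> torch.Tensor:
        return self.net(user_profile)

n_items = 1000
# Toy implicit-feedback profiles: 4 users, each with roughly 5% of items seen.
profiles = (torch.rand(4, n_items) < 0.05).float()
generator = ConditionOnlyGenerator(n_items)
fake_profiles = generator(profiles)
# If training drives fake_profiles towards profiles, the generator adds nothing
# beyond reconstructing the observed interactions (autoencoder behaviour).
identity_gap = torch.mean((fake_profiles - profiles) ** 2)
print(identity_gap.item())
```

The sketch only mirrors the setup criticized in the abstract: without random noise and with the real profile as condition, the identity mapping is a valid minimizer for the generator.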
Related papers
- Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study [61.64685376882383]
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models.
This paper investigates the robustness of existing CLTR models in complex and diverse situations.
We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation.
arXiv Detail & Related papers (2024-04-04T10:54:38Z)
- Self-Consistent Decoding for More Factual Open Responses [28.184313177333642]
"Sample & Select" improves factuality by a 30% relative margin against decoders of DoLA, P-CRR, and S-CRR.
We collect human verifications of the generated summaries, confirming the factual superiority of our method.
arXiv Detail & Related papers (2024-03-01T17:31:09Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- GANs Settle Scores! [16.317645727944466]
We propose a unified variational approach to analyzing generator optimization.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
arXiv Detail & Related papers (2023-06-02T16:24:07Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Less is More: Mitigate Spurious Correlations for Open-Domain Dialogue Response Generation Models by Causal Discovery [52.95935278819512]
We conduct the first study of spurious correlations in open-domain response generation models, based on a corpus, CGDIALOG, curated in our work.
Inspired by causal discovery algorithms, we propose a novel model-agnostic method for training and inference of response generation models.
arXiv Detail & Related papers (2023-03-02T06:33:48Z)
- The effectiveness of factorization and similarity blending [0.0]
Collaborative Filtering (CF) is a technique that leverages past user preference data to identify behavioural patterns and exploit them to produce personalized recommendations.
We show that blending factorization-based and similarity-based approaches can lead to a significant error decrease (-9.4%) relative to stand-alone models; an illustrative, generic blending sketch is given after this list.
We propose a novel extension of a similarity model, SCSR, which consistently reduces the complexity of the original algorithm.
arXiv Detail & Related papers (2022-09-16T13:11:27Z)
- One-Shot Adaptation of GAN in Just One CLIP [51.188396199083336]
We present a novel single-shot GAN adaptation method through unified CLIP space manipulations.
Specifically, our model employs a two-step training strategy, starting with a reference image search in the source generator using CLIP-guided latent optimization.
We show that our model generates diverse outputs with the target texture and outperforms the baseline models both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-03-17T13:03:06Z)
- UQGAN: A Unified Model for Uncertainty Quantification of Deep Classifiers trained via Conditional GANs [9.496524884855559]
We present an approach to quantifying uncertainty for deep neural networks in image classification, based on generative adversarial networks (GANs).
Instead of shielding the entire in-distribution data with GAN generated OoD examples, we shield each class separately with out-of-class examples generated by a conditional GAN.
In particular, we improve over the OoD detection and FP detection performance of state-of-the-art GAN-training based classifiers.
arXiv Detail & Related papers (2022-01-31T14:42:35Z)
- Improving the quality of generative models through Smirnov transformation [1.3492000366723798]
We propose a novel activation function to be used as the output of the generator agent.
It is based on the Smirnov probabilistic transformation and is specifically designed to improve the quality of the generated data.
arXiv Detail & Related papers (2021-10-29T17:01:06Z)
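As referenced in the entry on factorization and similarity blending above, the following is a generic, illustrative sketch of blending two collaborative-filtering scorers. It is not the SCSR algorithm or the cited paper's exact blend; all names and the toy data below are made up for illustration. Predictions from a toy matrix-factorization model and a toy item-based similarity model are min-max scaled and combined with a convex weight.

```python
# Generic illustration of factorization + similarity blending (hypothetical,
# not the SCSR algorithm from the cited paper). Two toy score matrices are
# normalized and mixed with a convex weight alpha.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 8

# Toy matrix-factorization scores from random user/item latent factors.
U = rng.normal(size=(n_users, k))
V = rng.normal(size=(n_items, k))
mf_scores = U @ V.T                              # shape (n_users, n_items)

# Toy item-based similarity scores from a binary interaction matrix.
interactions = (rng.random((n_users, n_items)) < 0.1).astype(float)
norms = np.linalg.norm(interactions, axis=0, keepdims=True) + 1e-8
item_cols = interactions / norms                 # column-normalized items
item_sim = item_cols.T @ item_cols               # cosine item-item similarity
knn_scores = interactions @ item_sim             # shape (n_users, n_items)

def blend(a: np.ndarray, b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Convex combination of two min-max scaled score matrices."""
    def scale(x: np.ndarray) -> np.ndarray:
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return alpha * scale(a) + (1.0 - alpha) * scale(b)

blended = blend(mf_scores, knn_scores, alpha=0.6)
top_n = np.argsort(-blended, axis=1)[:, :10]     # top-10 items per user
```

In practice the blending weight would be tuned on a validation split; the point here is only the mechanics of combining two score matrices for top-N ranking.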