Sampling-Decomposable Generative Adversarial Recommender
- URL: http://arxiv.org/abs/2011.00956v1
- Date: Mon, 2 Nov 2020 13:19:10 GMT
- Title: Sampling-Decomposable Generative Adversarial Recommender
- Authors: Binbin Jin, Defu Lian, Zheng Liu, Qi Liu, Jianhui Ma, Xing Xie, Enhong Chen
- Abstract summary: We propose a Sampling-Decomposable Generative Adversarial Recommender (SD-GAR).
In this framework, the divergence between the generator and the optimum is compensated by self-normalized importance sampling.
We extensively evaluate the proposed algorithm with five real-world recommendation datasets.
- Score: 84.05894139540048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommendation techniques are important approaches for alleviating
information overload. Being often trained on implicit user feedback, many
recommenders suffer from the sparsity challenge due to the lack of explicit
negative samples. GAN-style recommenders (e.g., IRGAN) address the challenge
by learning a generator and a discriminator adversarially, such that the
generator produces increasingly difficult samples for the discriminator,
accelerating the optimization of the discrimination objective. However,
producing samples from the generator is very time-consuming, and our empirical
study shows that the discriminator performs poorly in top-k item
recommendation. To this end, we conduct a theoretical analysis of the
GAN-style algorithms, showing that a generator of limited capacity diverges
from the optimal generator, which may explain the discriminator's limited
performance. Based on these findings, we propose a Sampling-Decomposable
Generative Adversarial Recommender (SD-GAR). In this framework, the divergence
between the generator and the optimum is compensated by self-normalized
importance sampling; the efficiency of sample generation is improved with a
sampling-decomposable generator, such that each sample can be drawn in O(1)
time with the Vose-Alias method. Interestingly, due to the decomposability of
sampling, the generator can be optimized with closed-form solutions in an
alternating manner, in contrast to the policy-gradient updates used in
GAN-style algorithms. We extensively evaluate the proposed algorithm on five
real-world recommendation datasets. The results show that SD-GAR outperforms
IRGAN by 12.4% and the state-of-the-art recommender by 10% on average.
Moreover, discriminator training can be 20x faster on a dataset with more than
120K items.
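The abstract names two concrete mechanisms: O(1) sample generation via the Vose-Alias method, and self-normalized importance sampling to compensate for the gap between the generator and the optimum. The sketch below illustrates both under stated assumptions; it is not the authors' implementation. The single flat item distribution `q` stands in for SD-GAR's decomposed generator, and using exponentiated discriminator scores as the unnormalized target is a hypothetical choice made only for illustration.

```python
import numpy as np

def build_alias_table(p):
    """Vose's alias method: O(n) setup so that each subsequent draw
    from the discrete distribution p costs O(1)."""
    n = len(p)
    scaled = np.asarray(p, dtype=float) * n
    prob = np.ones(n)                         # leftover columns keep prob 1
    alias = np.arange(n)
    small = [i for i in range(n) if scaled[i] < 1.0]
    large = [i for i in range(n) if scaled[i] >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] += scaled[s] - 1.0          # l donates mass to s's column
        (small if scaled[l] < 1.0 else large).append(l)
    return prob, alias

def draw(prob, alias, rng):
    """O(1) sample: pick a column uniformly, then flip a biased coin."""
    i = rng.integers(len(prob))
    return i if rng.random() < prob[i] else alias[i]

def snis_weights(q, scores, idx):
    """Self-normalized importance sampling: weight each drawn item by
    target(x)/q(x) and normalize, so expectations under the optimal
    sampler can be estimated from draws of the imperfect generator q.
    Treating exp(score) as the unnormalized target is an assumption."""
    w = np.exp(scores[idx]) / q[idx]
    return w / w.sum()

# Toy usage: a generator distribution over 5 items and hypothetical
# discriminator scores for those items.
rng = np.random.default_rng(0)
q = np.array([0.1, 0.4, 0.2, 0.2, 0.1])
scores = np.array([0.5, -1.0, 2.0, 0.0, 1.0])
prob, alias = build_alias_table(q)
idx = np.array([draw(prob, alias, rng) for _ in range(1000)])
weights = snis_weights(q, scores, idx)        # one weight per drawn sample
```

The alias table costs O(n) to build, but each subsequent draw touches only one column, which is what makes frequent negative-sample generation affordable; how SD-GAR actually decomposes its generator is detailed in the paper, and the single-table version above only illustrates the O(1) draw and the SNIS correction.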
Related papers
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
arXiv Detail & Related papers (2023-07-19T10:26:29Z)
- GANs Settle Scores! [16.317645727944466]
We propose a unified variational approach to analyzing generator optimization.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
arXiv Detail & Related papers (2023-06-02T16:24:07Z) - Reparameterized Sampling for Generative Adversarial Networks [71.30132908130581]
We propose REP-GAN, a novel sampling method that allows general dependent proposals by REizing the Markov chains into the latent space of the generator.
Empirically, extensive experiments on synthetic and real datasets demonstrate that our REP-GAN largely improves the sample efficiency and obtains better sample quality simultaneously.
arXiv Detail & Related papers (2021-07-01T10:34:55Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Bandit Samplers for Training Graph Neural Networks [63.17765191700203]
Several sampling algorithms with variance reduction have been proposed for accelerating the training of Graph Convolution Networks (GCNs)
These sampling algorithms are not applicable to more general graph neural networks (GNNs) where the message aggregator contains learned weights rather than fixed weights, such as Graph Attention Networks (GAT)
arXiv Detail & Related papers (2020-06-10T12:48:37Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - Your GAN is Secretly an Energy-based Model and You Should use
Discriminator Driven Latent Sampling [106.68533003806276]
We show that sampling in latent space can be achieved by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling(DDLS) is highly efficient compared to previous methods which work in the high-dimensional pixel space.
arXiv Detail & Related papers (2020-03-12T23:33:50Z) - Discriminative Adversarial Search for Abstractive Summarization [29.943949944682196]
We introduce a novel approach for sequence decoding, Discriminative Adversarial Search (DAS)
DAS has the desirable properties of alleviating the effects of exposure bias without requiring external metrics.
We investigate the effectiveness of the proposed approach on the task of Abstractive Summarization.
arXiv Detail & Related papers (2020-02-24T17:07:32Z)