Adversarial learning for product recommendation
- URL: http://arxiv.org/abs/2007.07269v2
- Date: Tue, 1 Sep 2020 15:01:29 GMT
- Title: Adversarial learning for product recommendation
- Authors: Joel R. Bock and Akhilesh Maewal
- Abstract summary: This work proposes a conditional, coupled generative adversarial network (RecommenderGAN) that learns to produce samples from a joint distribution between (view, buy) behaviors.
Our results are preliminary; however, they suggest that the recommendations produced by the model may provide utility for consumers and digital retailers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Product recommendation can be considered a problem in data fusion:
estimation of the joint distribution between individuals, their behaviors, and
the goods or services of interest. This work proposes a conditional, coupled
generative adversarial network (RecommenderGAN) that learns to produce samples
from a joint distribution between (view, buy) behaviors found in extremely
sparse implicit feedback training data. User interaction is represented by two
binary-valued matrices whose nonzero elements indicate, respectively, whether a
user viewed or bought a specific item in a given product category. By encoding
actions in this manner, the model is able to
represent entire, large scale product catalogs. Conversion rate statistics
computed on trained GAN output samples ranged from 1.323 to 1.763 percent.
These statistics are significant under null-hypothesis testing and are
comparable to published conversion rates aggregated across many industries and
product types. Our results are preliminary; however, they suggest that the
recommendations produced by the model may provide utility for consumers and
digital retailers.
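The (view, buy) matrix encoding described in the abstract can be sketched in a few lines. The dimensions, sparsity levels, and the buys-per-view definition of the conversion-rate statistic below are illustrative assumptions, not values or definitions taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1,000 users by 500 (category, item) slots.
n_users, n_items = 1000, 500

# Sparse binary interaction matrices: 1 = user viewed / bought that item.
views = (rng.random((n_users, n_items)) < 0.02).astype(np.int8)
# In this toy encoding, buys form a rarer subset of views.
buys = views * (rng.random((n_users, n_items)) < 0.05).astype(np.int8)

# One plausible conversion-rate statistic: buys per view, in percent.
conversion_rate = 100.0 * buys.sum() / max(views.sum(), 1)
```

In the paper this statistic would be computed on the GAN's generated samples rather than on random toy matrices as here.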
Related papers
- Evaluating Performance and Bias of Negative Sampling in Large-Scale Sequential Recommendation Models [0.0]
Large-scale industrial recommendation models predict the most relevant items from catalogs containing millions or billions of options.
To train these models efficiently, a small set of irrelevant items (negative samples) is selected from the vast catalog for each relevant item.
Our study serves as a practical guide to the trade-offs in selecting a negative sampling method for large-scale sequential recommendation models.
arXiv Detail & Related papers (2024-10-08T00:23:17Z) - Data Distribution Valuation [56.71023681599737]
Existing data valuation methods define a value for a discrete dataset.
In many use cases, users are interested in not only the value of the dataset, but that of the distribution from which the dataset was sampled.
We propose a maximum mean discrepancy (MMD)-based valuation method which enables theoretically principled and actionable policies.
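The summary above mentions a maximum mean discrepancy (MMD)-based valuation; a minimal sketch of a biased squared-MMD estimate with an RBF kernel follows. The function names and the `gamma` bandwidth are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy:
    # E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')].
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())
```

A small squared MMD between a vendor's sample and a reference sample would indicate the underlying distributions are close; identical samples give exactly zero.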
arXiv Detail & Related papers (2024-10-06T07:56:53Z) - Consistent Text Categorization using Data Augmentation in e-Commerce [1.558017967663767]
We propose a new framework for consistent text categorization.
Our goal is to improve the model's consistency while maintaining its production-level performance.
arXiv Detail & Related papers (2023-05-09T12:47:28Z) - Learning Consumer Preferences from Bundle Sales Data [2.6899658723618005]
We propose an approach to learn the distribution of consumers' valuations toward the products using bundle sales data.
Using the EM algorithm and Monte Carlo simulation, our approach can recover the distribution of consumers' valuations.
arXiv Detail & Related papers (2022-09-11T21:42:49Z) - Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
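The per-step sampling described in this entry can be sketched as follows; the `sample_negative` helper, the softmax weighting over the model's scores, and the toy logits are illustrative assumptions rather than the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 100

def sample_negative(scores, positive_item, rng):
    # Sample a negative item in proportion to the model's current
    # preference scores (softmax), excluding the ground-truth positive.
    probs = np.exp(scores - scores.max())
    probs[positive_item] = 0.0
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

scores = rng.normal(size=n_items)  # stand-in for model logits at one step
neg = sample_negative(scores.copy(), positive_item=7, rng=rng)
```

Weighting by the model's own scores yields "hard" negatives the current model ranks highly, which is the intuition behind preference-aware negative sampling.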
arXiv Detail & Related papers (2022-08-07T05:44:13Z) - Understanding, Detecting, and Separating Out-of-Distribution Samples and
Adversarial Samples in Text Classification [80.81532239566992]
We compare the two types of anomalies (OOD and Adv samples) with the in-distribution (ID) ones from three aspects.
We find that OOD samples expose their aberration starting from the first layer, while the abnormalities of Adv samples do not emerge until the deeper layers of the model.
We propose a simple method to separate ID, OOD, and Adv samples using the hidden representations and output probabilities of the model.
arXiv Detail & Related papers (2022-04-09T12:11:59Z) - Learning to Recommend Using Non-Uniform Data [7.005458308454873]
Learning user preferences for products from past purchases or reviews is a cornerstone of modern recommendation engines.
Some users are more likely to purchase products or review them, and some products are more likely to be purchased or reviewed by the users.
This non-uniform pattern degrades the power of many existing recommendation algorithms.
arXiv Detail & Related papers (2021-10-21T16:17:40Z) - Modeling Sequences as Distributions with Uncertainty for Sequential
Recommendation [63.77513071533095]
Most existing sequential methods assume users are deterministic.
Item-item transitions might fluctuate significantly in several item aspects and exhibit randomness of user interests.
We propose a Distribution-based Transformer Sequential Recommendation (DT4SR) which injects uncertainties into sequential modeling.
arXiv Detail & Related papers (2021-06-11T04:35:21Z) - Set2setRank: Collaborative Set to Set Ranking for Implicit Feedback
based Recommendation [59.183016033308014]
In this paper, we explore the unique characteristics of the implicit feedback and propose Set2setRank framework for recommendation.
Our proposed framework is model-agnostic and can be easily applied to most recommendation prediction approaches.
arXiv Detail & Related papers (2021-05-16T08:06:22Z) - Pre-training Graph Transformer with Multimodal Side Information for
Recommendation [82.4194024706817]
We propose a pre-training strategy to learn item representations by considering both item side information and their relationships.
We develop a novel sampling algorithm named MCNSampling to select contextual neighbors for each item.
The proposed Pre-trained Multimodal Graph Transformer (PMGT) learns item representations with two objectives: 1) graph structure reconstruction, and 2) masked node feature reconstruction.
arXiv Detail & Related papers (2020-10-23T10:30:24Z) - Counterfactual Inference for Consumer Choice Across Many Product
Categories [6.347014958509367]
We build on techniques from the machine learning literature on probabilistic models of matrix factorization.
We show that our model improves over traditional modeling approaches that consider each category in isolation.
Using held-out data, we show that our model can accurately distinguish which consumers are most price sensitive to a given product.
arXiv Detail & Related papers (2019-06-06T15:11:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.