Variation Control and Evaluation for Generative Slate Recommendations
- URL: http://arxiv.org/abs/2102.13302v1
- Date: Fri, 26 Feb 2021 05:04:40 GMT
- Title: Variation Control and Evaluation for Generative Slate Recommendations
- Authors: Shuchang Liu, Fei Sun, Yingqiang Ge, Changhua Pei, Yongfeng Zhang
- Abstract summary: We show that item perturbation can enforce slate variation and mitigate the over-concentration of generated slates.
We also propose to separate a pivot selection phase from the generation process so that the model can apply perturbation before generation.
- Score: 22.533997063750597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Slate recommendation generates a list of items as a whole instead of ranking
each item individually, so as to better model the intra-list positional biases
and item relations. In order to deal with the enormous combinatorial space of
slates, recent work considers a generative solution so that a slate
distribution can be directly modeled. However, we observe that such approaches
-- despite their proven effectiveness in computer vision -- suffer from a
trade-off dilemma in recommender systems: when focusing on reconstruction, they
easily over-fit the data and hardly generate satisfactory recommendations; on
the other hand, when focusing on satisfying user interests, they get
trapped in a few items and fail to cover the item variation in slates. In this
paper, we propose to enhance accuracy-based evaluation with slate variation
metrics to estimate the stochastic behavior of generative models. We illustrate
that, instead of reaching either of the two undesirable extremes of the
dilemma, a valid generative solution resides in a narrow "elbow" region in
between. We then show that item perturbation can enforce slate variation and
mitigate the over-concentration of generated slates, which expands the "elbow"
region into one that is easy to find. We further propose to separate a pivot
selection phase from the generation process so that the model can apply
perturbation before generation. Empirical results show that this simple
modification achieves even better variation at the same level of accuracy
compared to post-generation perturbation methods.
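The pivot-then-generate idea from the abstract can be sketched roughly as follows. The Gaussian perturbation, the dot-product scorer, and the greedy top-k decoder here are illustrative assumptions, not the paper's actual architecture; the point is only that perturbation is applied to the pivot before the slate is decoded, so repeated generations yield varied but nearby slates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy item embeddings (hypothetical): 100 items in a 16-d space.
item_emb = rng.normal(size=(100, 16))

def select_pivot(user_emb):
    """Pick the item whose embedding best matches the user (assumed scorer)."""
    scores = item_emb @ user_emb
    return int(np.argmax(scores))

def generate_slate(user_emb, slate_size=5, noise_scale=0.3):
    """Sketch: perturb the pivot BEFORE generation, then decode a slate
    greedily around the perturbed pivot. All details are illustrative."""
    pivot = select_pivot(user_emb)
    # Item perturbation applied before generation (the paper's core idea):
    # jitter the pivot embedding so repeated calls yield varied slates.
    query = item_emb[pivot] + noise_scale * rng.normal(size=16)
    scores = item_emb @ query
    return np.argsort(-scores)[:slate_size]

user = rng.normal(size=16)
print(generate_slate(user))  # re-running yields a different, nearby slate
```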
Related papers
- Prototype Clustered Diffusion Models for Versatile Inverse Problems [11.55838697574475]
We show that the measurement-based likelihood can be replaced with a restoration-based likelihood by reversing the direction of the probabilistic graphical model.
This allows inverse problems to be solved with a range of choices for sample quality, and enables effective control over the degradation process while preserving realism.
arXiv Detail & Related papers (2024-07-13T04:24:53Z) - Distributionally Robust Recourse Action [12.139222986297263]
A recourse action aims to explain a particular algorithmic decision by showing one specific way in which the instance could be modified to receive an alternate outcome.
We propose the Distributionally Robust Recourse Action (DiRRAc) framework, which generates a recourse action that has a high probability of being valid under a mixture of model shifts.
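As a rough illustration of what a recourse action computes (a generic gradient-style sketch, not DiRRAc's distributionally robust formulation), the code below nudges a rejected input across a logistic model's decision boundary with small steps; the weights, step size, and threshold are all hypothetical.

```python
import numpy as np

# Hypothetical frozen logistic classifier: reject if w @ x + b < 0.
w = np.array([0.8, -0.5, 1.2])
b = -0.3

def recourse(x, step=0.05, max_iter=200):
    """Move x minimally in the steepest direction until the decision flips.
    DiRRAc additionally hedges this action against shifts in (w, b)."""
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if w @ x + b >= 0:                 # decision flipped to 'accept'
            return x
        x += step * w / np.linalg.norm(w)  # steepest direction toward boundary
    return x

x0 = np.array([-1.0, 0.5, -0.2])  # an instance currently rejected
print(recourse(x0))               # a nearby instance receiving the outcome
```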
arXiv Detail & Related papers (2023-02-22T08:52:01Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
Since the chance of mislabeling reflects the potential of a user-item pair, AUR makes recommendations according to the estimated uncertainty.
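A minimal sketch of the general pattern, pairing a recommender score with an uncertainty term and ranking by the combination, is shown below. The linear scorer, the stand-in uncertainty, and the weighting are assumptions for illustration; AUR's actual uncertainty estimator is a learned model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, dim = 50, 8
item_emb = rng.normal(size=(n_items, dim))

def recommend(user_emb, k=5, alpha=0.5):
    """Rank items by preference score plus an uncertainty bonus (sketch)."""
    score = item_emb @ user_emb                # plain recommender score
    uncertainty = 1.0 / (1.0 + np.abs(score))  # hypothetical stand-in estimate
    # Treat uncertain pairs as having unexplored potential (AUR's intuition):
    final = score + alpha * uncertainty
    return np.argsort(-final)[:k]

print(recommend(rng.normal(size=dim)))
```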
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - Fine-grained Retrieval Prompt Tuning [149.9071858259279]
Fine-grained Retrieval Prompt Tuning steers a frozen pre-trained model to perform the fine-grained retrieval task from the perspectives of sample prompt and feature adaptation.
Our FRPT, with fewer learnable parameters, achieves state-of-the-art performance on three widely-used fine-grained datasets.
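The general prompt-tuning pattern behind such methods can be sketched as below: the pre-trained backbone stays frozen and only a small prompt prepended to the input is trained. The toy linear encoder and pooling are illustrative assumptions, not FRPT's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, seq_len, n_prompt = 8, 10, 2

# Frozen "pre-trained" encoder weights (stand-in for the real backbone).
W_frozen = rng.normal(size=(dim, dim))

# The only learnable parameters: a few prompt tokens prepended to the input.
prompt = rng.normal(size=(n_prompt, dim)) * 0.01

def encode(x_tokens):
    """Prepend prompt tokens, pass through the frozen encoder, mean-pool.
    During training, gradients would flow into `prompt` only; W_frozen
    is never updated."""
    h = np.concatenate([prompt, x_tokens], axis=0) @ W_frozen
    return h.mean(axis=0)

x = rng.normal(size=(seq_len, dim))  # toy token features of one sample
print(encode(x).shape)               # (8,) pooled, prompt-steered feature
```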
arXiv Detail & Related papers (2022-07-29T04:10:04Z) - Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
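For diagonal Gaussian item embeddings, the 2-Wasserstein distance has a closed form, which is the kind of quantity such a Wasserstein attention module can score with. The sketch below is a minimal illustration of that idea, not STOSA's full module; the softmax-over-negative-distance weighting is an assumption.

```python
import numpy as np

def w2_sq(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal Gaussians:
    ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2 (sigmas are std deviations)."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)

rng = np.random.default_rng(3)
n, dim = 4, 8
# Each item in the sequence is a distribution: a mean and a positive std.
mu = rng.normal(size=(n, dim))
sigma = np.abs(rng.normal(size=(n, dim))) + 0.1

# Attention weights from negative distances: closer distributions attend more.
dist = np.array([[w2_sq(mu[i], sigma[i], mu[j], sigma[j])
                  for j in range(n)] for i in range(n)])
attn = np.exp(-dist)
attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax over -distance
print(attn.round(3))
```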
arXiv Detail & Related papers (2022-01-16T12:38:45Z) - Modeling Sequences as Distributions with Uncertainty for Sequential
Recommendation [63.77513071533095]
Most existing sequential methods assume users are deterministic.
In practice, item-item transitions can fluctuate significantly across item aspects and reflect the randomness of user interests.
We propose a Distribution-based Transformer Sequential Recommendation (DT4SR) which injects uncertainties into sequential modeling.
arXiv Detail & Related papers (2021-06-11T04:35:21Z) - A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulated settings and on real data from large-scale randomized experiments.
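Uplift is the difference between the predicted outcome under treatment and under control for the same individual. A minimal twin-model sketch of that definition is below; the toy logistic heads stand in for the learned twin networks and do not implement the paper's relative-risk loss.

```python
import numpy as np

rng = np.random.default_rng(4)
dim = 6

# Twin heads sharing the input: one scores the treated arm, one the control.
w_treat, w_ctrl = rng.normal(size=dim), rng.normal(size=dim)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def uplift(x):
    """Uplift = P(y=1 | x, treated) - P(y=1 | x, control)."""
    return sigmoid(w_treat @ x) - sigmoid(w_ctrl @ x)

x = rng.normal(size=dim)
print(uplift(x))  # positive -> treating this individual is expected to help
```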
arXiv Detail & Related papers (2021-05-11T16:02:39Z) - Deconfounding Scores: Feature Representations for Causal Effect
Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.