Dynamic Review-based Recommenders
- URL: http://arxiv.org/abs/2110.14747v1
- Date: Wed, 27 Oct 2021 20:17:47 GMT
- Title: Dynamic Review-based Recommenders
- Authors: Kostadin Cvejoski, Ramses J. Sanchez, Christian Bauckhage, Cesar Ojeda
- Abstract summary: We leverage the known power of reviews to enhance rating predictions in a way that respects the causality of review generation.
Our representations are time-interval aware and thus yield a continuous-time representation of the dynamics.
- Score: 1.5427245397603195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Just as user preferences change with time, item reviews also reflect those
same preference changes. In a nutshell, if one is to sequentially incorporate
review content knowledge into recommender systems, one is naturally led to
dynamical models of text. In the present work we leverage the known power of
reviews to enhance rating predictions in a way that (i) respects the causality
of review generation and (ii) includes, in a bidirectional fashion, the ability
of ratings to inform language review models and vice-versa, language
representations that help predict ratings end-to-end. Moreover, our
representations are time-interval aware and thus yield a continuous-time
representation of the dynamics. We provide experiments on real-world datasets
and show that our methodology is able to outperform several state-of-the-art
models. Source code for all models can be found at [1].
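To make the bidirectional, time-interval-aware setup concrete, here is a minimal PyTorch sketch of such an architecture. Everything below (module names, the GRU encoders, the exponential state decay, the language model collapsed to a single bag-of-words head) is an illustrative assumption, not the authors' implementation, which is in the repository cited as [1].

```python
# Minimal sketch of a time-interval-aware, review-based rating predictor.
# All names and dimensions are illustrative assumptions; see [1] for the
# authors' actual models.
import torch
import torch.nn as nn


class TimeAwareReviewRatingModel(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.review_encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # One recurrent step per (review, rating, time-interval) event.
        self.dynamics = nn.GRUCell(hidden_dim + 1, hidden_dim)
        # Exponential decay of the latent state over the elapsed interval
        # gives a continuous-time representation between events.
        self.decay_rate = nn.Parameter(torch.tensor(0.1))
        self.rating_head = nn.Linear(hidden_dim, 1)
        self.token_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, review_tokens, ratings, deltas):
        """review_tokens: (B, T, L) token ids; ratings: (B, T) floats; deltas: (B, T)."""
        B, T, L = review_tokens.shape
        h = review_tokens.new_zeros((B, self.dynamics.hidden_size), dtype=torch.float)
        rating_preds, token_logits = [], []
        for t in range(T):
            # Decay the state over the interval since the previous event.
            h = h * torch.exp(-torch.relu(self.decay_rate) * deltas[:, t:t + 1])
            # Predict before observing the event, respecting causality.
            rating_preds.append(self.rating_head(h))
            # The same state conditions the review language model (collapsed
            # here to one bag-of-words distribution for brevity).
            token_logits.append(self.token_head(h))
            _, enc = self.review_encoder(self.embed(review_tokens[:, t]))
            event = torch.cat([enc[-1], ratings[:, t:t + 1]], dim=-1)
            h = self.dynamics(event, h)  # update with the observed event
        return torch.stack(rating_preds, 1), torch.stack(token_logits, 1)
```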
Related papers
- Interpret the Internal States of Recommendation Model with Sparse Autoencoder [26.021277330699963]
RecSAE is an automatic, generalizable probing method for interpreting the internal states of recommendation models.
We train an autoencoder with sparsity constraints to reconstruct internal activations of recommendation models.
We automate the construction of concept dictionaries based on the relationship between latent activations and input item sequences.
arXiv Detail & Related papers (2024-11-09T08:22:31Z)
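As a rough illustration of the probing recipe in this entry, the sketch below trains a sparse autoencoder on hidden activations captured from a recommender. The L1 penalty is a generic stand-in for the paper's sparsity constraint, and all names and hyperparameters are assumptions, not RecSAE's actual code.

```python
# Minimal sketch of probing a recommender's internal states with a sparse
# autoencoder, in the spirit of RecSAE. The L1 term is an assumed stand-in
# for the paper's sparsity constraint.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, act_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Linear(act_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, act_dim)

    def forward(self, activations):
        latent = torch.relu(self.encoder(activations))
        return self.decoder(latent), latent


def train_step(sae, optimizer, activations, l1_weight=1e-3):
    """One step on a batch of hidden activations captured from the recommender."""
    recon, latent = sae(activations)
    loss = nn.functional.mse_loss(recon, activations) + l1_weight * latent.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Latent units that fire on consistent groups of input item sequences can then be collected into a concept dictionary, as the summary above describes.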
- Beyond Coarse-Grained Matching in Video-Text Retrieval [50.799697216533914]
We introduce a new approach for fine-grained evaluation.
Our approach can be applied to existing datasets by automatically generating hard negative test captions.
Experiments on our fine-grained evaluations demonstrate that this approach enhances a model's ability to understand fine-grained differences.
arXiv Detail & Related papers (2024-10-16T09:42:29Z)
- Evaluation of Text-to-Video Generation Models: A Dynamics Perspective [94.2662603491163]
Existing evaluation protocols primarily focus on temporal consistency and content continuity.
We propose an effective evaluation protocol, termed DEVIL, which centers on the dynamics dimension to evaluate T2V models.
arXiv Detail & Related papers (2024-07-01T08:51:22Z)
- Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models [88.47454470043552]
We consider the problem of online fine-tuning the parameters of a language model at test time, also known as dynamic evaluation.
Online adaptation turns parameters into temporally changing states and provides a form of context-length extension with memory in weights.
arXiv Detail & Related papers (2024-03-03T14:03:48Z)
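The dynamic-evaluation loop this entry describes is straightforward to sketch: score each test segment, then take a gradient step on it so the weights act as a slowly updating memory. The snippet assumes a Hugging Face-style causal LM whose forward pass returns a loss when given labels; the SGD choice and learning rate are illustrative.

```python
# Minimal sketch of dynamic evaluation: online gradient updates at test time
# turn the model's parameters into a temporally changing state.
import torch


def dynamic_evaluation(model, segments, lr=1e-5):
    """Score each test segment, then adapt the weights on it before moving on."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    losses = []
    for input_ids in segments:  # each: (1, seq_len) tensor of token ids
        out = model(input_ids=input_ids, labels=input_ids)
        losses.append(out.loss.item())  # evaluate first ...
        out.loss.backward()             # ... then fine-tune online on this segment
        optimizer.step()
        optimizer.zero_grad()
    return losses
```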
- Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney [28.39697076030535]
This paper analyzes the dynamics of the user prompts along such iterations.
We show that prompts predictably converge toward specific traits along these iterations.
The possibility that users adapt to the model's preference raises concerns about reusing user data for further training.
arXiv Detail & Related papers (2023-11-20T19:28:52Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- On the Impact of Temporal Concept Drift on Model Explanations [31.390397997989712]
Explanation faithfulness of model predictions in natural language processing is typically evaluated on held-out data from the same temporal distribution as the training data.
We examine the impact of temporal variation on model explanations extracted by eight feature attribution methods and three select-then-predict models across six text classification tasks.
arXiv Detail & Related papers (2022-10-17T15:53:09Z)
- On Faithfulness and Coherence of Language Explanations for Recommendation Systems [8.143715142450876]
This work probes state-of-the-art models and their review generation component.
We show that the generated explanations are brittle and need further evaluation before being taken as literal rationales for the estimated ratings.
arXiv Detail & Related papers (2022-09-12T17:00:31Z)
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
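A minimal sketch of the time-slicing step described in this entry: interactions are bucketed by timestamp, and each slice yields a global user-item edge list from which a per-slice graph (and the auxiliary temporal prediction task over consecutive slices) can be built. The function and field names are assumptions, not the paper's code.

```python
# Minimal sketch of the time-slicing idea behind DRL-SRe: interactions are
# bucketed into slices, and each slice gets a global user-item interaction
# graph, represented here simply as an edge list.
from collections import defaultdict


def build_time_sliced_graphs(interactions, slice_len):
    """interactions: iterable of (user_id, item_id, timestamp) triples."""
    graphs = defaultdict(list)  # slice index -> list of (user, item) edges
    for user, item, ts in interactions:
        graphs[int(ts // slice_len)].append((user, item))
    # Consecutive slice indices are what the auxiliary temporal prediction
    # task in the paper would operate over.
    return dict(sorted(graphs.items()))
```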
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
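The aspect-controller idea in this entry can be sketched as plain token manipulation: synthetic (review, summary) training pairs are prefixed with aspect control tokens during fine-tuning, and swapping those tokens at inference steers the summary. The `<asp:...>` token format below is an assumption for illustration, not the paper's format.

```python
# Minimal sketch of aspect controllers for opinion summarization: control
# tokens prefixed to the input steer which aspects the summary covers.
def add_aspect_controllers(reviews, aspects):
    """Prefix concatenated reviews with one control token per desired aspect."""
    controllers = " ".join(f"<asp:{a}>" for a in aspects)
    return f"{controllers} {' '.join(reviews)}"


# A training pair would be (this controlled input, an aspect-specific summary).
example = add_aspect_controllers(
    ["Great battery life.", "The screen is too dim."], aspects=["battery"]
)
```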
- Recurrent Point Review Models [1.412197703754359]
We build on deep neural network models to incorporate temporal information and model how review data changes with time.
We use the dynamic representations of recurrent point process models, which encode the history of how business or service reviews are received in time.
We deploy our methodologies in the context of recommender systems, effectively characterizing the change in preference and taste of users as time evolves.
arXiv Detail & Related papers (2020-12-10T14:11:42Z)
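A minimal sketch of the recurrent point process idea behind this last paper: an RNN consumes (review embedding, inter-arrival time) events, and a head on its hidden state parameterizes the conditional intensity of the next review's arrival. The architecture below is an illustrative assumption, not the paper's exact model.

```python
# Minimal sketch of a recurrent point process over review arrivals, in the
# spirit of Recurrent Point Review Models.
import torch
import torch.nn as nn


class RecurrentPointReviewModel(nn.Module):
    def __init__(self, review_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(review_dim + 1, hidden_dim, batch_first=True)
        self.intensity_head = nn.Linear(hidden_dim, 1)

    def forward(self, review_embs, deltas):
        """review_embs: (B, T, review_dim); deltas: (B, T) inter-arrival times."""
        events = torch.cat([review_embs, deltas.unsqueeze(-1)], dim=-1)
        history, _ = self.rnn(events)  # encodes how reviews arrived in time
        # Softplus keeps the conditional intensity of the next review positive.
        return nn.functional.softplus(self.intensity_head(history)).squeeze(-1)
```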
This list is automatically generated from the titles and abstracts of the papers on this site.