Discrete-event Tensor Factorization: Learning a Smooth Embedding for Continuous Domains
- URL: http://arxiv.org/abs/2508.04221v1
- Date: Wed, 06 Aug 2025 08:54:57 GMT
- Title: Discrete-event Tensor Factorization: Learning a Smooth Embedding for Continuous Domains
- Authors: Joey De Pauw, Bart Goethals
- Abstract summary: This paper analyzes how time can be encoded in factorization-style recommendation models. By including absolute time as a feature, our models can learn varying user preferences and changing item perception over time.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems learn from past user behavior to predict future user preferences. Intuitively, it has been established that the most recent interactions are more indicative of future preferences than older interactions. Many recommendation algorithms use this notion to either drop older interactions or to assign them a lower weight, so the model can focus on the more informative, recent information. However, very few approaches model the flow of time explicitly. This paper analyzes how time can be encoded in factorization-style recommendation models. By including absolute time as a feature, our models can learn varying user preferences and changing item perception over time. In addition to simple binning approaches, we also propose a novel, fully continuous time encoding mechanism. Through the use of a polynomial fit inside the loss function, our models completely avoid the need for discretization, and they are able to capture the time dimension in arbitrary resolution. We perform a comparative study on three real-world datasets that span multiple years, where long user histories are present, and items stay relevant for a longer time. Empirical results show that, by explicitly modeling time, our models are very effective at capturing temporal signals, such as varying item popularities over time. Despite this however, our experiments also indicate that a simple post-hoc popularity adjustment is often sufficient to achieve the best performance on the unseen test set. This teaches us that, for the recommendation task, predicting the future is more important than capturing past trends. As such, we argue that specialized mechanisms are needed for extrapolation to future data.
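The abstract's fully continuous time encoding (a polynomial fit inside the loss, avoiding any binning) can be illustrated with a minimal sketch. This is not the paper's actual model: the dimensions, the degree, and the idea of attaching the polynomial to a per-item bias are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a factorization-style scorer in which each item's bias
# drifts smoothly with absolute time via a learned polynomial, so no
# discretization of the time axis is needed. All names and sizes here
# (n_users, poly_degree, ...) are illustrative, not the paper's API.

rng = np.random.default_rng(0)
n_users, n_items, dim, poly_degree = 100, 50, 8, 3

U = rng.normal(scale=0.1, size=(n_users, dim))              # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))              # item embeddings
C = rng.normal(scale=0.1, size=(n_items, poly_degree + 1))  # per-item polynomial coefficients

def score(user, item, t):
    """Affinity at continuous time t in [0, 1]: the usual dot product plus
    a smooth, time-dependent item bias evaluated at arbitrary resolution."""
    time_basis = np.array([t ** k for k in range(poly_degree + 1)])
    return U[user] @ V[item] + C[item] @ time_basis
```

Training would fit U, V, and C jointly on timestamped interactions; because `score` is differentiable in `t`, the model can be queried at any point on the time axis, including (with the caveats the abstract raises about extrapolation) future timestamps.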
Related papers
- Measuring the stability and plasticity of recommender systems [0.4551615447454769]
We propose a methodology to study how recommendation models behave when they are retrained. The idea is to profile algorithms according to their ability to retain past patterns. Preliminary results show different stability and plasticity profiles depending on the algorithmic technique.
arXiv Detail & Related papers (2025-08-05T22:15:43Z) - Modeling the Heterogeneous Duration of User Interest in Time-Dependent Recommendation: A Hidden Semi-Markov Approach [11.392605386729699]
We propose a hidden semi-Markov model to track the change of users' interests. This model allows for capturing the different durations of user stays in a (latent) interest state. We derive an algorithm to estimate the parameters and predict users' actions.
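The distinguishing feature of a hidden semi-Markov model, as opposed to a plain HMM, is that the duration of each stay in a latent state is modeled explicitly. A small generative sketch (illustrative only; the transition matrix and Poisson durations are assumptions, not the paper's parameterization):

```python
import numpy as np

# Illustrative hidden semi-Markov state sampler: each latent interest state
# persists for an explicitly sampled duration before transitioning.
rng = np.random.default_rng(1)
n_states = 3
trans = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])         # no self-transitions: durations model "staying"
mean_duration = np.array([2.0, 5.0, 3.0])   # hypothetical per-state mean stay length

def sample_states(n_steps):
    """Generate a latent state sequence; each state holds for a sampled duration >= 1."""
    seq, state = [], int(rng.integers(n_states))
    while len(seq) < n_steps:
        d = 1 + rng.poisson(mean_duration[state] - 1.0)
        seq.extend([state] * d)
        state = int(rng.choice(n_states, p=trans[state]))
    return seq[:n_steps]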
arXiv Detail & Related papers (2024-12-15T09:17:45Z) - Sequential Recommendation on Temporal Proximities with Contrastive Learning and Self-Attention [3.7182810519704095]
Sequential recommender systems identify user preferences from their past interactions to predict subsequent items optimally.
Recent models often neglect implicit similarities among different users' actions that occur within analogous timeframes.
We propose a sequential recommendation model called TemProxRec, which includes contrastive learning and self-attention methods to consider temporal proximities.
arXiv Detail & Related papers (2024-02-15T08:33:16Z) - Contrastive Difference Predictive Coding [79.74052624853303]
We introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events.
We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL.
arXiv Detail & Related papers (2023-10-31T03:16:32Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Rethinking Missing Data: Aleatoric Uncertainty-Aware Recommendation [59.500347564280204]
We propose a new Aleatoric Uncertainty-aware Recommendation (AUR) framework.
AUR consists of a new uncertainty estimator along with a normal recommender model.
As the chance of mislabeling reflects the potential of a pair, AUR makes recommendations according to the uncertainty.
arXiv Detail & Related papers (2022-09-22T04:32:51Z) - VQ-AR: Vector Quantized Autoregressive Probabilistic Time Series Forecasting [10.605719154114354]
Time series models aim for accurate predictions of the future given the past, where the forecasts are used for important downstream tasks like business decision making.
In this paper, we introduce a novel autoregressive architecture, VQ-AR, which instead learns a discrete set of representations that are used to predict the future.
arXiv Detail & Related papers (2022-05-31T15:43:46Z) - Perceptual Score: What Data Modalities Does Your Model Perceive? [73.75255606437808]
We introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features.
We find that recent, more accurate multi-modal models for visual question-answering tend to perceive the visual data less than their predecessors.
Using the perceptual score also helps to analyze model biases by decomposing the score into data subset contributions.
arXiv Detail & Related papers (2021-10-27T12:19:56Z) - Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe)
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z) - MEANTIME: Mixture of Attention Mechanisms with Multi-temporal Embeddings for Sequential Recommendation [12.386304516106854]
Self-attention based models have achieved state-of-the-art performance in sequential recommendation task.
These models rely on a simple positional embedding to exploit the sequential nature of the user's history.
We propose MEANTIME which employs multiple types of temporal embeddings designed to capture various patterns from the user's behavior sequence.
arXiv Detail & Related papers (2020-08-19T05:32:14Z) - Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.