Dynamic Collaborative Filtering for Matrix- and Tensor-based Recommender
Systems
- URL: http://arxiv.org/abs/2312.10064v1
- Date: Mon, 4 Dec 2023 20:45:51 GMT
- Title: Dynamic Collaborative Filtering for Matrix- and Tensor-based Recommender
Systems
- Authors: Albert Saiapin, Ivan Oseledets, Evgeny Frolov
- Abstract summary: We introduce a novel collaborative filtering model for sequential problems known as TIRecA.
TIRecA efficiently updates its parameters using only the new data segment, allowing incremental addition of new users and items to the recommender system.
Our comparison with general matrix and tensor-based baselines in terms of prediction quality and computational time reveals that TIRecA achieves comparable quality to the baseline methods, while being 10-20 times faster in training time.
- Score: 5.1148288291550505
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In production applications of recommender systems, a continuous data flow is
employed to update models in real-time. Many recommender models require
complete retraining to adapt to new data. In this work, we introduce a novel
collaborative filtering model for sequential problems known as Tucker
Integrator Recommender - TIRecA. TIRecA efficiently updates its parameters
using only the new data segment, allowing incremental addition of new users and
items to the recommender system. To demonstrate the effectiveness of the
proposed model, we conducted experiments on four publicly available datasets:
MovieLens 20M, Amazon Beauty, Amazon Toys and Games, and Steam. Our comparison
with general matrix and tensor-based baselines in terms of prediction quality
and computational time reveals that TIRecA achieves comparable quality to the
baseline methods, while being 10-20 times faster in training time.
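As an illustration of the incremental-update setting described in the abstract (and not of the paper's actual Tucker-integrator algorithm, whose details are not given here), the following minimal NumPy sketch shows how a factorization-based recommender might fold in a newly arrived user from their interactions alone, keeping the previously learned item factors frozen instead of retraining on the full history. The model setup, sizes, and function names are hypothetical.

```python
# Hedged illustration of incremental updates (fold-in), NOT the TIRecA method.
import numpy as np

rng = np.random.default_rng(0)
n_items, rank = 1000, 32

# Pretend these item factors come from a previously trained factorization model.
item_factors = rng.normal(scale=0.1, size=(n_items, rank))

def fold_in_user(interacted_items, item_factors, reg=1e-2):
    """Compute a factor vector for a new user from their interactions only."""
    Q = item_factors[interacted_items]            # (n_interactions, rank)
    r = np.ones(len(interacted_items))            # implicit-feedback targets
    # Regularized least squares: (Q^T Q + reg*I) p = Q^T r
    A = Q.T @ Q + reg * np.eye(item_factors.shape[1])
    return np.linalg.solve(A, Q.T @ r)

# A new user arrives in the data stream with a handful of interactions.
new_user_items = np.array([3, 17, 256, 980])
p_new = fold_in_user(new_user_items, item_factors)

# Score all items for the new user and recommend the top-5 unseen ones.
scores = item_factors @ p_new
scores[new_user_items] = -np.inf
top5 = np.argsort(-scores)[:5]
print(top5)
```

The same fold-in idea applies to new items by swapping the roles of user and item factors; according to the abstract, TIRecA carries this incremental-update capability over to a Tucker tensor decomposition for the sequential setting.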
Related papers
- TTT4Rec: A Test-Time Training Approach for Rapid Adaption in Sequential Recommendation [11.15566809055308]
Test-Time Training (TTT) offers a novel approach by using self-supervised learning during inference to dynamically update model parameters.
We propose TTT4Rec, a sequential recommendation framework that integrates TTT to better capture dynamic user behavior.
We evaluate TTT4Rec on three widely-used recommendation datasets, demonstrating that it achieves performance on par with or exceeding state-of-the-art models.
arXiv Detail & Related papers (2024-09-27T21:14:23Z)
- Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How [62.467716468917224]
We propose a methodology that jointly searches for the optimal pretrained model and the hyperparameters for finetuning it.
Our method transfers knowledge about the performance of many pretrained models on a series of datasets.
We empirically demonstrate that our resulting approach can quickly select an accurate pretrained model for a new dataset.
arXiv Detail & Related papers (2023-06-06T16:15:26Z)
- Federated Privacy-preserving Collaborative Filtering for On-Device Next App Prediction [52.16923290335873]
We propose a novel SeqMF model to solve the problem of predicting the next app launch during mobile device usage.
We modify the structure of the classical matrix factorization model and update the training procedure to sequential learning.
One more ingredient of the proposed approach is a new privacy mechanism that guarantees the protection of the sent data from the users to the remote server.
arXiv Detail & Related papers (2023-02-05T10:29:57Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- Effective and Efficient Training for Sequential Recommendation using Recency Sampling [91.02268704681124]
We propose a novel Recency-based Sampling of Sequences training objective.
We show that models enhanced with our method can achieve performance exceeding or very close to the state-of-the-art BERT4Rec.
arXiv Detail & Related papers (2022-07-06T13:06:31Z)
- On the Generalizability and Predictability of Recommender Systems [33.46314108814183]
We give the first large-scale study of recommender system approaches.
We create Reczilla, a meta-learning approach to recommender systems.
arXiv Detail & Related papers (2022-06-23T17:51:42Z)
- Dynamic Graph Collaborative Filtering [64.87765663208927]
Dynamic recommendation is essential for recommender systems to provide real-time predictions based on sequential data.
Here we propose Dynamic Graph Collaborative Filtering (DGCF), a novel framework leveraging dynamic graphs to capture collaborative and sequential relations.
Our approach achieves higher performance when the dataset contains less action repetition, indicating the effectiveness of integrating dynamic collaborative information.
arXiv Detail & Related papers (2021-01-08T04:16:24Z)
- It's the Best Only When It Fits You Most: Finding Related Models for Serving Based on Dynamic Locality Sensitive Hashing [1.581913948762905]
Preparation of training data is often a bottleneck in the lifecycle of deploying a deep learning model for production or research.
This paper proposes an end-to-end process of searching related models for serving based on the similarity of the target dataset and the training datasets of the available models.
arXiv Detail & Related papers (2020-10-13T22:52:13Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation [28.22402119581332]
Session-based recommendation has received growing attention recently due to the increasing privacy concern.
We propose a method called Adaptively Distilled Exemplar Replay (ADER) by periodically replaying previous training samples.
ADER consistently outperforms other baselines, and it even outperforms the method using all historical data at every update cycle (a minimal replay sketch follows this list).
arXiv Detail & Related papers (2020-07-23T13:19:53Z)
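As a rough illustration of the replay idea summarized in the ADER entry above, here is a minimal Python sketch of training on a new data segment while mixing in a small buffer of exemplars from earlier segments. The buffer policy and the `train_step` hook are hypothetical placeholders, and the sketch omits ADER's adaptive exemplar selection and distillation loss.

```python
# Hedged sketch of exemplar replay for continual training, NOT the ADER method.
import random

class ExemplarReplayTrainer:
    def __init__(self, buffer_size=1000, replay_ratio=0.25):
        self.buffer = []                  # exemplars kept from past segments
        self.buffer_size = buffer_size
        self.replay_ratio = replay_ratio  # fraction of each batch replayed

    def fit_segment(self, new_samples, train_step, batch_size=128):
        """Update the model on a new data segment mixed with replayed exemplars."""
        n_replay = int(batch_size * self.replay_ratio)
        step = batch_size - n_replay
        for start in range(0, len(new_samples), step):
            batch = list(new_samples[start:start + step])
            if self.buffer:
                batch += random.sample(self.buffer,
                                       min(n_replay, len(self.buffer)))
            train_step(batch)             # one optimizer step on the mixed batch
        # Refresh the buffer with a random subset of old and new samples
        # (ADER itself selects and distills exemplars adaptively).
        pool = list(new_samples) + self.buffer
        self.buffer = random.sample(pool, min(self.buffer_size, len(pool)))

# Usage: ExemplarReplayTrainer().fit_segment(sessions_week_k, train_step=my_model_update)
```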
This list is automatically generated from the titles and abstracts of the papers on this site.