Weighted Tensor Decompositions for Context-aware Collaborative Filtering
- URL: http://arxiv.org/abs/2503.08393v2
- Date: Mon, 14 Apr 2025 14:55:02 GMT
- Title: Weighted Tensor Decompositions for Context-aware Collaborative Filtering
- Authors: Joey De Pauw, Bart Goethals
- Abstract summary: We study the most successful decomposition methods that use weighted square loss and categorize them based on their tensor structure and regularization strategy. We provide an overview of the properties of the different decomposition methods, such as their complexity, scalability, and modelling capacity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over recent years it has become well accepted that user interest is not static or immutable. There are a variety of contextual factors, such as time of day, the weather or the user's mood, that influence the current interests of the user. Modelling approaches need to take these factors into account if they want to succeed at finding the most relevant content to recommend given the situation. A popular method for context-aware recommendation is to encode context attributes as extra dimensions of the classic user-item interaction matrix, effectively turning it into a tensor, followed by applying the appropriate tensor decomposition methods to learn missing values. However, unlike with matrix factorization, where all decompositions are essentially a product of matrices, there exist many more options for decomposing tensors by combining vector, matrix and tensor products. We study the most successful decomposition methods that use weighted square loss and categorize them based on their tensor structure and regularization strategy. Additionally, we further extend the pool of methods by filling in the missing combinations. In this paper we provide an overview of the properties of the different decomposition methods, such as their complexity, scalability, and modelling capacity. These benefits are then contrasted with the performances achieved in offline experiments to gain more insight into which method to choose depending on a specific situation and constraints.
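The recipe in the abstract can be made concrete with a small sketch. Below is a minimal weighted CP decomposition of a user x item x context tensor trained with weighted square loss by plain gradient descent; CP is only one of the tensor structures the paper compares, and all sizes, weights, and hyperparameters here are illustrative rather than taken from the paper.

```python
import numpy as np

# Minimal weighted CP decomposition sketch for context-aware CF: approximate
# a user x item x context tensor X as sum_f u_f (outer) v_f (outer) c_f,
# minimizing a *weighted* square loss so unobserved entries count less than
# observed ones. All sizes and hyperparameters are illustrative.
rng = np.random.default_rng(0)
n_users, n_items, n_ctx, rank = 30, 40, 5, 8

X = (rng.random((n_users, n_items, n_ctx)) < 0.05).astype(float)  # implicit feedback
W = np.where(X > 0, 1.0, 0.1)                                     # observation weights

U = rng.normal(scale=0.1, size=(n_users, rank))
V = rng.normal(scale=0.1, size=(n_items, rank))
C = rng.normal(scale=0.1, size=(n_ctx, rank))

lr, reg = 0.05, 0.01
for _ in range(200):
    X_hat = np.einsum('uf,if,cf->uic', U, V, C)   # CP reconstruction
    E = W * (X_hat - X)                           # weighted residual
    # Gradients of 0.5 * sum(W * (X_hat - X)**2) + reg * ||factor||^2
    gU = np.einsum('uic,if,cf->uf', E, V, C) + reg * U
    gV = np.einsum('uic,uf,cf->if', E, U, C) + reg * V
    gC = np.einsum('uic,uf,if->cf', E, U, V) + reg * C
    U, V, C = U - lr * gU, V - lr * gV, C - lr * gC

# Recommend for user 0 in context 2: score every item, take the top 5.
scores = np.einsum('f,if,f->i', U[0], V, C[2])
print(np.argsort(-scores)[:5])
```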
Related papers
- Recommendations from Sparse Comparison Data: Provably Fast Convergence for Nonconvex Matrix Factorization [12.006706388840934]
This paper provides a theoretical analysis of a new learning problem for recommender systems where users provide feedback by comparing pairs of items instead of rating them individually.
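As a rough illustration of learning from comparisons instead of ratings, here is a generic matrix-factorization sketch with a logistic pairwise loss; it is not the paper's estimator, and every constant in it is an assumption.

```python
import numpy as np

# Pairwise-comparison matrix factorization sketch: a sample (u, i, j) means
# user u preferred item i over item j, and we maximize the log-sigmoid of
# the score margin. Generic construction, not the paper's estimator.
rng = np.random.default_rng(1)
n_users, n_items, rank = 50, 80, 8
U = rng.normal(scale=0.1, size=(n_users, rank))
V = rng.normal(scale=0.1, size=(n_items, rank))

def sgd_step(u, i, j, lr=0.05, reg=0.01):
    x = U[u] @ (V[i] - V[j])             # score margin for the comparison
    g = -1.0 / (1.0 + np.exp(x))         # gradient of -log(sigmoid(x)) wrt x
    du = g * (V[i] - V[j]) + reg * U[u]
    dvi = g * U[u] + reg * V[i]
    dvj = -g * U[u] + reg * V[j]
    U[u] -= lr * du
    V[i] -= lr * dvi
    V[j] -= lr * dvj

for _ in range(1000):
    u = rng.integers(n_users)
    i, j = rng.choice(n_items, size=2, replace=False)
    sgd_step(u, i, j)
```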
arXiv Detail & Related papers (2025-02-27T12:17:34Z)
- A Geometric Approach to Personalized Recommendation with Set-Theoretic Constraints Using Box Embeddings [43.609405236093025]
We formulate the problem of personalized item recommendation as matrix completion where rows are set-theoretically dependent. Box embeddings can intuitively be understood as trainable Venn diagrams. We empirically demonstrate the superiority of box embeddings over vector-based neural methods on both simple and complex item recommendation queries, by up to 30% overall.
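A toy, hard-boundary version of the box idea (the paper trains smoothed boxes): score a user-item pair by the volume of the intersection of their boxes, which is zero exactly when the boxes are disjoint.

```python
import numpy as np

# Hard box-embedding scorer: each user/item is an axis-aligned box given by
# (lower, upper) corners; the match score is the intersection volume.
def box_volume(lo, hi):
    return np.prod(np.maximum(hi - lo, 0.0))

def intersection_score(box_a, box_b):
    lo = np.maximum(box_a[0], box_b[0])   # intersection lower corner
    hi = np.minimum(box_a[1], box_b[1])   # intersection upper corner
    return box_volume(lo, hi)

user = (np.array([0.0, 0.2]), np.array([0.8, 0.9]))
item = (np.array([0.3, 0.0]), np.array([1.0, 0.5]))
print(intersection_score(user, item))     # 0.5 * 0.3 = 0.15
```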
arXiv Detail & Related papers (2025-02-15T18:18:00Z)
- Learning Symmetries via Weight-Sharing with Doubly Stochastic Tensors [46.59269589647962]
Group equivariance has emerged as a valuable inductive bias in deep learning. Group equivariant methods require the groups of interest to be known beforehand. We show that when the dataset exhibits strong symmetries, the permutation matrices will converge to regular group representations.
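The doubly stochastic matrices in such a weight-sharing scheme can be produced with a standard Sinkhorn projection, sketched below; this is a generic construction, not the paper's training setup.

```python
import numpy as np

# Sinkhorn sketch: push a positive matrix toward the doubly stochastic set
# (all rows and columns sum to 1) by alternating row/column normalization.
def sinkhorn(logits, n_iters=50):
    M = np.exp(logits)
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)   # normalize rows
        M /= M.sum(axis=0, keepdims=True)   # normalize columns
    return M

rng = np.random.default_rng(2)
P = sinkhorn(rng.normal(size=(4, 4)))
print(P.sum(axis=0).round(3), P.sum(axis=1).round(3))  # both close to all-ones
```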
arXiv Detail & Related papers (2024-12-05T20:15:34Z)
- Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature Selection [7.887782360541216]
We introduce Tensor Decomposition (TD) techniques into unsupervised feature selection (UFS).
We use the orientation-dependent tensor-tensor product from tensor Singular Value Decomposition to solve the problem.
The proposed tensor PCA model can constrain sparsity at the specified mode and yield sparse tensor principal components.
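For intuition only: a generic sparse-PCA sketch that runs thresholded power iteration on a mode-1 unfolding of a tensor. It shows how a sparsity constraint yields sparse principal components, but it does not reproduce the paper's orientation-dependent tensor-tensor product.

```python
import numpy as np

# Thresholded power iteration: alternate a power step with soft-thresholding
# so the leading component comes out sparse. Generic sketch, not the paper's
# t-product-based model.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_principal_component(A, lam=1.0, n_iters=100):
    v = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    for _ in range(n_iters):
        v = soft_threshold(A.T @ (A @ v), lam)
        norm = np.linalg.norm(v)
        if norm == 0:                      # threshold killed everything
            break
        v /= norm
    return v

rng = np.random.default_rng(3)
T = rng.normal(size=(6, 8, 5))             # toy data tensor
A = T.reshape(6, -1)                       # mode-1 unfolding
v = sparse_principal_component(A)
print(np.count_nonzero(v), "of", v.size, "loadings are nonzero")
```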
arXiv Detail & Related papers (2024-07-24T04:04:56Z)
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning [101.81127587760831]
Current fine-tuning methods build adapters widely agnostic of the context of the downstream task to learn, or of the context of the important knowledge to maintain. We propose CorDA, a Context-oriented Decomposition Adaptation method that builds learnable task-aware adapters. Our method enables two options: knowledge-preserved adaptation and instruction-previewed adaptation.
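A loose sketch of the context-oriented idea, assuming a toy linear layer and synthetic "task activations": decompose the activation-weighted weight matrix with an SVD and split the top directions into a trainable low-rank adapter, leaving the rest frozen. The details differ from CorDA's actual procedure.

```python
import numpy as np

# Rough context-oriented decomposition sketch: weight the SVD of a layer's
# weight matrix by the covariance of sampled task activations, then split
# the top-r directions into a low-rank adapter and freeze the remainder.
rng = np.random.default_rng(4)
d_out, d_in, r = 32, 64, 4

W = rng.normal(size=(d_out, d_in))            # pretrained weight (toy)
X = rng.normal(size=(1000, d_in))             # sampled task activations
C = X.T @ X / len(X)                          # activation covariance

U, S, Vt = np.linalg.svd(W @ C)               # context-weighted SVD
A = U[:, :r] * S[:r]                          # trainable adapter factor
B = Vt[:r] @ np.linalg.inv(C)                 # map back to input space
W_res = W - A @ B                             # frozen residual weight

# Forward pass: frozen part plus low-rank adapter (only A, B would train).
x = rng.normal(size=d_in)
y = W_res @ x + A @ (B @ x)
print(np.allclose(y, W @ x))                  # exact at initialization
```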
arXiv Detail & Related papers (2024-06-07T19:10:35Z)
- Obtaining Explainable Classification Models using Distributionally Robust Optimization [12.511155426574563]
We study generalized linear models constructed using sets of feature value rules.
An inherent trade-off exists between rule-set sparsity and prediction accuracy.
We propose a new formulation to learn an ensemble of rule sets that simultaneously addresses these competing factors.
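To make the trade-off tangible, here is a toy rule-based GLM: features are binarized by threshold rules and an L1 penalty prunes most of them. The paper's distributionally robust formulation is different; everything below is illustrative.

```python
import numpy as np

# Toy rule-based GLM: binarize features with threshold rules, then fit an
# L1-regularized logistic model so only a sparse subset of rules survives.
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(float)

thresholds = np.linspace(-1, 1, 5)
R = np.concatenate([(X > t).astype(float) for t in thresholds], axis=1)  # rule features

w, lam, lr = np.zeros(R.shape[1]), 0.01, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-R @ w))
    w -= lr * R.T @ (p - y) / len(y)                      # logistic gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0)  # L1 proximal step

print(np.count_nonzero(w), "of", w.size, "rules kept")
```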
arXiv Detail & Related papers (2023-11-03T15:45:34Z)
- Matrix Completion via Non-Convex Relaxation and Adaptive Correlation Learning [90.8576971748142]
We develop a novel surrogate that can be optimized by closed-form solutions.
We further exploit pairwise correlations for completion, which yields an adaptive correlation learning model.
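For reference, the best-known closed-form building block in this area is singular value thresholding; the loop below shows that classic scheme. The paper proposes a different non-convex surrogate, so treat this purely as background.

```python
import numpy as np

# Singular-value thresholding sketch: the classic closed-form proximal step
# for nuclear-norm-style matrix completion ("fill in, shrink spectrum, repeat").
def svt_complete(M, mask, tau=0.5, n_iters=100):
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, S, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(S - tau, 0.0)) @ Vt  # shrink singular values
        X = np.where(mask, M, X)                        # keep observed entries
    return X

rng = np.random.default_rng(6)
A = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 30))  # rank-3 ground truth
mask = rng.random(A.shape) < 0.5                         # observe half the entries
X = svt_complete(A, mask)
print(np.abs(X - A)[~mask].mean())                       # error on held-out entries
```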
arXiv Detail & Related papers (2022-03-04T08:50:50Z)
- Sequential Recommendation via Stochastic Self-Attention [68.52192964559829]
Transformer-based approaches embed items as vectors and use dot-product self-attention to measure the relationship between items.
We propose a novel STOchastic Self-Attention (STOSA) model to overcome these issues.
We devise a novel Wasserstein Self-Attention module to characterize item-item position-wise relationships in sequences.
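A minimal piece of the stochastic-embedding machinery: the squared 2-Wasserstein distance between diagonal Gaussian embeddings, the kind of dissimilarity this line of work plugs in where a dot product would normally go. Diagonal covariances are an assumed simplification.

```python
import numpy as np

# Squared 2-Wasserstein distance between two diagonal Gaussians
# N(mu1, diag(sig1^2)) and N(mu2, diag(sig2^2)):
#   W2^2 = ||mu1 - mu2||^2 + ||sig1 - sig2||^2
def w2_squared(mu1, sig1, mu2, sig2):
    return np.sum((mu1 - mu2) ** 2) + np.sum((sig1 - sig2) ** 2)

mu_a, sig_a = np.array([0.1, 0.5]), np.array([0.2, 0.3])
mu_b, sig_b = np.array([0.4, 0.1]), np.array([0.1, 0.6])
print(w2_squared(mu_a, sig_a, mu_b, sig_b))  # 0.25 + 0.10 = 0.35
```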
arXiv Detail & Related papers (2022-01-16T12:38:45Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
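One simple instance of manifold regularization, assuming a fully connected Gaussian similarity graph: fit the known labels while forcing predictions to vary smoothly over the graph, which propagates information to unlabeled and noisy points. This is plain Laplacian regression, not the paper's low-rank variant.

```python
import numpy as np

# Laplacian-regularized regression sketch: fit labels where known, and make
# predictions vary smoothly over a similarity graph where labels are missing.
rng = np.random.default_rng(7)
X = rng.normal(size=(40, 2))
y_true = X[:, 0] ** 2
labeled = rng.random(40) < 0.3                 # only ~30% of labels known

W = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=2))  # similarity graph
L = np.diag(W.sum(axis=1)) - W                            # graph Laplacian
J = np.diag(labeled.astype(float))                        # label indicator

# Closed form for: min_f ||J(f - y)||^2 + gamma * f^T L f
gamma = 0.1
f = np.linalg.solve(J + gamma * L, J @ np.where(labeled, y_true, 0.0))
print(np.abs(f - y_true)[~labeled].mean())     # error on unlabeled points
```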
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
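A rough min-max sketch: alternate standard multiplicative NMF updates with the closed-form worst case for a Frobenius-norm-bounded perturbation (the scaled residual). The update scheme and constants are assumptions, not the paper's algorithms.

```python
import numpy as np

# Adversarial NMF sketch: the attacker adds a bounded perturbation R to the
# data; for a Frobenius bound, the worst case is the scaled residual.
rng = np.random.default_rng(8)
X = np.abs(rng.normal(size=(20, 30)))
k, eps = 4, 0.5
W = np.abs(rng.normal(size=(20, k)))
H = np.abs(rng.normal(size=(k, 30)))

for _ in range(100):
    Res = X - W @ H
    norm = np.linalg.norm(Res)
    R = eps * Res / norm if norm > 0 else 0.0   # attacker pushes X away from W @ H
    Xa = np.maximum(X + R, 0.0)                 # keep the data nonnegative
    # Standard multiplicative NMF updates on the attacked matrix.
    H *= (W.T @ Xa) / (W.T @ W @ H + 1e-9)
    W *= (Xa @ H.T) / (W @ H @ H.T + 1e-9)

print(np.linalg.norm(X - W @ H))                # final reconstruction error
```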
arXiv Detail & Related papers (2021-04-10T13:13:17Z)
- Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation [77.7420231319632]
We pair contextual generation of categorical sequences with a policy-gradient estimator that evaluates a set of correlated Monte Carlo (MC) rollouts for variance control.
We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios.
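The variance-control idea can be shown with a leave-one-out baseline over several rollouts drawn for the same context: each rollout's reward is judged against the mean of its siblings, so the advantages sum to zero by construction. The reward function here is a stand-in.

```python
import numpy as np

# Variance control with correlated rollouts: draw several rollouts for the
# same context and subtract a leave-one-out baseline from each reward.
rng = np.random.default_rng(9)

def rollout_rewards(context, n_rollouts=8):
    # Stand-in for sampling sequences and scoring them.
    return context + rng.normal(size=n_rollouts)

rewards = rollout_rewards(context=1.0)
baseline = (rewards.sum() - rewards) / (len(rewards) - 1)  # leave-one-out mean
advantages = rewards - baseline                            # feeds the policy gradient
print(advantages.round(2), advantages.mean().round(4))     # mean is 0 by construction
```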
arXiv Detail & Related papers (2019-12-31T03:01:55Z)