A Simple and Scalable Tensor Completion Algorithm via Latent Invariant
Constraint for Recommendation System
- URL: http://arxiv.org/abs/2206.13355v1
- Date: Mon, 27 Jun 2022 15:03:18 GMT
- Title: A Simple and Scalable Tensor Completion Algorithm via Latent Invariant
Constraint for Recommendation System
- Authors: Tung Nguyen, Sang T. Truong, and Jeffrey Uhlmann
- Abstract summary: We study a novel method to efficiently and accurately learn parameters of a model for the unobservable personal preferences that underlie user ratings.
By regularizing the tensor decomposition with a single latent invariant, we achieve three properties for a reliable recommender system.
- Score: 6.125831939376033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper we provide a latent-variable formulation and solution to the
recommender system (RS) problem in terms of a fundamental property that any
reasonable solution should be expected to satisfy. Specifically, we examine a
novel tensor completion method to efficiently and accurately learn parameters
of a model for the unobservable personal preferences that underlie user ratings.
By regularizing the tensor decomposition with a single latent invariant, we
achieve three properties for a reliable recommender system: (1) uniqueness of
the tensor completion result with minimal assumptions, (2) unit consistency
that is independent of arbitrary preferences of users, and (3) a consensus
ordering guarantee that provides consistent ranking between observed and
unobserved rating scores. Our algorithm leads to a simple and elegant
recommendation framework that has linear computational complexity and requires no
hyperparameter tuning. We provide empirical results demonstrating that the
approach significantly outperforms current state-of-the-art methods.
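To make the unit-consistency property concrete, the following is a minimal sketch, not the authors' algorithm: it completes a positive rating matrix with a log-space user/item bias model (the model form and function names are our assumptions), and it inherits the invariance the abstract describes, since rescaling a user's observed ratings by a > 0 rescales that user's imputed ratings by the same factor.

```python
import numpy as np

def unit_consistent_complete(R, mask):
    """Fill missing entries of a positive rating matrix R (mask = 1 where
    observed). Sketch only: fit log R[i, j] ~ r[i] + c[j] on the observed
    entries by least squares, then impute exp(r[i] + c[j]). Scaling row i
    of R by a > 0 shifts r[i] by log(a), so the imputed entries in row i
    scale by a as well -- a unit-consistency property."""
    m, n = R.shape
    obs = np.argwhere(mask)                      # observed (i, j) pairs
    A = np.zeros((len(obs), m + n))              # one equation per entry
    b = np.log(R[mask.astype(bool)])             # same row-major order as obs
    for k, (i, j) in enumerate(obs):
        A[k, i] = 1.0                            # user bias r[i]
        A[k, m + j] = 1.0                        # item bias c[j]
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    r, c = theta[:m], theta[m:]
    filled = np.exp(r[:, None] + c[None, :])
    return np.where(mask.astype(bool), R, filled)
```

A scalable version would solve this sparse system iteratively; the dense least-squares here is only for clarity, and the paper's actual solver may differ.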
Related papers
- Efficient Fairness-Performance Pareto Front Computation [51.558848491038916]
We show that optimal fair representations possess several useful structural properties.
We then show that these approximation problems can be solved efficiently via concave programming methods.
arXiv Detail & Related papers (2024-09-26T08:46:48Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on their equivalence in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
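For intuition, a baseline-corrected inverse-propensity (IPS) estimator with the classical variance-minimizing scalar baseline looks roughly as follows; this is a hedged sketch of the standard control-variate construction, not necessarily the paper's closed form.

```python
import numpy as np

def ips_with_baseline(rewards, weights, baseline):
    """Baseline-corrected IPS estimate of a target policy's value.
    weights[i] = pi_target(a_i | x_i) / pi_logging(a_i | x_i).
    Since E[weights] = 1, subtracting a baseline keeps the estimator
    unbiased while reducing variance."""
    return np.mean(weights * (rewards - baseline)) + baseline

def control_variate_baseline(rewards, weights):
    """Classical variance-minimizing scalar baseline b* = Cov(w*r, w) / Var(w);
    stated here as an illustrative assumption."""
    wr = weights * rewards
    return np.cov(wr, weights)[0, 1] / np.var(weights, ddof=1)
```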
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- An Inexact Halpern Iteration with Application to Distributionally Robust Optimization [9.529117276663431]
We investigate the inexact variants of the scheme in both deterministic and stochastic convergence settings.
We show that by choosing the inexactness appropriately, the inexact schemes admit an $O(k^{-1})$ convergence rate in terms of the (expected) residual norm.
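For context, the exact Halpern iteration anchors every update to the starting point, x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k), and with the standard weights beta_k = 1/(k+2) a nonexpansive T yields the O(1/k) residual-norm rate cited above. A minimal sketch (the test operator is illustrative):

```python
import numpy as np

def halpern(T, x0, iters=1000):
    """Exact Halpern iteration for a nonexpansive operator T; the fixed-point
    residual ||x_k - T(x_k)|| decays at rate O(1/k)."""
    x = x0
    for k in range(iters):
        beta = 1.0 / (k + 2)                 # standard anchoring weights
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

# Illustrative nonexpansive map with fixed point at the all-ones vector.
T = lambda x: 0.5 * (x + 1.0)
x = halpern(T, np.zeros(3))
print(np.linalg.norm(x - T(x)))              # small fixed-point residual
```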
arXiv Detail & Related papers (2024-02-08T20:12:47Z)
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
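A minimal sketch of the likelihood-ratio construction for a Gaussian mean, assuming a "predictable plug-in" numerator (the model, plug-in choice, and names are our assumptions): a parameter stays in the confidence sequence while the accumulated ratio remains below 1/alpha, and Ville's inequality gives anytime-valid coverage.

```python
import numpy as np
from scipy.stats import norm

def lr_confidence_set(xs, thetas, alpha=0.05, sigma=1.0):
    """Anytime-valid confidence set for a Gaussian mean built from a
    likelihood-ratio martingale with a predictable plug-in numerator."""
    log_lr = np.zeros_like(thetas, dtype=float)
    running_mean = 0.0
    for i, x in enumerate(xs):
        plug_in = running_mean               # uses only x_1 .. x_{i-1}
        log_lr += norm.logpdf(x, plug_in, sigma) - norm.logpdf(x, thetas, sigma)
        running_mean += (x - running_mean) / (i + 1)
    return thetas[log_lr < np.log(1.0 / alpha)]  # keep theta while LR < 1/alpha
```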
arXiv Detail & Related papers (2023-11-08T00:10:21Z)
- An Admissible Shift-Consistent Method for Recommender Systems [4.706921336764783]
We propose a new constraint, called shift-consistency, for solving matrix/tensor completion problems in recommender systems.
Our method provably guarantees several key mathematical properties.
We argue that our analysis suggests a structured means for defining latent-space projections.
arXiv Detail & Related papers (2023-07-17T21:32:51Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
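In other words, the reconstruction is the equilibrium z* = f(z*) of a learned update map. A minimal fixed-point solver sketch (the damping and the stand-in update f are our assumptions; in the paper f would be the learned, regularized iteration):

```python
import numpy as np

def deq_solve(f, z0, tol=1e-6, max_iter=500):
    """Find z* with z* = f(z*) by damped fixed-point (Picard) iteration.
    In a Deep Equilibrium model, f is the learned iteration map, e.g. a
    data-fidelity gradient step composed with a neural regularizer."""
    z = z0
    for _ in range(max_iter):
        z_next = 0.5 * z + 0.5 * f(z)        # damping aids convergence
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```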
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Accelerating Certifiable Estimation with Preconditioned Eigensolvers [2.1701691499017812]
A dominant cost in many state-of-the-art (Burer-Monteiro factorization-based) estimation methods is solution verification.
We show how to significantly accelerate this verification step, and thereby the overall speed of certifiable estimation methods, using preconditioned eigensolvers.
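Verification here typically reduces to certifying that a sparse symmetric certificate matrix is positive semidefinite, i.e. that its minimum eigenvalue is nonnegative. A hedged sketch with SciPy's LOBPCG, a preconditioned eigensolver (the Jacobi preconditioner is illustrative; the paper's preconditioner may differ):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

def verify_certificate(S, tol=1e-8):
    """Check lambda_min(S) >= 0 for a sparse symmetric matrix S using the
    LOBPCG preconditioned eigensolver with a simple Jacobi preconditioner."""
    n = S.shape[0]
    X = np.random.default_rng(0).standard_normal((n, 1))  # initial block
    d = np.maximum(np.abs(S.diagonal()), 1e-12)
    M = sp.diags(1.0 / d)                    # Jacobi (diagonal) preconditioner
    vals, _ = lobpcg(S, X, M=M, largest=False, tol=tol, maxiter=500)
    return vals[0] >= -tol                   # certified up to tolerance
```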
arXiv Detail & Related papers (2022-07-12T02:00:08Z)
- Tensor Completion with Provable Consistency and Fairness Guarantees for Recommender Systems [5.099537069575897]
We introduce a new consistency-based approach for defining and solving nonnegative/positive matrix and tensor completion problems.
We show that a single property/constraint, preserving unit-scale consistency, guarantees both the existence of a solution and, under relatively weak support assumptions, its uniqueness.
arXiv Detail & Related papers (2022-04-04T19:42:46Z)
- SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness [61.212486108346695]
We propose a training scheme, coined SmoothMix, to control the robustness of smoothed classifiers via self-mixup.
The proposed procedure effectively identifies over-confident, near off-class samples as a cause of limited robustness.
Our experimental results demonstrate that the proposed method can significantly improve the certified $\ell_2$-robustness of smoothed classifiers.
arXiv Detail & Related papers (2021-11-17T18:20:59Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with an unfavorable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
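The core mechanism is gradient clipping: truncating large stochastic gradients is what enables high-probability guarantees under heavy-tailed noise. A minimal sketch (the constant stepsize and clip level are placeholders for the paper's schedules):

```python
import numpy as np

def clipped_sgd(grad, x0, steps=1000, lr=0.01, clip=1.0):
    """SGD with norm clipping: any stochastic gradient whose Euclidean norm
    exceeds `clip` is rescaled onto the clipping ball before the step."""
    x = x0
    for _ in range(steps):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm > clip:
            g = g * (clip / gnorm)
        x = x - lr * g
    return x
```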
arXiv Detail & Related papers (2021-06-10T17:54:21Z)