Tensor Completion with Provable Consistency and Fairness Guarantees for
Recommender Systems
- URL: http://arxiv.org/abs/2204.01815v6
- Date: Tue, 17 Oct 2023 17:06:34 GMT
- Title: Tensor Completion with Provable Consistency and Fairness Guarantees for
Recommender Systems
- Authors: Tung Nguyen and Jeffrey Uhlmann
- Abstract summary: We introduce a new consistency-based approach for defining and solving nonnegative/positive matrix and tensor completion problems.
We show that a single property/constraint, preserving unit-scale consistency, guarantees the existence of both a solution and, under relatively weak support assumptions, uniqueness.
- Score: 5.099537069575897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new consistency-based approach for defining and solving
nonnegative/positive matrix and tensor completion problems. The novelty of the
framework is that instead of artificially making the problem well-posed in the
form of an application-arbitrary optimization problem, e.g., minimizing a bulk
structural measure such as rank or norm, we show that a single
property/constraint, preserving unit-scale consistency, guarantees the
existence of both a solution and, under relatively weak support assumptions,
uniqueness. The framework and solution algorithms also generalize directly to
tensors of arbitrary dimensions while maintaining computational complexity that
is linear in problem size for fixed dimension d. In the context of recommender
system (RS) applications, we prove that two reasonable properties that should
be expected to hold for any solution to the RS problem are sufficient to permit
uniqueness guarantees to be established within our framework. This is
remarkable because it obviates the need for heuristic-based statistical or AI
methods despite what appear to be distinctly human/subjective variables at the
heart of the problem. Key theoretical contributions include a general
unit-consistent tensor-completion framework with proofs of its properties,
e.g., consensus-order and fairness, and algorithms with optimal runtime and
space complexities, e.g., O(1) term-completion with preprocessing complexity
that is linear in the number of known terms of the matrix/tensor. From a
practical perspective, the seamless ability of the framework to generalize to
exploit high-dimensional structural relationships among key state variables,
e.g., user and product attributes, offers a means for extracting significantly
more information than is possible for alternative methods that cannot
generalize beyond direct user-product relationships.
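The abstract does not reproduce the paper's algorithms, but the core idea of unit-scale consistency for positive matrices can be sketched: in the log domain, fit additive row and column factors to the known entries, then complete each missing entry from the fitted factors. The following is an illustrative sketch only, not the authors' implementation; all names are mine, and it assumes every row and column contains at least one known entry, echoing the paper's "relatively weak support assumptions".

```python
import numpy as np

def unit_consistent_complete(A, mask, iters=200):
    """Complete a positive matrix from its known entries (mask == True).

    Illustrative sketch (not the paper's exact algorithm): fit
    log(A[i, j]) ~ r[i] + c[j] on the known entries by alternating
    least-squares averages, then fill each unknown entry with
    exp(r[i] + c[j]).
    """
    A = np.asarray(A, dtype=float)
    mask = np.asarray(mask, dtype=bool)
    m, n = A.shape
    # Log-domain targets; only the known (masked) entries are ever used.
    L = np.zeros((m, n))
    L[mask] = np.log(A[mask])
    r = np.zeros(m)           # row log-scale factors
    c = np.zeros(n)           # column log-scale factors
    cnt_r = mask.sum(axis=1)  # known entries per row (assumed >= 1)
    cnt_c = mask.sum(axis=0)  # known entries per column (assumed >= 1)
    for _ in range(iters):
        # Alternate coordinate updates of row and column offsets.
        r = ((L - c) * mask).sum(axis=1) / cnt_r
        c = ((L - r[:, None]) * mask).sum(axis=0) / cnt_c
    out = A.copy()
    fill = np.exp(r[:, None] + c[None, :])
    out[~mask] = fill[~mask]
    return out
```

On exactly unit-consistent data (entries of the form u[i]*v[j]), the log matrix decomposes additively, so the sketch recovers missing entries exactly; the known entries are passed through untouched.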
Related papers
- Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Generalization Guarantees via Algorithm-dependent Rademacher Complexity [33.408679482592426]
We propose a measure to control generalization error, which is the empirical Rademacher complexity of an algorithm- and data-dependent hypothesis class.
We obtain novel bounds based on the finite fractal dimension, which (a) extend previous fractal-type bounds from continuous to finite hypothesis classes, and (b) avoid a mutual information term.
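For intuition on the quantity involved, the empirical Rademacher complexity of a small finite hypothesis class can be estimated by Monte Carlo. This sketch does not model the algorithm- and data-dependent class the paper constructs; it is a generic illustration with hypothetical names.

```python
import numpy as np

def empirical_rademacher(preds, n_draws=10000, seed=0):
    """Monte Carlo estimate of empirical Rademacher complexity.

    preds: (H, n) array where preds[h, i] is hypothesis h evaluated on
    the i-th point of a fixed sample (hypothetical setup, for
    illustration only).  Estimates
    E_sigma[ max_h (1/n) * sum_i sigma_i * preds[h, i] ].
    """
    rng = np.random.default_rng(seed)
    n = preds.shape[1]
    # Independent uniform +/-1 Rademacher signs for each draw.
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    corr = sigma @ preds.T / n  # (n_draws, H) averaged sign correlations
    return corr.max(axis=1).mean()
```

As sanity checks, a single constant-zero hypothesis has complexity 0, and the two-element class {+1, -1} on a single point has complexity 1.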
arXiv Detail & Related papers (2023-07-04T18:37:38Z) - A Simple and Scalable Tensor Completion Algorithm via Latent Invariant
Constraint for Recommendation System [6.125831939376033]
We study a novel method to efficiently and accurately learn parameters of a model for the unobservable personal preferences that underlie user ratings.
By regularizing the tensor decomposition with a single latent invariant, we achieve three properties for a reliable recommender system.
arXiv Detail & Related papers (2022-06-27T15:03:18Z) - Instance-Dependent Confidence and Early Stopping for Reinforcement
Learning [99.57168572237421]
Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure.
This research provides guarantees that explain ex post the performance differences observed.
A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice.
arXiv Detail & Related papers (2022-01-21T04:25:35Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with novel integration of three ingredients: regularized policy, dual regularizer, and Nesterov's gradient descent dual.
This is the first demonstration that nonconcave CMDP problems can attain the lower bound of $\mathcal{O}(1/\epsilon)$ complexity for optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Jointly Modeling and Clustering Tensors in High Dimensions [6.072664839782975]
We consider the problem of jointly modeling and clustering of tensors.
We propose an efficient high-dimensional expectation-maximization algorithm that converges geometrically to a neighborhood that is within statistical precision.
arXiv Detail & Related papers (2021-04-15T21:06:16Z) - Complementary Composite Minimization, Small Gradients in General Norms,
and Applications to Regression Problems [14.759688428864157]
Composite minimization is a powerful framework in large-scale convex optimization.
We introduce a new algorithmic framework for complementary composite minimization.
We prove that the algorithms resulting from our framework are near-optimal in most of the standard optimization settings.
arXiv Detail & Related papers (2021-01-26T19:21:28Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient algorithm which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $p$-normed finite-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z) - Simplified Swarm Optimization for Bi-Objection Active Reliability
Redundancy Allocation Problems [1.5990720051907859]
The reliability redundancy allocation problem (RRAP) is a well-known problem in system design, development, and management.
In this study, a bi-objective RRAP is formulated by changing the cost constraint as a new goal.
To solve the proposed problem, a new simplified swarm optimization (SSO) with a penalty function, a real one-type solution structure, a number-based self-adaptive new update mechanism, a constrained non-dominated solution selection, and a new pBest replacement policy is developed.
arXiv Detail & Related papers (2020-06-17T13:15:44Z) - On dissipative symplectic integration with applications to
gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
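One well-known instance of such a dissipative discretization is conformal symplectic Euler for a damped Hamiltonian system, which reproduces heavy-ball (momentum) gradient descent. The sketch below is a generic illustration of that connection, not the paper's specific scheme; the function name and parameters are mine.

```python
import numpy as np

def conformal_symplectic_euler(grad, q0, p0, h=0.01, gamma=1.0, steps=1000):
    """Dissipative (conformal symplectic) Euler steps for
    H(q, p) = |p|^2 / 2 + f(q) with linear damping gamma.

    Illustrative of the kind of discretization the paper discusses,
    not its exact scheme: the momentum is contracted by exp(-gamma*h)
    each step, which recovers heavy-ball / momentum gradient descent.
    """
    q = np.asarray(q0, dtype=float)
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p = np.exp(-gamma * h) * p - h * grad(q)  # damped momentum kick
        q = q + h * p                             # position drift
    return q, p
```

On a convex quadratic f(q) = q**2 / 2 the iterates contract toward the minimizer at a geometric rate, mirroring the preserved convergence rates discussed in the abstract.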
arXiv Detail & Related papers (2020-04-15T00:36:49Z) - Polynomial-Time Exact MAP Inference on Discrete Models with Global
Dependencies [83.05591911173332]
The junction tree algorithm is the most general solution for exact MAP inference with run-time guarantees.
We propose a new graph transformation technique via node cloning which ensures a run-time for solving our target problem independently of the form of a corresponding clique tree.
arXiv Detail & Related papers (2019-12-27T13:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.