Fair Recommendation by Geometric Interpretation and Analysis of Matrix
Factorization
- URL: http://arxiv.org/abs/2301.03791v1
- Date: Tue, 10 Jan 2023 05:30:46 GMT
- Title: Fair Recommendation by Geometric Interpretation and Analysis of Matrix
Factorization
- Authors: Hao Wang
- Abstract summary: A matrix factorization-based recommender system is in effect an angle-preserving dimensionality reduction technique.
We reformulate the original angle-preserving dimensionality reduction problem as a distance-preserving dimensionality reduction problem.
We show that the input data of a recommender system in its original, higher dimension are distributed on concentric circles with interesting properties.
- Score: 4.658166900129066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A matrix factorization-based recommender system is in effect an
angle-preserving dimensionality reduction technique. Since the frequency of
items follows a power-law distribution, most vectors in the original dimension
of the user feature vectors and item feature vectors lie on the same
hyperplane. However, it is very difficult to reconstruct the embeddings in the
original dimension analytically, so we reformulate the original
angle-preserving dimensionality reduction problem as a distance-preserving
dimensionality reduction problem. We show that the input data of a recommender
system in its original, higher dimension are distributed on concentric circles
with interesting properties, and we design a paraboloid-based matrix
factorization named ParaMat to solve the recommendation problem. In the
experiment section, we compare our algorithm with 8 other algorithms and show
that our new method is the fairest compared with modern-day recommender
systems such as ZeroMat and DotMat Hybrid.
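As a toy illustration of the angle-preserving claim (a minimal sketch, not the paper's ParaMat algorithm, whose loss is not reproduced here), the snippet below fits a plain matrix factorization by gradient descent and compares pairwise cosine similarities of the item vectors before and after the dimensionality reduction; all names are illustrative.

```python
import numpy as np

def factorize(R, k=2, lr=0.01, epochs=500, seed=0):
    """Plain matrix factorization R ~ U @ V.T fitted by gradient descent
    on the observed (non-zero) entries; a generic baseline, not ParaMat."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    mask = R > 0  # treat zero entries as unobserved
    for _ in range(epochs):
        E = mask * (R - U @ V.T)  # error on observed entries only
        U += lr * (E @ V)
        V += lr * (E.T @ U)
    return U, V

def pairwise_cosines(M):
    """Cosine similarities (hence angles) between the rows of M."""
    X = M / np.clip(np.linalg.norm(M, axis=1, keepdims=True), 1e-12, None)
    return X @ X.T

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)
U, V = factorize(R)
print(pairwise_cosines(R.T))  # item-item angles in the original dimension
print(pairwise_cosines(V))    # item-item angles after the reduction
```

If the factorization is angle preserving, the two printed similarity matrices should be close for the well-observed items.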
Related papers
- Riemannian Optimization for Non-convex Euclidean Distance Geometry with Global Recovery Guarantees [6.422262171968397]
Two algorithms are proposed to solve the Euclidean Distance Geometry problem.
The first algorithm converges linearly to the true solution.
The second algorithm demonstrates strong numerical performance on both synthetic and real data.
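For context, the Euclidean Distance Geometry problem referenced above asks for a point configuration consistent with a partially observed set of pairwise distances; one standard non-convex formulation is

```latex
\min_{p_1,\dots,p_n \in \mathbb{R}^r} \;
\sum_{(i,j) \in \Omega} \left( \|p_i - p_j\|_2^2 - d_{ij}^2 \right)^2 ,
```

where $\Omega$ indexes the observed distances $d_{ij}$.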
arXiv Detail & Related papers (2024-10-08T21:19:22Z) - PMaF: Deep Declarative Layers for Principal Matrix Features [37.662505982849844]
We explore two differentiable deep declarative layers, namely least squares on sphere (LESS) and implicit eigen decomposition (IED).
Both layers can be used to represent data features with a low-dimensional vector containing the dominant information of a high-dimensional matrix.
arXiv Detail & Related papers (2023-06-26T15:13:36Z) - Sufficient dimension reduction for feature matrices [3.04585143845864]
We propose a method called principal support matrix machine (PSMM) for matrix sufficient dimension reduction.
Our numerical analysis demonstrates that the PSMM outperforms existing methods and has strong interpretability in real data applications.
arXiv Detail & Related papers (2023-03-07T23:16:46Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application
to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
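A minimal sketch of that idea, assuming (this is a generic construction, not the paper's algorithm) that one alternates an ascent step on the quadratic objective $\mathrm{tr}(X^\top A X)$, soft-thresholding to promote sparsity, and a QR retraction back onto the Stiefel manifold:

```python
import numpy as np

def sparse_dominant_eigenspace(A, k, lam=0.05, lr=0.1, iters=500, seed=0):
    """Sparsity-promoting orthogonal iteration: a generic sketch, not the
    algorithm proposed in the paper above."""
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(iters):
        X = X + lr * (A @ X)                               # ascent on tr(X.T A X)
        X = np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)  # soft-threshold
        X, _ = np.linalg.qr(X)                             # retract to Stiefel
    return X

A = np.diag([5.0, 4.0, 0.5, 0.2, 0.1])  # dominant 2D eigenspace is axis-aligned
print(np.round(sparse_dominant_eigenspace(A, k=2), 3))
```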
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
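A common way to write this kind of objective (the paper's exact formulation may differ; this is a hedged generic form) combines a self-reconstruction error with a row-sparsity penalty on the projection matrix $W \in \mathbb{R}^{d \times k}$:

```latex
\min_{W^\top W = I} \; \|X - X W W^\top\|_F^2 + \lambda \|W\|_{2,p},
\qquad
\|W\|_{2,p} = \Big( \sum_{i=1}^{d} \|w^i\|_2^{\,p} \Big)^{1/p},
```

where $w^i$ is the $i$-th row of $W$; rows driven to zero correspond to discarded features.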
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
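Concretely, the convex formulation described above reads

```latex
\min_{L,\,S} \; \|L\|_* + \lambda \|S\|_1
\quad \text{subject to} \quad
\mathcal{P}_\Omega(L + S) = \mathcal{P}_\Omega(M),
```

where $\|L\|_*$ is the nuclear norm, $\|S\|_1$ is the entrywise $l_1$ norm, and $\mathcal{P}_\Omega$ restricts to the observed entries of $M$.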
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
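As a rough illustration of the sketch-and-solve recipe for L2-regularized least squares (a generic baseline; the paper's contribution is adapting the sketch size to the effective dimension), one can compress the data with a Gaussian embedding before solving the normal equations:

```python
import numpy as np

def sketched_ridge(A, b, lam, m, seed=0):
    """Solve min_x ||S(Ax - b)||^2 + lam ||x||^2 with a Gaussian sketch S;
    a generic sketch-and-solve baseline, not the paper's adaptive method."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian embedding
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 50))
x_true = rng.standard_normal(50)
b = A @ x_true + 0.01 * rng.standard_normal(5000)
print(np.linalg.norm(sketched_ridge(A, b, lam=1.0, m=500) - x_true))
```

An SRHT sketch would replace the dense Gaussian `S` with a subsampled randomized Hadamard transform, cutting the sketching cost from O(mnd) to roughly O(nd log n).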
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - A Block Coordinate Descent-based Projected Gradient Algorithm for
Orthogonal Non-negative Matrix Factorization [0.0]
This article utilizes the projected gradient (PG) method for the non-negative matrix factorization (NMF) problem.
We penalise the orthonormality constraints and apply the PG method via a block coordinate descent approach.
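A minimal sketch of that approach, assuming the standard penalised objective $\|X - WH\|_F^2 + \rho\,\|W^\top W - I\|_F^2$ with non-negativity enforced by projection (the paper's exact update schedule may differ):

```python
import numpy as np

def onmf_pg(X, k, rho=1.0, lr=1e-3, iters=2000, seed=0):
    """Projected gradient for orthogonal NMF with a penalised
    orthonormality constraint on W; a generic sketch of the approach."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W, H = rng.random((n, k)), rng.random((k, m))
    for _ in range(iters):
        # Block coordinate descent: gradient step on W, project onto the
        # non-negative orthant, then the same for H.
        gW = -2 * (X - W @ H) @ H.T + 4 * rho * W @ (W.T @ W - np.eye(k))
        W = np.maximum(W - lr * gW, 0.0)
        gH = -2 * W.T @ (X - W @ H)
        H = np.maximum(H - lr * gH, 0.0)
    return W, H

X = np.random.default_rng(2).random((20, 12))
W, H = onmf_pg(X, k=3)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative fit
print(np.round(W.T @ W, 2))  # approximately orthonormal columns
```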
arXiv Detail & Related papers (2020-03-23T13:24:43Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard
Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z) - Tangent-space methods for truncating uniform MPS [0.0]
A central primitive in quantum tensor network simulations is the problem of approximating a matrix product state with one of a lower bond dimension.
We formulate a tangent-space based variational algorithm to achieve this for uniform (infinite) matrix product states.
arXiv Detail & Related papers (2020-01-31T14:54:50Z)