Supervised Quantile Normalization for Low-rank Matrix Approximation
- URL: http://arxiv.org/abs/2002.03229v2
- Date: Fri, 3 Jul 2020 18:48:33 GMT
- Title: Supervised Quantile Normalization for Low-rank Matrix Approximation
- Authors: Marco Cuturi, Olivier Teboul, Jonathan Niles-Weed, Jean-Philippe Vert
- Abstract summary: We learn the parameters of quantile normalization operators that can operate row-wise on the values of $X$ and/or of its factorization $UV$ to improve the quality of the low-rank representation of $X$ itself.
We demonstrate the applicability of these techniques on synthetic and genomics datasets.
- Score: 50.445371939523305
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low rank matrix factorization is a fundamental building block in machine
learning, used for instance to summarize gene expression profile data or
word-document counts. To be robust to outliers and differences in scale across
features, a matrix factorization step is usually preceded by ad-hoc feature
normalization steps, such as \texttt{tf-idf} scaling or data whitening. We
propose in this work to learn these normalization operators jointly with the
factorization itself. More precisely, given a $d\times n$ matrix $X$ of $d$
features measured on $n$ individuals, we propose to learn the parameters of
quantile normalization operators that can operate row-wise on the values of $X$
and/or of its factorization $UV$ to improve the quality of the low-rank
representation of $X$ itself. This optimization is facilitated by the
introduction of a new differentiable quantile normalization operator built
using optimal transport, providing new results on top of existing work by
Cuturi et al. (2019). We demonstrate the applicability of these techniques on
synthetic and genomics datasets.
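
As a reading aid, here is a minimal NumPy sketch of a differentiable, row-wise quantile-normalization operator built from entropy-regularized optimal transport (Sinkhorn iterations), in the spirit of the abstract. The function names, hyper-parameters, and toy data are illustrative assumptions, not the authors' implementation; in the paper the target quantile values and weights are learned jointly with the factorization $UV$ by gradient descent, which this sketch does not do.

```python
# Minimal sketch (assumptions noted above): differentiable row-wise quantile
# normalization via entropy-regularized optimal transport (Sinkhorn iterations).
import numpy as np

def sinkhorn_plan(x, q, a, b, eps=0.05, n_iter=300):
    """Entropic OT plan between data values x (weights a) and targets q (weights b)."""
    C = (x[:, None] - q[None, :]) ** 2       # squared-distance cost, shape (n, m)
    K = np.exp(-C / eps)                     # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                  # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]       # transport plan (rows sum to a, cols to b)

def soft_quantile_normalize(x, q, b, eps=0.05):
    """Softly map the n values of one feature row onto m target quantile values q."""
    n = x.shape[0]
    a = np.full(n, 1.0 / n)                  # uniform weights on the observed values
    x01 = (x - x.min()) / (x.max() - x.min() + 1e-12)   # squash to [0, 1] for stability
    P = sinkhorn_plan(x01, q, a, b, eps=eps)
    return (P @ q) / a                       # barycentric projection = normalized row

# Toy usage on a d x n matrix X; q and b are fixed here, but they are the
# quantities the paper proposes to learn jointly with the low-rank factors U, V.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(5, 100))             # d = 5 features, n = 100 individuals
m = 10
q = np.linspace(0.0, 1.0, m)                 # target quantile values (learnable)
b = np.full(m, 1.0 / m)                      # target quantile weights (learnable)
X_norm = np.stack([soft_quantile_normalize(row, q, b) for row in X])
print(X_norm.shape)                          # (5, 100)
```

Because every step (including the Sinkhorn loop) is differentiable, gradients can in principle flow back to the target quantile values `q` and weights `b`, which is what makes joint learning with the low-rank factors possible.
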
Related papers
- Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z) - Optimal Matrix-Mimetic Tensor Algebras via Variable Projection [0.0]
Matrix mimeticity arises from interpreting tensors as operators that can be multiplied, factorized, and analyzed analogously to matrices.
We learn optimal linear mappings and corresponding tensor representations without relying on prior knowledge of the data.
We provide original theory of uniqueness of the transformation and convergence analysis of our variable-projection-based algorithm.
arXiv Detail & Related papers (2024-06-11T04:52:23Z) - Transformers as Support Vector Machines [54.642793677472724]
We establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem.
We characterize the implicit bias of 1-layer transformers optimized with gradient descent.
We believe these findings inspire the interpretation of transformers as a hierarchy of SVMs that separates and selects optimal tokens.
arXiv Detail & Related papers (2023-08-31T17:57:50Z) - Private Synthetic Data for Multitask Learning and Marginal Queries [30.123686707904543]
A key innovation in our algorithm is the ability to directly handle numerical features.
Eliminating the need for binning allows us to produce synthetic data preserving large numbers of statistical queries.
Our method consistently runs 2-5x faster than the best comparable techniques.
arXiv Detail & Related papers (2022-09-15T16:00:44Z) - Majorization-minimization for Sparse Nonnegative Matrix Factorization
with the $\beta$-divergence [2.3787352248749376]
It is well known that the norm of the other factor (the dictionary matrix) needs to be controlled in order to avoid an ill-posed formulation.
Standard practice consists in constraining the columns of the dictionary to have unit norm, which leads to a nontrivial optimization problem.
We derive block-descent majorization-minimization algorithms that result in simple multiplicative updates for either $\ell_1$-regularization or the more "aggressive" log-regularization (a generic multiplicative-update sketch appears after this list).
arXiv Detail & Related papers (2022-07-13T16:09:29Z) - An Online Riemannian PCA for Stochastic Canonical Correlation Analysis [37.8212762083567]
We present an efficient algorithm (RSG+) for canonical correlation analysis (CCA) using a reparametrization of the projection matrices.
While the paper primarily focuses on the formulation and technical analysis of its properties, our experiments show that the empirical behavior on common datasets is quite promising.
arXiv Detail & Related papers (2021-06-08T23:38:29Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature
Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z) - On Using Hamiltonian Monte Carlo Sampling for Reinforcement Learning
Problems in High-dimension [7.200655637873445]
Hamiltonian Monte Carlo (HMC) sampling offers a tractable way to generate data for training RL algorithms.
We introduce a framework, called Hamiltonian $Q$-Learning, that demonstrates, both theoretically and empirically, that $Q$ values can be learned from a dataset generated by HMC samples of actions, rewards, and state transitions.
arXiv Detail & Related papers (2020-11-11T17:35:25Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation.
arXiv Detail & Related papers (2020-03-30T08:40:35Z)
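
For the sparse NMF entry above, the following is a hedged sketch of classical multiplicative updates for $\ell_1$-regularized NMF in the Euclidean ($\beta = 2$) case. It only illustrates the style of multiplicative update that summary mentions; it is not that paper's norm-constrained majorization-minimization scheme for general $\beta$-divergences, and the unit-norm renormalization step below is precisely the heuristic whose lack of guarantees motivates that work.

```python
# Hedged sketch (see caveats above): multiplicative updates for l1-sparse NMF,
# Euclidean (beta = 2) case only, with the usual unit-norm column heuristic.
import numpy as np

def sparse_nmf(X, k, lam=0.1, n_iter=500, eps=1e-12, seed=0):
    """Factor X ~= W @ H with nonnegative W, H and an l1 penalty on H."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = rng.random((d, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Activation update: the l1 term appears additively in the denominator.
        H *= (W.T @ X) / (W.T @ W @ H + lam + eps)
        # Dictionary update (unpenalized factor).
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        # Heuristic renormalization of dictionary columns to unit l2 norm,
        # compensating in H; this breaks the monotone-decrease guarantee of the
        # plain updates, which is what the paper above addresses rigorously.
        norms = np.linalg.norm(W, axis=0) + eps
        W /= norms
        H *= norms[:, None]
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(30, 200)))  # toy nonnegative data
W, H = sparse_nmf(X, k=5, lam=0.5)
print(W.shape, H.shape)                       # (30, 5) (5, 200)
```
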
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.