Stochastic gradient descent estimation of generalized matrix factorization models with application to single-cell RNA sequencing data
- URL: http://arxiv.org/abs/2412.20509v1
- Date: Sun, 29 Dec 2024 16:02:15 GMT
- Title: Stochastic gradient descent estimation of generalized matrix factorization models with application to single-cell RNA sequencing data
- Authors: Cristian Castiglione, Alexandre Segers, Lieven Clement, Davide Risso
- Abstract summary: Single-cell RNA sequencing allows the quantitation of gene expression at the individual cell level.
Dimensionality reduction is a common preprocessing step to simplify the visualization, clustering, and phenotypic characterization of samples.
We present a generalized matrix factorization model assuming a general exponential dispersion family distribution.
We propose a scalable adaptive descent algorithm that allows us to estimate the model efficiently.
- Score: 41.94295877935867
- Abstract: Single-cell RNA sequencing allows the quantitation of gene expression at the individual cell level, enabling the study of cellular heterogeneity and gene expression dynamics. Dimensionality reduction is a common preprocessing step to simplify the visualization, clustering, and phenotypic characterization of samples. This step, often performed using principal component analysis or closely related methods, is challenging because of the size and complexity of the data. In this work, we present a generalized matrix factorization model assuming a general exponential dispersion family distribution and we show that many of the proposed approaches in the single-cell dimensionality reduction literature can be seen as special cases of this model. Furthermore, we propose a scalable adaptive stochastic gradient descent algorithm that allows us to estimate the model efficiently, enabling the analysis of millions of cells. We further introduce a novel warm start initialization method, designed to accelerate algorithm convergence and increase the precision of the final estimates. Moreover, we discuss strategies for dealing with missing values and model selection. We benchmark the proposed algorithm through extensive numerical experiments against state-of-the-art methods and showcase its use in real-world biological applications. The proposed method systematically outperforms existing methods of both generalized and non-negative matrix factorization, demonstrating faster execution times while maintaining, or even enhancing, matrix reconstruction fidelity and accuracy in biological signal extraction. Finally, all the methods discussed here are implemented in an efficient open-source R package, sgdGMF, available at github.com/CristianCastiglione/sgdGMF
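For intuition, here is a minimal sketch of the kind of estimator the abstract describes, assuming a Poisson likelihood with log link (one member of the exponential dispersion family): a low-rank factorization fit by minibatch stochastic gradient descent with AdaGrad-style adaptive step sizes. All names and tuning constants are illustrative assumptions, not the sgdGMF implementation, which also adds the warm start, missing-value handling, and model selection mentioned above.

```python
# A minimal sketch, assuming a Poisson likelihood with log link; not the
# sgdGMF implementation.
import numpy as np

rng = np.random.default_rng(0)

def fit_poisson_gmf(Y, rank=10, n_steps=2000, batch=256, lr=0.1, eps=1e-8):
    n, p = Y.shape
    U = rng.normal(scale=0.1, size=(n, rank))    # cell scores
    V = rng.normal(scale=0.1, size=(p, rank))    # gene loadings
    gU, gV = np.zeros_like(U), np.zeros_like(V)  # AdaGrad accumulators
    for _ in range(n_steps):
        rows = rng.choice(n, size=batch, replace=False)  # minibatch of cells
        eta = np.clip(U[rows] @ V.T, -30, 30)    # linear predictor
        mu = np.exp(eta)                         # mean via log link
        resid = mu - Y[rows]                     # Poisson score: d(-loglik)/d(eta)
        dU = resid @ V / batch                   # gradient for sampled rows of U
        dV = resid.T @ U[rows] / batch           # gradient for V
        gU[rows] += dU ** 2
        gV += dV ** 2
        U[rows] -= lr * dU / np.sqrt(gU[rows] + eps)  # adaptive step sizes
        V -= lr * dV / np.sqrt(gV + eps)
    return U, V

# Toy usage on a simulated count matrix
Y = rng.poisson(1.0, size=(500, 200)).astype(float)
U, V = fit_poisson_gmf(Y, rank=5)
```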
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
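As a hedged illustration of the inverse-free idea only (not the paper's method): the sketch below pairs a Hutchinson Monte Carlo estimator with a conjugate gradient solver so that a quantity like trace(K^{-1} A), which arises in Gaussian-model gradients, is estimated without ever forming K^{-1}. The paper additionally backpropagates through the solver iterations ("unrolling"), which is omitted here; all names are ours.

```python
# Monte Carlo + iterative solver in place of an explicit matrix inverse.
import numpy as np

def conjugate_gradient(A, b, iters=100, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A, no inverse formed."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def trace_inv_times(K, A, n_probes=10, seed=0):
    """Monte Carlo estimate of trace(K^{-1} A) via Rademacher probes."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)        # E[z z^T] = I
        total += z @ conjugate_gradient(K, A @ z)  # z^T K^{-1} A z
    return total / n_probes
```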
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Neural incomplete factorization: learning preconditioners for the conjugate gradient method [2.899792823251184]
We develop a data-driven approach to accelerate the generation of effective preconditioners.
We replace the typically hand-engineered preconditioners by the output of graph neural networks.
Our method generates an incomplete factorization of the matrix and is, therefore, referred to as neural incomplete factorization (NeuralIF).
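To make the role of that factor concrete, here is a hedged sketch of preconditioned conjugate gradient in which the preconditioner M = L L^T is applied via two triangular solves. In NeuralIF the lower-triangular L would be the output of a graph neural network; here it is simply an input, a stand-in.

```python
# Preconditioned CG where an incomplete factor L supplies M = L L^T.
import numpy as np
from scipy.linalg import solve_triangular

def preconditioned_cg(A, b, L, iters=200, tol=1e-10):
    def apply_Minv(v):                        # M^{-1} v = L^{-T} (L^{-1} v)
        y = solve_triangular(L, v, lower=True)
        return solve_triangular(L.T, y, lower=False)
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```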
arXiv Detail & Related papers (2023-05-25T11:45:46Z)
- Learning Causal Representations of Single Cells via Sparse Mechanism Shift Modeling [3.2435888122704037]
We propose a deep generative model of single-cell gene expression data for which each perturbation is treated as an intervention targeting an unknown, but sparse, subset of latent variables.
We benchmark these methods on simulated single-cell data to evaluate their performance at latent units recovery, causal target identification and out-of-domain generalization.
arXiv Detail & Related papers (2022-11-07T15:47:40Z)
- Graph Polynomial Convolution Models for Node Classification of Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
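A minimal sketch of what a polynomial graph convolution computes, assuming symmetric degree normalization and one learnable weight matrix per adjacency power; the paper's exact parameterization may differ.

```python
# Polynomial filter on node features: sum_k (A_norm^k X) @ W_k.
import numpy as np

def graph_polynomial_conv(A, X, weights):
    """weights[k] has shape (d_in, d_out); returns sum_k A_norm^k X weights[k]."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    out = 0.0
    prop = X                               # k = 0 term: A^0 X = X
    for W in weights:
        out = out + prop @ W
        prop = A_norm @ prop               # advance to the next power of A
    return out
```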
arXiv Detail & Related papers (2022-09-12T04:46:55Z)
- Adaptive Latent Factor Analysis via Generalized Momentum-Incorporated Particle Swarm Optimization [6.2303427193075755]
A stochastic gradient descent (SGD) algorithm is an effective learning strategy to build a latent factor analysis (LFA) model on a high-dimensional and incomplete (HDI) matrix.
A particle swarm optimization (PSO) algorithm is commonly adopted to make an SGD-based LFA model's hyper-parameters, i.e., the learning rate and regularization coefficient, self-adaptive.
This paper incorporates more historical information into each particle's evolutionary process to avoid premature convergence.
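For reference, a bare-bones sketch of the classical PSO update that such schemes build on, applied to those two hyper-parameters; `evaluate` is a toy placeholder objective, and the paper's generalized momentum terms (the extra historical information) are not reproduced here.

```python
# Classical PSO over (learning rate, regularization coefficient).
import numpy as np

rng = np.random.default_rng(0)

def evaluate(params):
    """Placeholder for the validation loss of an SGD-based LFA model."""
    lr, lam = params
    return (np.log10(lr) + 2) ** 2 + (np.log10(lam) + 3) ** 2  # toy surrogate

def pso(n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    lo, hi = [1e-4, 1e-5], [1e-1, 1e-2]          # search box (assumed)
    x = rng.uniform(lo, hi, size=(n_particles, 2))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([evaluate(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([evaluate(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest
```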
arXiv Detail & Related papers (2022-08-04T03:15:07Z)
- Deviance Matrix Factorization [6.509665408765348]
We investigate a general matrix factorization for deviance-based data losses, extending the ubiquitous singular value decomposition beyond squared error loss.
Our method leverages classical statistical methodology from generalized linear models (GLMs) and provides an efficient algorithm that is flexible enough to allow for structural zeros via entry weights.
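As a small illustration of a deviance-based loss with entry weights, assuming the Poisson case: setting w_ij = 0 drops an entry (e.g., a structural zero) from the fit.

```python
# Entry-weighted Poisson deviance between counts Y and fitted means M.
import numpy as np

def weighted_poisson_deviance(Y, M, W):
    """D = 2 * sum_ij w_ij * (y_ij * log(y_ij / m_ij) - (y_ij - m_ij))."""
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(Y > 0, Y * np.log(Y / M), 0.0)  # 0 log 0 := 0
    return 2.0 * np.sum(W * (term - (Y - M)))
```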
arXiv Detail & Related papers (2021-10-12T01:27:55Z)
- Generalized Matrix Factorization: efficient algorithms for fitting generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable models (GLLVMs) generalize classical factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
arXiv Detail & Related papers (2020-10-06T04:28:19Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Estimation of sparse Gaussian graphical models with hidden clustering structure [8.258451067861932]
We propose a model to estimate the sparse Gaussian graphical models with hidden clustering structure.
We develop a symmetric Gauss-Seidel based alternating direction method of multipliers (ADMM).
Numerical experiments on both synthetic data and real data demonstrate the good performance of our model.
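A minimal sketch of the plain ADMM for the sparse Gaussian graphical model (graphical lasso) that such a method extends; the hidden clustering penalty and the symmetric Gauss-Seidel updates of the paper are omitted, and all constants are assumptions.

```python
# ADMM for: minimize -logdet(X) + tr(SX) + lam * ||X||_1 over X = Z.
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, iters=200):
    p = S.shape[0]
    Z = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(iters):
        # X-update: closed form via eigendecomposition of rho*(Z - U) - S
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        x = (w + np.sqrt(w ** 2 + 4 * rho)) / (2 * rho)  # solves rho*x - 1/x = w
        X = Q @ np.diag(x) @ Q.T
        # Z-update: elementwise soft-thresholding
        A = X + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        U += X - Z  # dual update
    return Z
```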
arXiv Detail & Related papers (2020-04-17T08:43:31Z)