Orthogonal Nonnegative Matrix Factorization with Sparsity Constraints
- URL: http://arxiv.org/abs/2210.02672v4
- Date: Fri, 04 Apr 2025 05:59:30 GMT
- Title: Orthogonal Nonnegative Matrix Factorization with Sparsity Constraints
- Authors: Salar Basiri, Alisina Bayati, Srinivasa Salapaka
- Abstract summary: This article presents a novel approach to solving the sparsity-constrained Orthogonal Nonnegative Matrix Factorization (SCONMF) problem. By reformulating SCONMF as a capacity-constrained facility-location problem, the proposed method naturally integrates non-negativity, orthogonality, and sparsity constraints. Specifically, the approach combines a control-barrier-function (CBF) framework, developed for dynamic optimal control design problems, with a maximum-entropy-principle framework for facility-location problems to enforce these constraints while ensuring robust factorization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article presents a novel approach to solving the sparsity-constrained Orthogonal Nonnegative Matrix Factorization (SCONMF) problem, which requires decomposing a non-negative data matrix into the product of two lower-rank non-negative matrices, X=WH, where the mixing matrix H has orthonormal rows (HH^T=I) and each row of H satisfies an upper bound on its number of nonzero elements. By reformulating SCONMF as a capacity-constrained facility-location problem (CCFLP), the proposed method naturally integrates non-negativity, orthogonality, and sparsity constraints. Specifically, the approach combines a control-barrier-function (CBF) framework, developed for dynamic optimal control design problems, with a maximum-entropy-principle framework for facility-location problems to enforce these constraints while ensuring robust factorization. Additionally, this work introduces a quantitative approach for determining the "true" rank of W or H, equivalent to the number of "true" features - a critical aspect in ONMF applications where the number of features is unknown. Simulations on various datasets demonstrate significantly improved factorizations, with reconstruction errors up to 150 times smaller than those of existing methods, while strictly satisfying all constraints; existing methods struggle to balance accuracy and constraint adherence.
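For readers who want a concrete handle on the constraint set, the following minimal NumPy sketch checks a candidate factorization against the SCONMF constraints and reports its relative reconstruction error. It is purely illustrative: the function and variable names are ours, and it does not implement the paper's CBF/maximum-entropy algorithm.

```python
import numpy as np

def sconmf_feasibility(X, W, H, max_nonzeros_per_row, tol=1e-8):
    """Check the SCONMF constraints for a candidate factorization X ~ W @ H."""
    nonneg = (W >= -tol).all() and (H >= -tol).all()                # non-negativity of both factors
    ortho = np.allclose(H @ H.T, np.eye(H.shape[0]), atol=1e-6)     # orthonormal rows: HH^T = I
    sparse = (np.count_nonzero(H, axis=1) <= max_nonzeros_per_row).all()  # per-row sparsity budget
    rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)         # relative reconstruction error
    return {"nonnegative": nonneg, "orthogonal_rows": ortho,
            "sparsity_ok": sparse, "relative_error": rel_err}

# Toy example: a trivially feasible rank-2 factorization of a 4x6 matrix.
H = np.zeros((2, 6)); H[0, :3] = 1 / np.sqrt(3); H[1, 3:] = 1 / np.sqrt(3)
W = np.abs(np.random.default_rng(0).normal(size=(4, 2)))
X = W @ H
print(sconmf_feasibility(X, W, H, max_nonzeros_per_row=3))
```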
Related papers
- Computational Efficient Informative Nonignorable Matrix Completion: A Row- and Column-Wise Matrix U-Statistic Pseudo-Likelihood Approach [2.2306682526405868]
We establish a unified framework to deal with the high dimensional matrix completion problem.
We derive a row- and column-wise matrix U-statistics type loss function, with the nuclear norm for regularization.
A singular value proximal gradient algorithm is developed to solve the proposed optimization problem.
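For context, the proximal step associated with nuclear-norm regularization is singular value soft-thresholding. The sketch below shows that step in NumPy; the U-statistic data-fit term from the paper is abstracted away behind a generic gradient callback, which is our simplification.

```python
import numpy as np

def svt(Z, tau):
    """Proximal operator of tau * nuclear norm: soft-threshold the singular values of Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_grad_step(M, grad_f, step, tau):
    """One singular value proximal gradient iteration for a generic smooth loss with gradient grad_f."""
    return svt(M - step * grad_f(M), step * tau)

M = np.random.default_rng(0).normal(size=(5, 5))
print(np.linalg.svd(svt(M, 1.0), compute_uv=False))   # singular values shrunk toward zero
```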
arXiv Detail & Related papers (2025-04-05T01:41:53Z) - An Accelerated Alternating Partial Bregman Algorithm for ReLU-based Matrix Decomposition [0.0]
In this paper, we investigate the sparse low-rank characteristics of rectified (ReLU-based) decompositions of non-negative matrices.
We propose a novel regularization term incorporating useful structures in clustering and compression tasks.
We derive the corresponding closed-form solutions while ensuring that the $L$-smooth property holds for any $L \ge 1$.
arXiv Detail & Related papers (2025-03-04T08:20:34Z) - Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing an approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
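As a rough illustration of an eigen step (not the paper's exact procedure), one can move along an eigenvector of the Hessian associated with its most negative eigenvalue, scaled to a trust-region radius:

```python
import numpy as np

def negative_curvature_step(hessian, radius):
    """Return a direction of most negative curvature, scaled to the trust-region radius."""
    eigvals, eigvecs = np.linalg.eigh(hessian)        # eigenvalues in ascending order
    if eigvals[0] >= 0:
        return np.zeros(hessian.shape[0])             # no negative curvature: skip the eigen step
    return radius * eigvecs[:, 0]                     # eigenvector of the most negative eigenvalue

H = np.array([[1.0, 0.0], [0.0, -2.0]])
print(negative_curvature_step(H, radius=0.5))         # moves along the -2 curvature direction
```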
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Efficient algorithms for regularized Poisson Non-negative Matrix Factorization [5.947994020693636]
We consider the regularized Poisson Non-negative Matrix Factorization (NMF) problem.
This problem holds significant relevance in numerous Machine Learning applications.
We build appropriate majorizing functions for Lipschitz and relatively smooth functions, and show how to introduce linear constraints into the problem.
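For orientation, the unregularized Poisson/KL objective admits the classical multiplicative updates sketched below. This is standard background rather than the paper's majorization-minimization scheme, and it omits the regularizers and linear constraints discussed there.

```python
import numpy as np

def poisson_nmf(X, rank, n_iter=200, eps=1e-10, seed=0):
    """Classical multiplicative updates for NMF under the Poisson / KL-divergence loss."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        W *= (X / WH) @ H.T / H.sum(axis=1)              # update W: rows reweighted by KL residual
        WH = W @ H + eps
        H *= W.T @ (X / WH) / W.sum(axis=0)[:, None]     # update H symmetrically
    return W, H
```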
arXiv Detail & Related papers (2024-04-25T10:57:50Z) - Low-Rank Prune-And-Factorize for Language Model Compression [18.088550230146247]
Matrix factorization fails to retain satisfactory performance under moderate to high compression rates.
We propose two techniques: sparsity-aware SVD and mixed-rank fine-tuning.
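As background, the low-rank factorization step underlying such compression pipelines can be illustrated with a plain truncated SVD of a weight matrix; the sparsity-aware weighting and mixed-rank fine-tuning proposed in the paper are not reproduced in this sketch.

```python
import numpy as np

def factorize_weight(W, rank):
    """Replace a dense weight matrix W (out x in) by two low-rank factors via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]      # (out x rank), singular values folded into the left factor
    B = Vt[:rank, :]                # (rank x in)
    return A, B                     # the dense layer x @ W.T becomes (x @ B.T) @ A.T

W = np.random.default_rng(0).normal(size=(64, 128))
A, B = factorize_weight(W, rank=16)
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))    # relative approximation error
```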
arXiv Detail & Related papers (2023-06-25T07:38:43Z) - An inexact LPA for DC composite optimization and application to matrix completions with outliers [5.746154410100363]
This paper concerns a class of composite optimization problems.
By leveraging the composite structure, we provide a condition for the potential function to have the KL property with exponent $1/2$ at the iterate sequence.
arXiv Detail & Related papers (2023-03-29T16:15:34Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - Semi-Supervised Subspace Clustering via Tensor Low-Rank Representation [64.49871502193477]
We propose a novel semi-supervised subspace clustering method, which is able to simultaneously augment the initial supervisory information and construct a discriminative affinity matrix.
Comprehensive experimental results on six commonly-used benchmark datasets demonstrate the superiority of our method over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-21T01:47:17Z) - Log-based Sparse Nonnegative Matrix Factorization for Data
Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the proposed method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z) - Learning a Compressive Sensing Matrix with Structural Constraints via
Maximum Mean Discrepancy Optimization [17.104994036477308]
We introduce a learning-based algorithm to obtain a measurement matrix for compressive sensing related recovery problems.
The recent success of such metrics in neural-network-related topics motivates a machine-learning-based solution to the problem.
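For reference, a common estimator of the squared maximum mean discrepancy with a Gaussian kernel looks as follows; this is generic background, not the paper's training pipeline or its specific kernel choice.

```python
import numpy as np

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimator of squared maximum mean discrepancy with a Gaussian (RBF) kernel."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
print(mmd_squared(rng.normal(size=(100, 5)), rng.normal(loc=1.0, size=(100, 5))))
```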
arXiv Detail & Related papers (2021-10-14T08:35:54Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Adversarially-Trained Nonnegative Matrix Factorization [77.34726150561087]
We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
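One way to read the inner maximization, assuming a Frobenius-norm-bounded attacker (our assumption, not necessarily the paper's exact norm), is that the worst-case perturbation aligns with the current residual:

```python
import numpy as np

def worst_case_perturbation(X, W, H, eps):
    """Frobenius-bounded attack maximizing the residual ||X + R - WH||_F (illustrative closed form)."""
    residual = X - W @ H
    norm = np.linalg.norm(residual)
    return eps * residual / norm if norm > 0 else np.zeros_like(X)

# The outer step would then run an NMF update on the perturbed data X + R.
```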
arXiv Detail & Related papers (2021-04-10T13:13:17Z) - Column $\ell_{2,0}$-norm regularized factorization model of low-rank
matrix recovery and its computation [0.9281671380673306]
This paper is concerned with the column $\ell_{2,0}$-norm regularized factorization model of low-rank matrix recovery problems.
Numerical experiments are conducted with synthetic and real data examples.
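For clarity, the column $\ell_{2,0}$ norm counts the columns with nonzero Euclidean norm; a minimal NumPy check of that quantity (illustrative only) is:

```python
import numpy as np

def column_l20_norm(M, tol=1e-12):
    """Column l_{2,0} norm: the number of columns of M with nonzero Euclidean norm."""
    return int((np.linalg.norm(M, axis=0) > tol).sum())

M = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 3.0]])
print(column_l20_norm(M))   # 2 nonzero columns
```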
arXiv Detail & Related papers (2020-08-24T14:15:36Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
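For readers unfamiliar with conditional gradient methods, a plain Frank-Wolfe iteration over the probability simplex is sketched below; it omits the variance reduction, smoothing, and constraint subsampling that are the paper's actual contributions.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iter=100):
    """Plain conditional-gradient (Frank-Wolfe) iterations over the probability simplex."""
    x = x0.copy()
    for t in range(n_iter):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0              # linear minimization oracle on the simplex: a vertex
        gamma = 2.0 / (t + 2.0)            # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Minimize ||x - c||^2 over the simplex for a target c already in the simplex.
c = np.array([0.1, 0.6, 0.3])
print(frank_wolfe_simplex(lambda x: 2 * (x - c), np.ones(3) / 3))
```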
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - A Block Coordinate Descent-based Projected Gradient Algorithm for
Orthogonal Non-negative Matrix Factorization [0.0]
This article utilizes the projected gradient (PG) method for a non-negative matrix factorization (NMF) problem.
We penalise the orthonormality constraints and apply the PG method via a block coordinate descent approach.
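A simplified version of this idea, with the orthonormality constraint handled through a quadratic penalty and non-negativity through projection (clipping), can be sketched as follows; the step size, penalty weight, and full-gradient (rather than block coordinate) update are our simplifications.

```python
import numpy as np

def onmf_penalized_pg(X, rank, lam=10.0, step=1e-3, n_iter=500, seed=0):
    """Projected gradient on ||X - WH||_F^2 + lam * ||HH^T - I||_F^2, nonnegativity by clipping."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)); H = rng.random((rank, n))
    I = np.eye(rank)
    for _ in range(n_iter):
        R = W @ H - X
        grad_W = 2 * R @ H.T
        grad_H = 2 * W.T @ R + 4 * lam * (H @ H.T - I) @ H   # gradient of the orthonormality penalty
        W = np.maximum(W - step * grad_W, 0.0)               # project onto the nonnegative orthant
        H = np.maximum(H - step * grad_H, 0.0)
    return W, H
```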
arXiv Detail & Related papers (2020-03-23T13:24:43Z) - Convergence to Second-Order Stationarity for Non-negative Matrix
Factorization: Provably and Concurrently [18.89597524771988]
Non-negative matrix factorization (NMF) is a fundamental non-convex optimization problem with numerous applications in Machine Learning.
This paper defines a multiplicative weight update type dynamics (Seung algorithm) that runs concurrently and provably avoids saddle points.
An important advantage is the use of concurrent implementations in parallel computing environments.
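For background, the classical multiplicative update rule for the Frobenius-norm NMF objective, on which such dynamics are based, reads as follows; the concurrent, saddle-point-avoiding variant analyzed in the paper is not reproduced here.

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=300, eps=1e-10, seed=0):
    """Classical multiplicative updates for min ||X - WH||_F^2 over nonnegative W, H."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # elementwise ratio keeps H nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # same for W
    return W, H
```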
arXiv Detail & Related papers (2020-02-26T06:40:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.