Matrix-wise $\ell_0$-constrained Sparse Nonnegative Least Squares
- URL: http://arxiv.org/abs/2011.11066v4
- Date: Wed, 22 Jun 2022 19:54:54 GMT
- Title: Matrix-wise $\ell_0$-constrained Sparse Nonnegative Least Squares
- Authors: Nicolas Nadisic, Jeremy E Cohen, Arnaud Vandaele, Nicolas Gillis
- Abstract summary: Nonnegative least squares problems with multiple right-hand sides (MNNLS) arise in models that rely on additive linear combinations.
We introduce a novel formulation for sparse MNNLS, with a matrix-wise sparsity constraint.
We show that our proposed two-step approach provides more accurate results than state-of-the-art sparse coding heuristics applied both column-wise and globally.
- Score: 22.679160149512377
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nonnegative least squares problems with multiple right-hand sides (MNNLS)
arise in models that rely on additive linear combinations. In particular, they
are at the core of most nonnegative matrix factorization algorithms and have
many applications. The nonnegativity constraint is known to naturally favor
sparsity, that is, solutions with few non-zero entries. However, it is often
useful to further enhance this sparsity, as it improves the interpretability of
the results and helps reduce noise, which leads to the sparse MNNLS problem.
In this paper, as opposed to most previous works that enforce sparsity column-
or row-wise, we first introduce a novel formulation for sparse MNNLS, with a
matrix-wise sparsity constraint. Then, we present a two-step algorithm to
tackle this problem. The first step divides sparse MNNLS into subproblems, one
per column of the original problem. It then uses different algorithms to
produce, either exactly or approximately, a Pareto front for each subproblem,
that is, to produce a set of solutions representing different tradeoffs between
reconstruction error and sparsity. The second step selects solutions among
these Pareto fronts in order to build a sparsity-constrained matrix that
minimizes the reconstruction error. We perform experiments on facial and
hyperspectral images, and we show that our proposed two-step approach provides
more accurate results than state-of-the-art sparse coding heuristics applied
both column-wise and globally.
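To make the two-step approach concrete, below is a minimal Python sketch, under stated assumptions rather than the authors' implementation: step 1 builds each column's Pareto front by brute-force support enumeration with `scipy.optimize.nnls` (tractable only when A has few columns r; the paper uses dedicated exact and approximate algorithms), and step 2 selects one point per front under a global budget of q nonzeros with a multiple-choice-knapsack dynamic program, which is one standard way to realize the selection step. All helper names are hypothetical.

```python
import itertools

import numpy as np
from scipy.optimize import nnls


def pareto_fronts(A, B):
    """Step 1 (illustrative): for each column b of B, compute one solution
    per sparsity level k = 0..r by brute-force support enumeration.
    Only tractable for small r; the paper uses dedicated exact and
    approximate algorithms instead."""
    r = A.shape[1]
    fronts = []
    for b in B.T:
        front = []
        for k in range(r + 1):
            e_best, x_best = np.linalg.norm(b), np.zeros(r)  # k = 0 solution
            for S in itertools.combinations(range(r), k):
                cols = list(S)
                if not cols:
                    continue
                x_S, e = nnls(A[:, cols], b)  # NNLS restricted to support S
                if e < e_best:
                    x_best = np.zeros(r)
                    x_best[cols] = x_S
                    e_best = e
            front.append((e_best, x_best))
        fronts.append(front)
    return fronts


def select_budget(fronts, q):
    """Step 2 (illustrative): pick one sparsity level per column so that
    the total number of nonzeros is at most q and the total squared error
    is minimized -- a multiple-choice knapsack solved by dynamic
    programming over the budget."""
    r = len(fronts[0]) - 1
    INF = float("inf")
    dp = [0.0] + [INF] * q  # dp[t]: best total squared error with t nonzeros
    choices = []
    for front in fronts:
        new, arg = [INF] * (q + 1), [0] * (q + 1)
        for t in range(q + 1):
            for k in range(min(r, t) + 1):
                cand = dp[t - k] + front[k][0] ** 2
                if cand < new[t]:
                    new[t], arg[t] = cand, k
        choices.append(arg)
        dp = new
    t = min(range(q + 1), key=lambda s: dp[s])  # spend at most q nonzeros
    ks = []
    for arg in reversed(choices):               # backtrack chosen levels
        ks.append(arg[t])
        t -= arg[t]
    return ks[::-1]
```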
Related papers
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- A Fast Scale-Invariant Algorithm for Non-negative Least Squares with Non-negative Data [18.81091632124107]
We consider the setting where the data itself is nonnegative as well, and we show that this nonnegativity makes the problem easier.
In particular, while the oracle complexity of unconstrained least squares problems necessarily scales with one of the data matrix constants, we show that nonnegative least squares problems with nonnegative data are solvable to multiplicative error.
arXiv Detail & Related papers (2022-03-08T02:02:32Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a sparse matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
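As a rough, hypothetical illustration of this family of methods (not the paper's algorithm), one can interleave a power step toward the dominant eigenspace with entrywise soft-thresholding and retract back to the Stiefel manifold via QR; the threshold `lam` and iteration count below are arbitrary assumptions.

```python
import numpy as np


def sparse_orthogonal_iteration(M, d, lam=0.1, iters=200, seed=0):
    """Illustrative only: alternate a power step toward the dominant
    d-dimensional eigenspace of a symmetric matrix M with entrywise
    soft-thresholding to promote sparsity, then retract to the Stiefel
    manifold via reduced QR. Not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((M.shape[0], d)))
    for _ in range(iters):
        V = M @ U                                           # power step
        V = np.sign(V) * np.maximum(np.abs(V) - lam, 0.0)   # promote sparsity
        U, _ = np.linalg.qr(V)   # retraction; assumes V keeps full rank
    return U
```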
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Nonlinear matrix recovery using optimization on the Grassmann manifold [18.655422834567577]
We investigate the problem of recovering a partially observed high-rank clustering matrix whose columns obey a nonlinear structure such as a union of subspaces.
We show that the alternating minimization iterates converge to a unique limit point using the Kurdyka-Łojasiewicz property.
arXiv Detail & Related papers (2021-09-13T16:13:13Z)
- Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time [46.31710224483631]
We show how to bypass the main open question of Nelson and Nguyen (FOCS, 2013) regarding the logarithmic factors in the sketching dimension for existing constant factor approximation oblivious subspace embeddings.
A key technique we use is an explicit mapping of Indyk based on uncertainty principles and extractors.
For the fundamental problems of rank computation and finding a linearly independent subset of columns, our algorithms improve upon those of Cheung, Kwok, and Lau (JACM, 2013) and are optimal to within a constant factor and a $\log\log(n)$ factor, respectively.
arXiv Detail & Related papers (2021-07-16T19:34:10Z)
- Hashing embeddings of optimal dimension, with applications to linear least squares [1.2891210250935143]
We present subspace embedding properties for $s$-hashing sketching matrices, with $s \geq 1$, that are optimal in the projection dimension $m$ of the sketch.
We apply these results to the special case of Linear Least Squares (LLS), and develop Ski-LLS, a generic software package for these problems.
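For intuition, here is a minimal sketch of the standard $s$-hashing construction (each column holds $s$ nonzeros equal to $\pm 1/\sqrt{s}$; $s = 1$ recovers CountSketch) applied in sketch-and-solve fashion to least squares; it is illustrative only, unrelated to the Ski-LLS package, and all dimensions are chosen arbitrarily.

```python
import numpy as np


def s_hashing_sketch(m, n, s=2, seed=0):
    """Build an s-hashing sketching matrix S (m x n): each column has
    exactly s nonzeros in random rows, with values +-1/sqrt(s)."""
    rng = np.random.default_rng(seed)
    S = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S


# Sketch-and-solve for least squares: min_x ||Ax - b||_2.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((10_000, 50)), rng.standard_normal(10_000)
S = s_hashing_sketch(500, 10_000, s=3)
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
```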
arXiv Detail & Related papers (2021-05-25T10:35:13Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformations, such as the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the $\ell_1$ norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
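As a minimal sketch of the convex formulation above, assuming generic alternating proximal steps (singular value thresholding for the nuclear norm, entrywise soft-thresholding for the $\ell_1$ norm) rather than the paper's manifold proximal gradient continuation method; the step size and penalty weights are arbitrary assumptions.

```python
import numpy as np


def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt


def soft(X, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)


def rmc_proximal(M, mask, lam=0.1, mu=1.0, iters=100):
    """Illustrative sketch of the convex RMC formulation
        min_{L,S} ||L||_* + lam * ||S||_1  s.t.  P_Omega(L + S) = P_Omega(M),
    handled by alternating proximal steps on the observed entries
    (not the paper's manifold method, which avoids full SVDs)."""
    L, S = np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        R = mask * (M - L - S)          # residual on observed entries
        L = svt(L + mu * R, mu)         # nuclear-norm prox step
        R = mask * (M - L - S)
        S = soft(S + mu * R, lam * mu)  # l1-norm prox step
    return L, S
```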
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
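Below is a minimal sketch-and-solve sketch for $L_2$-regularized least squares with a Gaussian embedding, assuming a fixed sketch size m rather than the paper's effective-dimension-adaptive choice; an SRHT variant would replace the dense Gaussian matrix S.

```python
import numpy as np


def sketched_ridge(A, b, nu, m, seed=0):
    """Sketch-and-solve for min_x ||Ax - b||^2 + nu * ||x||^2 using a
    Gaussian embedding of fixed sketch size m (assumption; the paper
    adapts m to the effective dimension)."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)  # Gaussian embedding
    SA, Sb = S @ A, S @ b
    d = A.shape[1]
    # Regularized normal equations on the sketched problem.
    return np.linalg.solve(SA.T @ SA + nu * np.eye(d), SA.T @ Sb)
```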
arXiv Detail & Related papers (2020-06-10T15:00:09Z)