Adversarially-Trained Nonnegative Matrix Factorization
- URL: http://arxiv.org/abs/2104.04757v1
- Date: Sat, 10 Apr 2021 13:13:17 GMT
- Title: Adversarially-Trained Nonnegative Matrix Factorization
- Authors: Ting Cai, Vincent Y. F. Tan, Cédric Févotte
- Abstract summary: We consider an adversarially-trained version of the nonnegative matrix factorization.
In our formulation, an attacker adds an arbitrary matrix of bounded norm to the given data matrix.
We design efficient algorithms inspired by adversarial training to optimize for dictionary and coefficient matrices.
- Score: 77.34726150561087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider an adversarially-trained version of the nonnegative matrix
factorization, a popular latent dimensionality reduction technique. In our
formulation, an attacker adds an arbitrary matrix of bounded norm to the given
data matrix. We design efficient algorithms inspired by adversarial training to
optimize for dictionary and coefficient matrices with enhanced generalization
abilities. Extensive simulations on synthetic and benchmark datasets
demonstrate the superior predictive performance on matrix completion tasks of
our proposed method compared to state-of-the-art competitors, including other
variants of adversarial nonnegative matrix factorization.
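The min-max formulation in the abstract can be sketched in a few lines of NumPy: for a squared Frobenius loss, the worst-case additive perturbation of norm at most ε is the scaled residual ε(V − WH)/‖V − WH‖_F, and the outer minimization can reuse standard Lee-Seung multiplicative updates on the perturbed matrix. This is only an illustrative sketch under those assumptions, not the authors' algorithm; `eps` and the nonnegativity clamp on the perturbed data are choices made here for the example.

```python
import numpy as np

def adv_nmf(V, rank, eps=0.1, iters=300, seed=0):
    """Sketch of adversarially-trained NMF: each step perturbs the data by
    the worst-case bounded-norm matrix, then applies a multiplicative update.
    Illustrative only -- not the algorithm from the paper."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        R = V - W @ H                                   # current residual
        # closed-form worst case for a squared loss: push along the residual
        V_adv = V + (eps / max(np.linalg.norm(R), 1e-12)) * R
        V_adv = np.maximum(V_adv, 0.0)                  # keep data nonnegative
        # Lee-Seung multiplicative updates on the perturbed matrix
        H *= (W.T @ V_adv) / (W.T @ W @ H + 1e-12)
        W *= (V_adv @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```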
Related papers
- Guarantees of a Preconditioned Subgradient Algorithm for Overparameterized Asymmetric Low-rank Matrix Recovery [8.722715843502321]
We focus on a matrix factorization-based approach for robust low-rank and asymmetric matrix recovery from corrupted measurements.
We propose a subgradient algorithm that inherits the merits of preconditioned algorithms, whose rate of convergence does not depend on the condition number of the sought matrix.
arXiv Detail & Related papers (2024-10-22T08:58:44Z)
- Continuous Semi-Supervised Nonnegative Matrix Factorization [8.303018940526417]
Nonnegative matrix factorization can be used to automatically detect topics within a corpus in an unsupervised fashion.
We show this factorization can be combined with regression on a continuous response variable.
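As a rough illustration of pairing NMF with regression (the paper fits both jointly; this two-stage pipeline is only a hypothetical stand-in), one can factorize a term-document matrix and then regress the continuous response on the per-document topic weights:

```python
import numpy as np

def nmf(V, rank, iters=300, seed=0):
    """Plain multiplicative-update NMF, V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# toy corpus: 50 terms x 40 documents, plus a continuous response per document
rng = np.random.default_rng(1)
V = rng.random((50, 40))
y = rng.random(40)

W, H = nmf(V, rank=5)
# two-stage stand-in: regress the response on each document's topic weights
beta, *_ = np.linalg.lstsq(H.T, y, rcond=None)
y_hat = H.T @ beta
```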
arXiv Detail & Related papers (2022-12-19T21:07:27Z)
- A Generalized Latent Factor Model Approach to Mixed-data Matrix Completion with Entrywise Consistency [3.299672391663527]
Matrix completion is a class of machine learning methods that concerns the prediction of missing entries in a partially observed matrix.
We formulate it as a low-rank matrix estimation problem under a general family of non-linear factor models.
We propose entrywise consistent estimators for estimating the low-rank matrix.
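The basic problem class (predicting missing entries via a low-rank fit) can be sketched with gradient descent on a factored objective; the paper's generalized latent factor model handles non-linear links and mixed data types, which this linear sketch does not attempt:

```python
import numpy as np

def complete(M, mask, rank, lr=0.01, iters=3000, seed=0):
    """Fill in a partially observed matrix by fitting U @ V.T to the
    observed entries only (mask is 1 where M is observed, 0 elsewhere)."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((M.shape[0], rank))
    V = 0.1 * rng.standard_normal((M.shape[1], rank))
    for _ in range(iters):
        E = mask * (U @ V.T - M)   # error restricted to observed entries
        U -= lr * E @ V
        V -= lr * E.T @ U
    return U @ V.T                 # predictions for all entries
```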
arXiv Detail & Related papers (2022-11-17T00:24:47Z)
- A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal Nonnegative Matrix Factorization with $\ell_0$-Norm Sparsity Constraint [0.0]
In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves reconstruction errors comparable to or better than those reported in the literature.
arXiv Detail & Related papers (2022-10-06T04:30:59Z)
- Variance Reduction for Matrix Computations with Applications to Gaussian Processes [0.0]
We focus on variance reduction for matrix computations via matrix factorization.
We show how computing the square-root factorization of the matrix can, in some important cases, achieve arbitrarily better performance.
arXiv Detail & Related papers (2021-06-28T10:41:22Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's characteristic sensitivity to initialization, without relying on any additional information.
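For reference, plain SNMF approximates a nonnegative similarity matrix A by H Hᵀ with H ≥ 0, and cluster labels are read off the largest entry in each row of H. The damped multiplicative update below is a common baseline from the SymNMF literature (the 0.5 damping is a stabilizing choice); the self-supervision loop of S$^3$NMF is not reproduced here.

```python
import numpy as np

def symnmf(A, rank, iters=800, seed=0):
    """Baseline symmetric NMF, A ~= H @ H.T with H >= 0, via a damped
    multiplicative update. Not the S^3NMF method itself."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], rank))
    for _ in range(iters):
        # 0.5 damping keeps the multiplicative step from oscillating
        H *= 0.5 + 0.5 * (A @ H) / (H @ (H.T @ H) + 1e-12)
    return H
```

Cluster assignments would then be `H.argmax(axis=1)`.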
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Multi-View Spectral Clustering with High-Order Optimal Neighborhood Laplacian Matrix [57.11971786407279]
Multi-view spectral clustering can effectively reveal the intrinsic cluster structure among data.
This paper proposes a multi-view spectral clustering algorithm that learns a high-order optimal neighborhood Laplacian matrix.
Our proposed algorithm generates the optimal Laplacian matrix by searching the neighborhood of the linear combination of both the first-order and high-order base Laplacian matrices.
arXiv Detail & Related papers (2020-08-31T12:28:40Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, called the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and thus enables fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
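The general trick of avoiding singular values can be seen in the classical variational form of the (convex) nuclear norm, ‖X‖_* = min over X = U Vᵀ of ½(‖U‖_F² + ‖V‖_F²), which lets a solver regularize the factors directly. The paper's nonconvex regularizer is a different surrogate; this snippet only verifies the textbook identity at the balanced factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 6))

# Balanced factors: scale the singular vectors by sqrt(sigma_i) on each side.
P, s, Qt = np.linalg.svd(X, full_matrices=False)
U = P * np.sqrt(s)
V = Qt.T * np.sqrt(s)

nuclear = s.sum()                                           # ||X||_* via SVD
factored = 0.5 * (np.linalg.norm(U) ** 2 + np.linalg.norm(V) ** 2)
```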
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Asymptotic Analysis of an Ensemble of Randomly Projected Linear Discriminants [94.46276668068327]
In [1], an ensemble of randomly projected linear discriminants is used to classify datasets.
We develop a consistent estimator of the misclassification probability as an alternative to the computationally-costly cross-validation estimator.
We also demonstrate the use of our estimator for tuning the projection dimension on both real and synthetic data.
arXiv Detail & Related papers (2020-04-17T12:47:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.