Graph Regularized NMF with L20-norm for Unsupervised Feature Learning
- URL: http://arxiv.org/abs/2403.10910v1
- Date: Sat, 16 Mar 2024 12:10:01 GMT
- Title: Graph Regularized NMF with L20-norm for Unsupervised Feature Learning
- Authors: Zhen Wang, Wenwen Min
- Abstract summary: Graph Regularized Non-negative Matrix Factorization (GNMF) is an extension of NMF that incorporates graph regularization constraints.
We propose an unsupervised feature learning framework based on GNMF_$\ell_{2,0}$ and devise an algorithm based on PALM (proximal alternating linearized minimization) and its accelerated version.
- Score: 6.894518335015327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonnegative Matrix Factorization (NMF) is a widely applied technique in the fields of machine learning and data mining. Graph Regularized Non-negative Matrix Factorization (GNMF) is an extension of NMF that incorporates graph regularization constraints. GNMF has demonstrated exceptional performance in clustering and dimensionality reduction, effectively discovering inherent low-dimensional structures embedded within high-dimensional spaces. However, the sensitivity of GNMF to noise limits its stability and robustness in practical applications. In order to enhance feature sparsity and mitigate the impact of noise while mining row sparsity patterns in the data for effective feature selection, we introduce the $\ell_{2,0}$-norm constraint as the sparsity constraint for GNMF. We propose an unsupervised feature learning framework based on GNMF_$\ell_{2,0}$ and devise an algorithm based on PALM and its accelerated version to address this problem. Additionally, we establish the convergence of the proposed algorithms and validate the efficacy and superiority of our approach through experiments conducted on both simulated and real image data.
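The abstract names the building blocks but not the update rules. As a minimal sketch, assuming an objective of the form $\min_{W,H \ge 0} \|X - WH\|_F^2 + \lambda\,\mathrm{Tr}(H L H^\top)$ with the $\ell_{2,0}$ constraint placed on the rows of $W$ (my reading of the row-sparsity/feature-selection setup, not necessarily the paper's exact formulation), a PALM iteration alternates projected gradient steps on the two factors:
```python
import numpy as np

def palm_gnmf_l20(X, S, rank, k, lam=1.0, n_iter=200, seed=0):
    """Hedged PALM sketch: X is (features p, samples n), S is an (n, n) graph
    affinity over samples, k is the number of nonzero rows allowed in W."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    W = rng.random((p, rank))
    H = rng.random((rank, n))
    L = np.diag(S.sum(axis=1)) - S                   # graph Laplacian L = D - S

    for _ in range(n_iter):
        # Gradient step on W; 2*||H H^T||_2 bounds the Lipschitz constant
        step_w = 1.0 / (2 * np.linalg.norm(H @ H.T, 2) + 1e-12)
        W = W - step_w * (2 * (W @ H - X) @ H.T)
        # Exact projection onto {W >= 0, ||W||_{2,0} <= k}:
        # clip to nonnegative, then keep the k rows with largest l2 norm
        W = np.maximum(W, 0.0)
        row_norms = np.linalg.norm(W, axis=1)
        W[np.argsort(row_norms)[: max(p - k, 0)]] = 0.0

        # Gradient step on H; the graph term contributes 2*lam*H@L
        step_h = 1.0 / (2 * np.linalg.norm(W.T @ W, 2)
                        + 2 * lam * np.linalg.norm(L, 2) + 1e-12)
        H = H - step_h * (2 * W.T @ (W @ H - X) + 2 * lam * H @ L)
        H = np.maximum(H, 0.0)                       # nonnegativity
    return W, H
```
The accelerated version mentioned in the abstract would typically add inertial (extrapolation) steps on top of this loop, in the style of iPALM.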
Related papers
- A Fresh Look at Generalized Category Discovery through Non-negative Matrix Factorization [83.12938977698988]
Generalized Category Discovery (GCD) aims to classify both base and novel images using labeled base data.
Current approaches inadequately address the intrinsic optimization of the co-occurrence matrix $\bar{A}$ based on cosine similarity.
We propose a Non-Negative Generalized Category Discovery (NN-GCD) framework to address these deficiencies.
arXiv Detail & Related papers (2024-10-29T07:24:11Z)
- Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism; a generic sketch follows this entry.
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
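For intuition, a generic stochastic augmented Lagrangian loop for minimizing loss(theta) subject to g(theta) <= 0 looks like the sketch below; the PHR-style multiplier update, step sizes, and toy constraint are illustrative assumptions, not the paper's exact method (in DNN training, grad_loss would be a minibatch gradient):
```python
def sal_minimize(grad_loss, g, grad_g, theta, steps=2000, lr=1e-2,
                 rho=10.0, dual_every=10):
    """Minimize loss(theta) s.t. g(theta) <= 0 via an augmented Lagrangian."""
    mu = 0.0                                        # Lagrange multiplier, kept >= 0
    for t in range(steps):
        # PHR augmented Lagrangian gradient: grad_loss + max(0, mu + rho*g)*grad_g
        penalty = max(0.0, mu + rho * g(theta))
        theta = theta - lr * (grad_loss(theta) + penalty * grad_g(theta))
        if t % dual_every == 0:                     # periodic dual ascent step
            mu = max(0.0, mu + rho * g(theta))
    return theta

# Toy check: minimize (x - 3)^2 subject to x <= 1; constrained optimum is x = 1.
x = sal_minimize(lambda t: 2 * (t - 3), lambda t: t - 1, lambda t: 1.0, theta=0.0)
print(round(x, 3))  # ~1.0
```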
- Adaptive Weighted Nonnegative Matrix Factorization for Robust Feature Representation [9.844796520630522]
Nonnegative matrix factorization (NMF) has been widely used for dimensionality reduction in machine learning.
Traditional NMF does not properly handle outliers, making it sensitive to noise.
This paper proposes an adaptive weighted NMF, which introduces per-sample weights to reflect the differing importance of each data point; a hedged sketch follows this entry.
arXiv Detail & Related papers (2022-06-07T05:27:08Z)
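A hedged sketch of the idea: weight each data point (column) in the NMF objective and adapt the weights from the reconstruction residuals so outliers are downweighted. The residual-based reweighting rule below is a common robust heuristic, not necessarily the paper's scheme:
```python
import numpy as np

def adaptive_weighted_nmf(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Weighted NMF with one adaptive weight per data point (column of X)."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    U = rng.random((p, rank))
    V = rng.random((rank, n))
    w = np.ones(n)                                   # per-sample weights
    for _ in range(n_iter):
        Xw = X * w                                   # right-multiply by diag(w)
        Vw = V * w
        U *= (Xw @ V.T) / (U @ (V @ Vw.T) + eps)     # weighted multiplicative updates
        V *= (U.T @ Xw) / ((U.T @ U) @ Vw + eps)
        # Robust reweighting: downweight samples with large reconstruction error
        resid = np.linalg.norm(X - U @ V, axis=0)
        w = 1.0 / (resid + eps)
        w *= n / w.sum()                             # keep the mean weight at 1
    return U, V, w
```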
- SymNMF-Net for The Symmetric NMF Problem [62.44067422984995]
We propose a neural network called SymNMF-Net for the Symmetric NMF problem.
We show that the inference of each block corresponds to a single iteration of the optimization.
Empirical results on real-world datasets demonstrate the superiority of our SymNMF-Net.
arXiv Detail & Related papers (2022-05-26T08:17:39Z)
- On Feature Learning in Neural Networks with Global Convergence Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$-(pseudo) norm, is proposed to enhance the robustness of the method; a hedged sketch of such a penalty follows this entry.
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
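The summary names the $\ell_{2,\log}$-(pseudo) norm without defining it; below is one plausible log-based, column-wise form, with a toy comparison of why a concave penalty favors column sparsity (the paper's exact definition may differ):
```python
import numpy as np

def l2_log(W, eps=1e-12):
    """Sum of log(1 + ||column||_2): a concave, column-wise sparse penalty."""
    return float(np.sum(np.log1p(np.linalg.norm(W, axis=0) + eps)))

# Concavity in the column norms means spreading energy over many columns costs
# more than concentrating it in a few, so minimizing this drives columns to zero.
W_concentrated = np.array([[2.0, 0.0], [0.0, 0.0]])   # energy in one column
W_spread = np.array([[1.0, 1.0], [0.0, 0.0]])         # same l_{2,1} norm, two columns
print(l2_log(W_concentrated), l2_log(W_spread))       # log(3) ~ 1.10 < 2*log(2) ~ 1.39
```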
- Robust Matrix Factorization with Grouping Effect [28.35582493230616]
We propose a novel method called Robust Matrix Factorization with Grouping Effect (GRMF).
The proposed GRMF can learn grouping structure and sparsity in MF without prior knowledge.
Experiments were conducted on real-world data sets with outliers and noise contamination.
arXiv Detail & Related papers (2021-06-25T15:03:52Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of SNMF's characteristic sensitivity to initialization, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Hyperspectral Unmixing via Nonnegative Matrix Factorization with Handcrafted and Learnt Priors [14.032039261229853]
We propose an NMF-based unmixing framework which jointly uses a handcrafted regularizer and a regularizer learnt from data.
We plug in learnt priors of abundances, where the associated subproblem can be addressed using various image denoisers.
arXiv Detail & Related papers (2020-10-09T14:40:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.