A consistent and flexible framework for deep matrix factorizations
- URL: http://arxiv.org/abs/2206.10693v1
- Date: Tue, 21 Jun 2022 19:20:35 GMT
- Title: A consistent and flexible framework for deep matrix factorizations
- Authors: Pierre De Handschutter, Nicolas Gillis
- Abstract summary: We introduce two meaningful loss functions for deep MF and present a generic framework to solve the corresponding optimization problems.
The models are successfully applied on both synthetic and real data, namely for hyperspectral unmixing and extraction of facial features.
- Score: 17.49766938060264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep matrix factorizations (deep MFs) are recent unsupervised data mining
techniques inspired by constrained low-rank approximations. They aim to extract
complex hierarchies of features within high-dimensional datasets. Most of the
loss functions proposed in the literature to evaluate the quality of deep MF
models and the underlying optimization frameworks are not consistent because
different losses are used at different layers. In this paper, we introduce two
meaningful loss functions for deep MF and present a generic framework to solve
the corresponding optimization problems. We illustrate the effectiveness of
this approach through the integration of various constraints and
regularizations, such as sparsity, nonnegativity and minimum-volume. The models
are successfully applied on both synthetic and real data, namely for
hyperspectral unmixing and extraction of facial features.
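The layered structure described in the abstract can be illustrated with a naive greedy scheme: factorize X ≈ W1 H1, then further factorize the basis W1 ≈ W2 H2, and so on with decreasing ranks. This is only a hedged background sketch using standard Lee–Seung multiplicative updates with a per-layer Frobenius loss (the very layer-wise inconsistency the paper's unified framework addresses); the function names and update rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def nmf(X, r, iters=300, eps=1e-9, seed=0):
    """Rank-r NMF of a nonnegative matrix X via Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update activations, W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update basis, H fixed
    return W, H

def greedy_deep_mf(X, ranks):
    """Greedy deep MF: X ~ W1 H1, then W1 ~ W2 H2, ... with decreasing ranks.
    Each layer minimizes its own Frobenius loss independently."""
    Ws, Hs = [], []
    target = X
    for r in ranks:
        W, H = nmf(target, r)
        Ws.append(W)
        Hs.append(H)
        target = W  # factorize the basis of the previous layer
    return Ws, Hs

def reconstruct(Ws, Hs):
    """Deepest-layer reconstruction: X ~ W_L @ H_L @ ... @ H_1."""
    M = Ws[-1]
    for H in reversed(Hs):
        M = M @ H
    return M
```

Because each layer optimizes its own loss on the previous layer's output, errors compound and the layers are not trained against a single consistent objective, which is the motivation for the loss functions introduced in the paper.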
Related papers
- Layer Separation Deep Learning Model with Auxiliary Variables for Partial Differential Equations [0.6215404942415159]
We propose a new framework, the layer separation (LySep) model, to improve deep learning-based methods for partial differential equations. We provide theoretical analyses demonstrating the advantages of LySep in minimizing loss and reducing solution error.
arXiv Detail & Related papers (2025-07-17T03:43:18Z)
- CLIP Meets Diffusion: A Synergistic Approach to Anomaly Detection [54.85000884785013]
Anomaly detection is a complex problem due to the ambiguity in defining anomalies, the diversity of anomaly types, and the scarcity of training data. We propose CLIPfusion, a method that leverages both discriminative and generative foundation models. We believe that our method underscores the effectiveness of multi-modal and multi-model fusion in tackling the multifaceted challenges of anomaly detection.
arXiv Detail & Related papers (2025-06-13T13:30:15Z)
- NDCG-Consistent Softmax Approximation with Accelerated Convergence [67.10365329542365]
We propose novel loss formulations that align directly with ranking metrics. We integrate the proposed RG losses with the highly efficient Alternating Least Squares (ALS) optimization method. Empirical evaluations on real-world datasets demonstrate that our approach achieves comparable or superior ranking performance.
arXiv Detail & Related papers (2025-06-11T06:59:17Z)
- You Only Train Once [11.97836331714694]
You Only Train Once (YOTO) limits training to a single run for loss selection and weighting. We leverage the differentiability of the composite loss formulation, which is widely used to optimize multiple empirical losses simultaneously. We show that YOTO consistently outperforms the best grid-search model on unseen test data.
arXiv Detail & Related papers (2025-06-04T18:04:58Z)
- Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "line theoremarity" establishing a direct relationship between the layer-wise $ell$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
arXiv Detail & Related papers (2024-11-26T15:35:44Z)
- Towards a Fairer Non-negative Matrix Factorization [6.069820038869034]
We investigate how Non-negative Matrix Factorization (NMF) can introduce bias in the representation of data groups.
We present an approach, called Fairer-NMF, that seeks to minimize the maximum reconstruction loss for different groups.
arXiv Detail & Related papers (2024-11-14T23:34:38Z)
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z)
- Tailed Low-Rank Matrix Factorization for Similarity Matrix Completion [14.542166904874147]
Similarity matrix completion serves as a fundamental tool at the core of numerous machine learning tasks. To address this issue, Similarity Matrix Completion (SMC) methods have been proposed, but they suffer from high computational complexity. We present two novel, scalable, and effective algorithms, which exploit the PSD property to guide the estimation process and incorporate a nonconvex low-rank regularizer to ensure the low-rank solution.
arXiv Detail & Related papers (2024-09-29T04:27:23Z)
- Exponentially Convergent Algorithms for Supervised Matrix Factorization [2.1485350418225244]
Supervised matrix factorization (SMF) is a machine learning method that combines feature extraction and classification tasks. Our paper provides a novel framework that 'lifts' SMF as a low-rank matrix estimation problem in a combined factor space.
arXiv Detail & Related papers (2023-11-18T23:24:02Z)
- ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models [70.45441031021291]
Large Vision-Language Models (LVLMs) can understand the world comprehensively by integrating rich information from different modalities.
LVLMs are often problematic due to their massive computational/energy costs and carbon consumption.
We propose Efficient Coarse-to-Fine LayerWise Pruning (ECoFLaP), a two-stage coarse-to-fine weight pruning approach for LVLMs.
arXiv Detail & Related papers (2023-10-04T17:34:00Z)
- A Novel Maximum-Entropy-Driven Technique for Low-Rank Orthogonal Nonnegative Matrix Factorization with $\ell_0$-Norm Sparsity Constraint [0.0]
In data-driven control and machine learning, a common requirement involves breaking down large matrices into smaller, low-rank factors.
This paper introduces an innovative solution to the orthogonal nonnegative matrix factorization (ONMF) problem.
The proposed method achieves comparable or improved reconstruction errors in line with the literature.
arXiv Detail & Related papers (2022-10-06T04:30:59Z)
- InverseForm: A Loss Function for Structured Boundary-Aware Segmentation [80.39674800972182]
We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network.
This plug-in loss term complements the cross-entropy loss in capturing boundary transformations.
We analyze the quantitative and qualitative effects of our loss function on three indoor and outdoor segmentation benchmarks.
arXiv Detail & Related papers (2021-04-06T18:52:45Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose the Feature weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Entropy Minimizing Matrix Factorization [102.26446204624885]
Nonnegative Matrix Factorization (NMF) is a widely-used data analysis technique, and has yielded impressive results in many real-world tasks.
In this study, an Entropy Minimizing Matrix Factorization framework (EMMF) is developed to tackle the above problem.
Considering that the outliers are usually much less than the normal samples, a new entropy loss function is established for matrix factorization.
arXiv Detail & Related papers (2021-03-24T21:08:43Z)
- High Dimensional Level Set Estimation with Bayesian Neural Network [58.684954492439424]
This paper proposes novel methods to solve the high dimensional Level Set Estimation problems using Bayesian Neural Networks.
For each problem, we derive the corresponding information-theoretic acquisition function to sample the data points.
Numerical experiments on both synthetic and real-world datasets show that our proposed method can achieve better results compared to existing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-17T23:21:53Z)
- Deep matrix factorizations [16.33338100088249]
Deep matrix factorization (deep MF) was introduced to deal with the extraction of several layers of features.
This paper presents the main models, algorithms, and applications of deep MF through a comprehensive literature review.
arXiv Detail & Related papers (2020-10-01T13:19:01Z)
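As background for the Alternating Least Squares (ALS) optimization referenced in the NDCG-consistent softmax entry above, here is a minimal ridge-regularized ALS for a plain Frobenius-loss matrix factorization. This is a generic textbook sketch under assumed conventions (X ≈ U Vᵀ, small ridge term λ), not the RG losses or the specific solver of that paper.

```python
import numpy as np

def als_factorize(X, r, iters=30, lam=1e-3, seed=0):
    """Factor X ~ U @ V.T by alternating ridge-regularized least squares.
    Each half-step solves a closed-form linear system for one factor
    while the other factor is held fixed."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    I = lam * np.eye(r)  # ridge term keeps the normal equations well-posed
    for _ in range(iters):
        U = X @ V @ np.linalg.inv(V.T @ V + I)    # fix V, solve for U
        V = X.T @ U @ np.linalg.inv(U.T @ U + I)  # fix U, solve for V
    return U, V
```

Each half-step is a convex least-squares problem, which is what makes ALS both simple and fast; the trade-off, as the entry above notes, is that a Frobenius objective does not directly align with ranking metrics such as NDCG.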
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.