Deep matrix factorizations
- URL: http://arxiv.org/abs/2010.00380v2
- Date: Sat, 3 Oct 2020 07:31:30 GMT
- Title: Deep matrix factorizations
- Authors: Pierre De Handschutter, Nicolas Gillis, Xavier Siebert
- Abstract summary: Deep matrix factorization (deep MF) was introduced to deal with the extraction of several layers of features.
This paper presents the main models, algorithms, and applications of deep MF through a comprehensive literature review.
- Score: 16.33338100088249
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained low-rank matrix approximations have been known for decades as
powerful linear dimensionality reduction techniques able to extract the relevant
information contained in large data sets. However, such low-rank approaches are
unable to mine complex, interleaved features that underlie hierarchical
semantics. Recently, deep matrix factorization (deep MF) was introduced to deal
with the extraction of several layers of features and has been shown to achieve
outstanding performance on unsupervised tasks. Deep MF was motivated by the
success of deep learning, as it is conceptually close to some neural network
paradigms. In this paper, we present the main models, algorithms, and
applications of deep MF through a comprehensive literature review. We also
discuss theoretical questions and perspectives for research.
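To make the idea concrete, here is a minimal sketch (not the authors' algorithm, and with hypothetical function names) of a greedy layer-wise deep factorization X ≈ W1 W2 H in NumPy: each layer applies a basic multiplicative-update NMF to the previous layer's coefficient matrix.

```python
import numpy as np

def nmf(X, r, n_iter=200, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: X ~= W @ H."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def deep_mf(X, ranks):
    """Greedy multilayer factorization: X ~= W1 @ W2 @ ... @ WL @ HL,
    obtained by repeatedly factorizing the coefficient matrix."""
    Ws, H = [], X
    for r in ranks:
        W, H = nmf(H, r)
        Ws.append(W)
    return Ws, H

# Toy example: a 20x30 nonnegative matrix, two layers of ranks 8 and 4.
X = np.random.default_rng(1).random((20, 30))
Ws, H = deep_mf(X, [8, 4])
approx = Ws[0] @ Ws[1] @ H
err = np.linalg.norm(X - approx) / np.linalg.norm(X)
```

This greedy scheme is only one of the approaches surveyed in the paper; others optimize all layers jointly.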
Related papers
- The Computational Advantage of Depth: Learning High-Dimensional Hierarchical Functions with Gradient Descent [28.999394988111106]
We introduce a class of target functions that incorporate a hierarchy of latent subspace dimensionalities.
Our main theorem shows that feature learning with gradient descent reduces the effective dimensionality.
These findings open the way to further quantitative studies of the crucial role of depth in learning hierarchical structures with deep networks.
arXiv Detail & Related papers (2025-02-19T18:58:28Z) - Semantics-Oriented Multitask Learning for DeepFake Detection: A Joint Embedding Approach [77.65459419417533]
We propose an automatic dataset expansion technique to support semantics-oriented DeepFake detection tasks.
We also resort to joint embedding of face images and their corresponding labels for prediction.
Our method improves the generalizability of DeepFake detection and renders some degree of model interpretation by providing human-understandable explanations.
arXiv Detail & Related papers (2024-08-29T07:11:50Z) - Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, while intelligence concerns understanding it.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z) - Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? [57.04803703952721]
Large language models (LLMs) have shown remarkable performances across a wide range of tasks.
However, the mechanisms by which these models encode tasks of varying complexities remain poorly understood.
We introduce the idea of "Concept Depth" to suggest that more complex concepts are typically acquired in deeper layers.
arXiv Detail & Related papers (2024-04-10T14:56:40Z) - Exponentially Convergent Algorithms for Supervised Matrix Factorization [2.1485350418225244]
Supervised matrix factorization (SMF) is a machine learning method that combines feature extraction and classification tasks.
Our paper provides a novel framework that 'lifts' SMF to a low-rank estimation problem in a combined factor space.
arXiv Detail & Related papers (2023-11-18T23:24:02Z) - Deep Nonnegative Matrix Factorization with Beta Divergences [14.639457874288412]
We develop new models and algorithms for deep NMF using some $\beta$-divergences.
We apply these techniques to the extraction of facial features, the identification of topics within document collections, and the identification of materials within hyperspectral images.
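As a hedged illustration of the single-layer building block behind such methods (this is the standard multiplicative-update rule for $\beta$-divergence NMF, not the paper's deep algorithm), one might write:

```python
import numpy as np

def nmf_beta(X, r, beta=1.0, n_iter=300, seed=0):
    """NMF minimizing the beta-divergence D_beta(X || WH) with
    multiplicative updates. beta=2: Frobenius, beta=1: KL,
    beta=0: Itakura-Saito."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    eps = 1e-9
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * X)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * X) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

# Toy usage: recover an exactly rank-3 nonnegative matrix with the KL loss.
rng = np.random.default_rng(2)
X = rng.random((15, 3)) @ rng.random((3, 12))
W, H = nmf_beta(X, 3, beta=1.0)
rel = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

A deep variant stacks such factorizations across layers, as in the greedy scheme surveyed by the main paper.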
arXiv Detail & Related papers (2023-09-15T08:46:53Z) - Higher-order topological kernels via quantum computation [68.8204255655161]
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data.
We propose a quantum approach to defining Betti kernels, which is based on constructing Betti curves with increasing order.
arXiv Detail & Related papers (2023-07-14T14:48:52Z) - Understanding Masked Autoencoders via Hierarchical Latent Variable Models [109.35382136147349]
Masked autoencoder (MAE) has recently achieved prominent success in a variety of vision tasks.
Despite the emergence of intriguing empirical observations on MAE, a theoretically principled understanding is still lacking.
arXiv Detail & Related papers (2023-06-08T03:00:10Z) - Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - Unitary Approximate Message Passing for Matrix Factorization [90.84906091118084]
We consider matrix factorization (MF) with certain constraints, which finds wide applications in various areas.
We develop a Bayesian approach to MF with an efficient message passing implementation, called UAMPMF.
We show that UAMPMF significantly outperforms state-of-the-art algorithms in terms of recovery accuracy, robustness and computational complexity.
arXiv Detail & Related papers (2022-07-31T12:09:32Z) - A consistent and flexible framework for deep matrix factorizations [17.49766938060264]
We introduce two meaningful loss functions for deep MF and present a generic framework to solve the corresponding optimization problems.
The models are successfully applied on both synthetic and real data, namely for hyperspectral unmixing and extraction of facial features.
arXiv Detail & Related papers (2022-06-21T19:20:35Z) - NN2Poly: A polynomial representation for deep feed-forward artificial neural networks [0.6502001911298337]
NN2Poly is a theoretical approach to obtain an explicit model of an already trained fully-connected feed-forward artificial neural network.
This approach extends a previous idea proposed in the literature, which was limited to single hidden layer networks.
arXiv Detail & Related papers (2021-12-21T17:55:22Z) - Partially Shared Semi-supervised Deep Matrix Factorization with Multi-view Data [3.198381558122369]
We present a partially shared semi-supervised deep matrix factorization model (PSDMF).
By integrating the partially shared deep decomposition structure, graph regularization and the semi-supervised regression model, PSDMF can learn a compact and efficient discriminative representation.
Experiments on five benchmark datasets demonstrate that PSDMF can achieve better performance than the state-of-the-art multi-view learning approaches.
arXiv Detail & Related papers (2020-12-02T06:59:41Z) - Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of these summaries (including all information) and is not responsible for any consequences of their use.