A Flexible Optimization Framework for Regularized Matrix-Tensor
Factorizations with Linear Couplings
- URL: http://arxiv.org/abs/2007.09605v1
- Date: Sun, 19 Jul 2020 06:49:59 GMT
- Title: A Flexible Optimization Framework for Regularized Matrix-Tensor
Factorizations with Linear Couplings
- Authors: Carla Schenker, Jeremy E. Cohen and Evrim Acar
- Abstract summary: We propose a flexible algorithmic framework for coupled matrix and tensor factorizations.
The framework facilitates the use of a variety of constraints, loss functions and couplings with linear transformations.
- Score: 5.079136838868448
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coupled matrix and tensor factorizations (CMTF) are frequently used to
jointly analyze data from multiple sources, also called data fusion. However,
different characteristics of datasets stemming from multiple sources pose many
challenges in data fusion and require the use of various regularizations,
constraints, loss functions and different types of coupling structures between
datasets. In this paper, we propose a flexible algorithmic framework for
coupled matrix and tensor factorizations which utilizes Alternating
Optimization (AO) and the Alternating Direction Method of Multipliers (ADMM).
The framework facilitates the use of a variety of constraints, loss functions
and couplings with linear transformations in a seamless way. Numerical
experiments on simulated and real datasets demonstrate that the proposed
approach is accurate and computationally efficient, with comparable or better
performance than available CMTF methods for Frobenius norm loss, while being
more flexible. Using Kullback-Leibler divergence on count data, we demonstrate
that the algorithm also yields accurate results for other loss functions.
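The coupling idea behind CMTF can be sketched with a plain alternating-least-squares baseline: a matrix and a CP tensor share one factor, and the shared factor is updated by stacking both least-squares subproblems. This is only an illustrative sketch, not the paper's AO-ADMM algorithm (which additionally handles constraints, general losses, and linear coupling transformations); all dimensions and factor names below are made up for the example.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (m x R) and V (n x R) -> (m*n x R)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

rng = np.random.default_rng(0)
I, J, K, L, R = 20, 15, 10, 8, 3

# Ground-truth factors; the matrix X and the CP tensor Y share factor A.
A_t, B_t = rng.random((I, R)), rng.random((J, R))
C_t, D_t = rng.random((K, R)), rng.random((L, R))
X = A_t @ B_t.T
Y = np.einsum('ir,kr,lr->ikl', A_t, C_t, D_t)   # I x K x L tensor

# Unfoldings of Y (C-order reshapes, consistent with khatri_rao above).
Y1 = Y.reshape(I, K * L)                         # = A (C kr D)^T
Y2 = Y.transpose(1, 0, 2).reshape(K, I * L)      # = C (A kr D)^T
Y3 = Y.transpose(2, 0, 1).reshape(L, I * K)      # = D (A kr C)^T

def lsq(M, T):
    """Return F minimizing ||T - M F^T||_F."""
    return np.linalg.lstsq(M, T, rcond=None)[0].T

# Random initial estimates.
A, B = rng.random((I, R)), rng.random((J, R))
C, D = rng.random((K, R)), rng.random((L, R))

def loss():
    return (np.linalg.norm(X - A @ B.T) ** 2
            + np.linalg.norm(Y1 - A @ khatri_rao(C, D).T) ** 2)

err0 = loss()
for _ in range(100):
    # Shared factor A: stack the matrix and tensor subproblems into one solve.
    M = np.vstack([B, khatri_rao(C, D)])         # (J + K*L) x R
    A = lsq(M, np.hstack([X, Y1]).T)
    B = lsq(A, X)                                # matrix-only factor
    C = lsq(khatri_rao(A, D), Y2.T)              # tensor-only factors
    D = lsq(khatri_rao(A, C), Y3.T)
```

Each block update exactly minimizes the combined objective with respect to that block, so the loss is monotonically nonincreasing; the AO-ADMM framework of the paper replaces these unconstrained solves with ADMM inner iterations to accommodate constraints and non-Frobenius losses.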
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- AnyLoss: Transforming Classification Metrics into Loss Functions [21.34290540936501]
Evaluation metrics are used to assess the performance of models in binary classification tasks.
Most metrics are derived from a confusion matrix in a non-differentiable form, making it difficult to generate a differentiable loss function that could directly optimize them.
We propose a general-purpose approach that transforms any confusion-matrix-based metric into a loss function, AnyLoss, that can be used directly in optimization processes.
arXiv Detail & Related papers (2024-05-23T16:14:16Z)
- Coseparable Nonnegative Tensor Factorization With T-CUR Decomposition [2.013220890731494]
Nonnegative Matrix Factorization (NMF) is an important unsupervised learning method to extract meaningful features from data.
In this work, we provide an alternating selection method to select the coseparable core.
The results demonstrate the efficiency of coseparable NTF when compared to coseparable NMF.
arXiv Detail & Related papers (2024-01-30T09:22:37Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Quadratic Matrix Factorization with Applications to Manifold Learning [1.6795461001108094]
We propose a quadratic matrix factorization (QMF) framework to learn the curved manifold on which the dataset lies.
Algorithmically, we propose an alternating minimization algorithm to optimize QMF and establish its theoretical convergence properties.
Experiments on a synthetic manifold learning dataset and two real datasets, including the MNIST handwritten dataset and a cryogenic electron microscopy dataset, demonstrate the superiority of the proposed method over its competitors.
arXiv Detail & Related papers (2023-01-30T15:09:00Z)
- PARAFAC2-based Coupled Matrix and Tensor Factorizations [1.7188280334580195]
We propose an algorithmic framework for fitting PARAFAC2-based CMTF models with the possibility of imposing various constraints on all modes and linear couplings.
Through numerical experiments, we demonstrate that the proposed algorithmic approach accurately recovers the underlying patterns using various constraints and linear couplings.
arXiv Detail & Related papers (2022-10-24T09:20:17Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Relational Reasoning via Set Transformers: Provable Efficiency and Applications to MARL [154.13105285663656]
A cooperative Multi-Agent Reinforcement Learning (MARL) framework with permutation-invariant agents has achieved tremendous empirical success in real-world applications.
Unfortunately, the theoretical understanding of this MARL problem is lacking due to the curse of many agents and the limited exploration of the relational reasoning in existing works.
We prove that the suboptimality gaps of the model-free and model-based algorithms are independent of, and logarithmic in, the number of agents, respectively, which mitigates the curse of many agents.
arXiv Detail & Related papers (2022-09-20T16:42:59Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that the labels are known for some training points, unknown for others, and uncertain for the rest due to random noise or other reasons such as a lack of resources.
In the numerical section, we apply the suggested method to artificial and real datasets using Monte Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose Feature Weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the suggested optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- Columnwise Element Selection for Computationally Efficient Nonnegative Coupled Matrix Tensor Factorization [16.466065626950424]
Nonnegative CMTF (N-CMTF) has been employed in many applications for identifying latent patterns, prediction, and recommendation.
In this paper, a computationally efficient N-CMTF factorization algorithm is presented based on the column-wise element selection, preventing frequent gradient updates.
arXiv Detail & Related papers (2020-03-07T03:34:53Z)
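As background for the nonnegative factorizations discussed above, the generic building block underlying N-CMTF can be sketched with classic multiplicative-update NMF (Frobenius loss). This is a standard Lee-Seung baseline, not the column-wise element selection scheme of the paper; all sizes and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 20, 4

# Exactly rank-r nonnegative data (illustrative sizes).
X = rng.random((m, r)) @ rng.random((r, n))

W = rng.random((m, r)) + 0.1   # nonnegative init, bounded away from zero
H = rng.random((r, n)) + 0.1
eps = 1e-9                     # guards against division by zero

err0 = np.linalg.norm(X - W @ H)
for _ in range(300):
    # Multiplicative updates: entrywise ratios keep W, H nonnegative
    # and monotonically decrease ||X - WH||_F.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
err = np.linalg.norm(X - W @ H)
```

Because the updates multiply by nonnegative ratios, no projection step is needed to maintain the nonnegativity constraints.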
This list is automatically generated from the titles and abstracts of the papers in this site.