Columnwise Element Selection for Computationally Efficient Nonnegative
Coupled Matrix Tensor Factorization
- URL: http://arxiv.org/abs/2003.03506v1
- Date: Sat, 7 Mar 2020 03:34:53 GMT
- Title: Columnwise Element Selection for Computationally Efficient Nonnegative
Coupled Matrix Tensor Factorization
- Authors: Thirunavukarasu Balasubramaniam, Richi Nayak, Chau Yuen
- Abstract summary: Nonnegative CMTF (N-CMTF) has been employed in many applications for identifying latent patterns, prediction, and recommendation.
In this paper, a computationally efficient N-CMTF factorization algorithm is presented based on the column-wise element selection, preventing frequent gradient updates.
- Score: 16.466065626950424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Coupled Matrix Tensor Factorization (CMTF) facilitates the integration and
analysis of multiple data sources and helps discover meaningful information.
Nonnegative CMTF (N-CMTF) has been employed in many applications for
identifying latent patterns, prediction, and recommendation. However, due to
the added complexity with coupling between tensor and matrix data, existing
N-CMTF algorithms exhibit poor computation efficiency. In this paper, a
computationally efficient N-CMTF factorization algorithm is presented based on
the column-wise element selection, preventing frequent gradient updates.
Theoretical and empirical analyses show that the proposed N-CMTF factorization
algorithm is not only more accurate but also more computationally efficient
than existing algorithms in approximating the tensor as well as in identifying
the underlying nature of factors.
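As an illustration of the element-selection idea, the sketch below applies it to a plain nonnegative matrix factorization X ≈ WH. This is a simplified assumption-laden sketch, not the paper's N-CMTF algorithm: the coupled tensor-matrix case, the selection criterion, and the `frac` parameter are all stand-ins.

```python
import numpy as np

def columnwise_selective_update(X, W, H, frac=0.2):
    """One pass of column-wise element selection on the factor W of a
    nonnegative factorization X ~= W @ H. Only the `frac` fraction of
    elements with the largest gradient magnitude in each column is
    updated, so most elements skip the gradient-update step entirely.
    """
    k = max(1, int(frac * W.shape[0]))
    for r in range(W.shape[1]):
        g = (W @ H - X) @ H[r, :]          # gradient of the fit w.r.t. column r of W
        h = max(H[r, :] @ H[r, :], 1e-12)  # curvature of that column's subproblem
        sel = np.argsort(-np.abs(g))[:k]   # keep only the most "important" elements
        # exact coordinate minimization for the selected elements,
        # projected back onto the nonnegative orthant
        W[sel, r] = np.maximum(W[sel, r] - g[sel] / h, 0.0)
    return W
```

Alternating such passes over W and (transposed) over H gives a selective coordinate-descent loop; the selection rule is what avoids recomputing gradients for every element.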
Related papers
- Coseparable Nonnegative Tensor Factorization With T-CUR Decomposition [2.013220890731494]
Nonnegative Matrix Factorization (NMF) is an important unsupervised learning method to extract meaningful features from data.
In this work, we provide an alternating selection method to select the coseparable core.
The results demonstrate the efficiency of coseparable NTF when compared to coseparable NMF.
arXiv Detail & Related papers (2024-01-30T09:22:37Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute these covariance matrices.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating proximal gradient method (CoNCPD-APG).
By integrating low-rank approximation with the proposed CoNCPD-APG method, the proposed algorithm can significantly decrease the computational burden without compromising decomposition quality.
arXiv Detail & Related papers (2023-02-10T08:49:36Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Non-Negative Matrix Factorization with Scale Data Structure Preservation [23.31865419578237]
The model described in this paper belongs to the family of non-negative matrix factorization methods designed for data representation and dimension reduction.
The idea is to add, to the NMF cost function, a penalty term to impose a scale relationship between the pairwise similarity matrices of the original and transformed data points.
The proposed clustering algorithm is compared to some existing NMF-based algorithms and to some manifold learning-based algorithms when applied to some real-life datasets.
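The penalty idea can be written down concretely. The sketch below is an illustrative objective only: the choice of inner-product similarities and the least-squares scale factor are assumptions, not the paper's exact formulation.

```python
import numpy as np

def similarity_penalized_objective(X, W, H, lam=0.1):
    """NMF fit plus a penalty tying the pairwise similarity matrix of
    the original points (columns of X) to that of their low-dimensional
    representations (columns of H), up to a least-squares scale factor.
    """
    fit = np.linalg.norm(X - W @ H) ** 2
    S_x = X.T @ X                      # similarities between original points
    S_h = H.T @ H                      # similarities between transformed points
    # scale factor aligning the two similarity matrices (least squares)
    a = (S_x * S_h).sum() / max((S_h ** 2).sum(), 1e-12)
    return fit + lam * np.linalg.norm(S_x - a * S_h) ** 2
```

Minimizing such an objective over nonnegative W and H trades reconstruction quality against preserving the relative geometry of the data points.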
arXiv Detail & Related papers (2022-09-22T09:32:18Z)
- Federated Learning via Inexact ADMM [46.99210047518554]
In this paper, we develop an inexact alternating direction method of multipliers (ADMM)
It is both computation- and communication-efficient, capable of combating the stragglers' effect, and convergent under mild conditions.
It has a high numerical performance compared with several state-of-the-art algorithms for federated learning.
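A minimal sketch of consensus ADMM with inexact local solves follows. The toy quadratic client losses, step sizes, and iteration counts are illustrative assumptions; the paper's algorithm and its convergence analysis are more general.

```python
import numpy as np

def fl_inexact_admm(b, rho=1.0, rounds=50, local_steps=3, lr=0.2):
    """Consensus ADMM for a toy federated problem: client i holds the
    local loss f_i(x) = 0.5 * ||x - b[i]||^2 and all clients must agree
    on a shared model z. Local subproblems are solved inexactly with a
    few gradient steps instead of exactly ("inexact" ADMM).
    """
    n, d = b.shape
    x = np.zeros((n, d))   # client-side models
    u = np.zeros((n, d))   # scaled dual variables
    z = np.zeros(d)        # server (consensus) model
    for _ in range(rounds):
        for i in range(n):                 # local work, parallelizable
            for _ in range(local_steps):   # inexact subproblem solve
                grad = (x[i] - b[i]) + rho * (x[i] - z + u[i])
                x[i] -= lr * grad
        z = (x + u).mean(axis=0)           # server aggregation
        u += x - z                         # dual update
    return z
```

For these quadratic losses the consensus optimum is the mean of the `b[i]`, which the inexact iterates approach despite each client running only a few gradient steps per round.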
arXiv Detail & Related papers (2022-04-22T09:55:33Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms make descent steps scaled by approximate quasi-Newton information without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resource.
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- A Flexible Optimization Framework for Regularized Matrix-Tensor Factorizations with Linear Couplings [5.079136838868448]
We propose a flexible algorithmic framework for coupled matrix and tensor factorizations.
The framework facilitates the use of a variety of constraints, loss functions and couplings with linear transformations.
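A concrete instance of a coupled matrix-tensor loss can be sketched as below. This is only the simplest coupling (a shared first-mode factor with squared-error losses); the names and the weighting parameter `alpha` are assumptions, and the framework in the paper supports many more losses, constraints, and linear couplings.

```python
import numpy as np

def cmtf_objective(T, M, A, B, C, D, alpha=1.0):
    """Coupled matrix-tensor loss: tensor T is modeled by a CP
    decomposition with factors A, B, C, and the matrix M shares the
    first-mode factor A (M ~= A @ D.T).
    """
    R = A.shape[1]
    # mode-1 unfolding of T matches A times the Khatri-Rao product of B and C
    P = np.einsum('jr,kr->jkr', B, C).reshape(-1, R)   # Khatri-Rao product
    fit_T = np.linalg.norm(T.reshape(T.shape[0], -1) - A @ P.T) ** 2
    fit_M = np.linalg.norm(M - A @ D.T) ** 2           # coupled matrix fit
    return fit_T + alpha * fit_M
```

Minimizing this sum over all factors (with nonnegativity constraints, in the N-CMTF setting) is what couples the two data sources through the shared factor A.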
arXiv Detail & Related papers (2020-07-19T06:49:59Z)
- Efficient Nonnegative Tensor Factorization via Saturating Coordinate Descent [16.466065626950424]
We propose a novel fast and efficient NTF algorithm using the element selection approach.
Empirical analysis reveals that the proposed algorithm is scalable in terms of tensor size, density, and rank.
arXiv Detail & Related papers (2020-03-07T12:51:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.