DeepTensor: Low-Rank Tensor Decomposition with Deep Network Priors
- URL: http://arxiv.org/abs/2204.03145v1
- Date: Thu, 7 Apr 2022 01:09:58 GMT
- Title: DeepTensor: Low-Rank Tensor Decomposition with Deep Network Priors
- Authors: Vishwanath Saragadam, Randall Balestriero, Ashok Veeraraghavan,
Richard G. Baraniuk
- Abstract summary: DeepTensor is a framework for low-rank decomposition of matrices and tensors using deep generative networks.
We explore a range of real-world applications, including hyperspectral image denoising, 3D MRI tomography, and image classification.
- Score: 45.183204988990916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: DeepTensor is a computationally efficient framework for low-rank
decomposition of matrices and tensors using deep generative networks. We
decompose a tensor as the product of low-rank tensor factors (e.g., a matrix as
the outer product of two vectors), where each low-rank tensor is generated by a
deep network (DN) that is trained in a self-supervised manner to minimize the
mean-squared approximation error. Our key observation is that the implicit
regularization inherent in DNs enables them to capture nonlinear signal
structures (e.g., manifolds) that are out of the reach of classical linear
methods like the singular value decomposition (SVD) and principal component
analysis (PCA). Furthermore, in contrast to the SVD and PCA, whose performance
deteriorates when the tensor's entries deviate from additive white Gaussian
noise, we demonstrate that the performance of DeepTensor is robust to a wide
range of distributions. We validate that DeepTensor is a robust and
computationally efficient drop-in replacement for the SVD, PCA, nonnegative
matrix factorization (NMF), and similar decompositions by exploring a range of
real-world applications, including hyperspectral image denoising, 3D MRI
tomography, and image classification. In particular, DeepTensor offers a 6dB
signal-to-noise ratio improvement over standard denoising methods for signals
corrupted by Poisson noise and learns to decompose 3D tensors 60 times faster
than a single DN equipped with 3D convolutions.
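The core recipe described in the abstract can be sketched in the matrix case: each low-rank factor is the output of a small network with a fixed random input, and both networks are trained jointly by gradient descent on the mean-squared reconstruction error. The NumPy sketch below is a minimal illustration of that idea, not the authors' implementation; the network shapes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r, h = 30, 20, 3, 16          # matrix size, target rank, hidden width

# Noisy rank-r matrix to decompose.
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X += 0.01 * rng.standard_normal((m, n))

# Fixed random latent codes fed to the two factor-generating networks.
Zu, Zv = rng.standard_normal((m, h)), rng.standard_normal((n, h))
Wu1, Wu2 = 0.1 * rng.standard_normal((h, h)), 0.1 * rng.standard_normal((h, r))
Wv1, Wv2 = 0.1 * rng.standard_normal((h, h)), 0.1 * rng.standard_normal((h, r))

def factors():
    # One-hidden-layer networks generate the factors U (m x r) and V (n x r).
    Hu, Hv = np.tanh(Zu @ Wu1), np.tanh(Zv @ Wv1)
    return Hu, Hv, Hu @ Wu2, Hv @ Wv2

def rel_err(U, V):
    return np.linalg.norm(U @ V.T - X) / np.linalg.norm(X)

_, _, U, V = factors()
init_err = rel_err(U, V)

lr = 5e-4
for _ in range(6000):
    Hu, Hv, U, V = factors()
    E = U @ V.T - X                           # residual of X ~ U V^T
    gU, gV = E @ V, E.T @ U                   # dL/dU, dL/dV for L = 0.5*||E||^2
    gWu2, gWv2 = Hu.T @ gU, Hv.T @ gV         # backprop through output layers
    gWu1 = Zu.T @ (gU @ Wu2.T * (1 - Hu**2))  # ...and through the tanh layers
    gWv1 = Zv.T @ (gV @ Wv2.T * (1 - Hv**2))
    Wu1 -= lr * gWu1; Wu2 -= lr * gWu2
    Wv1 -= lr * gWv1; Wv2 -= lr * gWv2

_, _, U, V = factors()
final_err = rel_err(U, V)
print(f"relative error: {init_err:.3f} -> {final_err:.3f}")
```

Swapping the factor parameterization from free matrices to network outputs is what injects the implicit regularization the abstract refers to; the training loop itself is ordinary self-supervised MSE minimization.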
Related papers
- Low-Multi-Rank High-Order Bayesian Robust Tensor Factorization [7.538654977500241]
We propose a novel high-order TRPCA method, named Low-Multi-rank High-order Bayesian Robust Tensor Factorization (LMH-BRTF), within the Bayesian framework.
Specifically, we decompose the observed corrupted tensor into three parts, i.e., the low-rank component, the sparse component, and the noise component.
By constructing a low-rank model for the low-rank component based on the order-$d$ t-SVD, LMH-BRTF can automatically determine the tensor multi-rank.
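The three-way split above is easiest to see in the matrix case. The following NumPy sketch uses a simple alternating scheme in the spirit of robust-PCA heuristics, not the paper's Bayesian order-$d$ t-SVD model, to separate an observation into a low-rank part and a sparse-outlier part; the rank, threshold, and outlier magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 60, 4

# Synthetic observation: low-rank + sparse outliers + small dense noise.
L_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((n, n))
idx = rng.choice(n * n, size=n * n // 20, replace=False)   # 5% outliers
S_true.flat[idx] = rng.choice([-10.0, 10.0], size=idx.size)
M = L_true + S_true + 0.01 * rng.standard_normal((n, n))

S = np.zeros_like(M)
for _ in range(10):
    # Low-rank step: best rank-r approximation of M - S via truncated SVD.
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U[:, :r] * s[:r]) @ Vt[:r]
    # Sparse step: keep only large-magnitude residual entries as outliers.
    R = M - L
    S = np.where(np.abs(R) > 4.0, R, 0.0)

err = np.linalg.norm(L - L_true) / np.linalg.norm(L_true)
print(f"low-rank recovery error: {err:.3f}")
```

The high-order method replaces the matrix SVD with the t-SVD and infers the multi-rank and threshold automatically from the Bayesian model rather than fixing them by hand.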
arXiv Detail & Related papers (2023-11-10T06:15:38Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Latent Matrices for Tensor Network Decomposition and to Tensor Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point drop in accuracy.
arXiv Detail & Related papers (2022-10-07T08:19:50Z)
- Orthogonal Matrix Retrieval with Spatial Consensus for 3D Unknown-View Tomography [58.60249163402822]
Unknown-view tomography (UVT) reconstructs a 3D density map from its 2D projections at unknown, random orientations.
The proposed OMR approach is more robust and performs significantly better than the previous state of the art.
arXiv Detail & Related papers (2022-07-06T21:40:59Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- 2D+3D facial expression recognition via embedded tensor manifold regularization [16.98176664818354]
A novel approach via embedded tensor manifold regularization for 2D+3D facial expression recognition (FERETMR) is proposed.
We establish the first-order optimality condition in terms of stationary points, and then design a block coordinate descent (BCD) algorithm with convergence analysis.
Numerical results on BU-3DFE database and Bosphorus databases demonstrate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2022-01-29T06:11:00Z)
- Augmented Tensor Decomposition with Stochastic Optimization [46.16865811396394]
Real-world tensor data are usually high-ordered and have large dimensions with millions or billions of entries.
It is expensive to decompose the whole tensor with traditional algorithms.
This paper proposes augmented tensor decomposition, which effectively incorporates data augmentations to boost downstream classification.
arXiv Detail & Related papers (2021-06-15T06:29:05Z)
- Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network [19.717842489217684]
This paper is the first study on degeneracy in the tensor decomposition of convolutional kernels.
We present a novel method, which can stabilize the low-rank approximation of convolutional kernels and ensure efficient compression.
We evaluate our approach on popular CNN architectures for image classification and show that our method results in much lower accuracy degradation and provides consistent performance.
arXiv Detail & Related papers (2020-08-12T17:10:12Z)
- Robust Tensor Principal Component Analysis: Exact Recovery via Deterministic Model [5.414544833902815]
This paper proposes a new method for Robust Tensor Principal Component Analysis (RTPCA), based on the recently developed tensor-tensor product and tensor singular value decomposition (t-SVD).
arXiv Detail & Related papers (2020-08-05T16:26:10Z)
- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification [53.50708351813565]
We propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computational load at the same accuracy.
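SVD training keeps the layers in factored form throughout training, but its payoff can be illustrated post hoc with a plain truncated SVD, which replaces one dense layer by two thinner ones. The NumPy sketch below is an illustrative assumption (sizes and rank chosen arbitrarily), not that paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, rank = 256, 128, 16

# A weight matrix whose singular values decay fast (the situation that
# low-rank-promoting training is designed to produce).
W = rng.standard_normal((d_out, rank)) @ rng.standard_normal((rank, d_in))
W += 0.01 * rng.standard_normal((d_out, d_in))

# Truncated SVD: W ~ A @ B splits one d_out x d_in layer into a
# d_out x rank layer followed by a rank x d_in layer.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * np.sqrt(s[:rank])
B = np.sqrt(s[:rank])[:, None] * Vt[:rank]

orig_params = W.size
lowrank_params = A.size + B.size
err = np.linalg.norm(A @ B - W) / np.linalg.norm(W)
print(f"params: {orig_params} -> {lowrank_params}, relative error {err:.4f}")
```

Here the factored layer stores 6,144 parameters instead of 32,768, and a forward pass costs two thin matrix multiplies instead of one wide one; the same accounting motivates training in low-rank form directly.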
arXiv Detail & Related papers (2020-04-20T02:40:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.