Nonlinear Transform Induced Tensor Nuclear Norm for Tensor Completion
- URL: http://arxiv.org/abs/2110.08774v1
- Date: Sun, 17 Oct 2021 09:25:37 GMT
- Title: Nonlinear Transform Induced Tensor Nuclear Norm for Tensor Completion
- Authors: Ben-Zheng Li, Xi-Le Zhao, Teng-Yu Ji, Xiong-Jun Zhang, and Ting-Zhu
Huang
- Abstract summary: We propose a low-rank tensor completion (LRTC) model based on the NTTNN, together with a theoretical convergence guarantee for the PAM algorithm.
Our method outperforms state-of-the-art linear transform-based tensor nuclear norm (TNN) methods qualitatively and quantitatively.
- Score: 12.788874164701785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The linear transform-based tensor nuclear norm (TNN) methods have recently
obtained promising results for tensor completion. The main idea of this type of
methods is exploiting the low-rank structure of frontal slices of the targeted
tensor under a linear transform along the third mode. However, the
low-rankness of frontal slices is not significant under the family of linear
transforms. To better pursue the low-rank approximation, we propose a nonlinear
transform-based TNN (NTTNN). More concretely, the proposed nonlinear transform
is a composite transform consisting of the linear semi-orthogonal transform
along the third mode and the element-wise nonlinear transform on frontal slices
of the tensor under the linear semi-orthogonal transform, which are
indispensable and complementary in the composite transform to fully exploit the
underlying low-rankness. Based on the suggested low-rankness metric, i.e.,
NTTNN, we propose a low-rank tensor completion (LRTC) model. To tackle the
resulting nonlinear and nonconvex optimization model, we elaborately design the
proximal alternating minimization (PAM) algorithm and establish the theoretical
convergence guarantee of the PAM algorithm. Extensive experimental results on
hyperspectral images, multispectral images, and videos show that our method
outperforms state-of-the-art linear transform-based LRTC methods qualitatively
and quantitatively.
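The composite transform described in the abstract can be sketched in a few lines of numpy: a semi-orthogonal matrix is applied along the third mode, an element-wise nonlinearity is applied to the resulting frontal slices, and the TNN is the sum of the slices' nuclear norms. This is a minimal illustration, not the paper's implementation; the choice of `tanh` as the nonlinearity and the specific matrix `W` are assumptions for demonstration only.

```python
import numpy as np

def nttnn(X, W, phi=np.tanh):
    """Sketch of a nonlinear transform induced tensor nuclear norm.

    X   : (n1, n2, n3) real tensor.
    W   : (k, n3) semi-orthogonal matrix (W @ W.T = I_k), applied along mode 3.
    phi : element-wise nonlinearity on the transformed frontal slices
          (tanh is an illustrative choice, not necessarily the paper's).
    """
    # Linear semi-orthogonal transform along the third mode:
    # Y[:, :, j] = sum_t W[j, t] * X[:, :, t]
    Y = np.tensordot(X, W, axes=([2], [1]))          # shape (n1, n2, k)
    # Element-wise nonlinear transform on the frontal slices.
    Z = phi(Y)
    # TNN of the transformed tensor: sum of nuclear norms of frontal slices.
    return sum(np.linalg.norm(Z[:, :, j], ord='nuc') for j in range(Z.shape[2]))
```

In the paper this quantity serves as the low-rankness metric inside an LRTC objective, which is then minimized with a proximal alternating minimization (PAM) scheme; the sketch above only evaluates the metric itself.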
Related papers
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Gradient Descent Provably Solves Nonlinear Tomographic Reconstruction [60.95625458395291]
In computed tomography (CT) the forward model consists of a linear transform followed by an exponential nonlinearity based on the attenuation of light according to the Beer-Lambert Law.
We show that this approach reduces metal artifacts compared to a commercial reconstruction of a human skull with metal crowns.
arXiv Detail & Related papers (2023-10-06T00:47:57Z) - A Theory of Topological Derivatives for Inverse Rendering of Geometry [87.49881303178061]
We introduce a theoretical framework for differentiable surface evolution that allows discrete topology changes through the use of topological derivatives.
We validate the proposed theory with optimization of closed curves in 2D and surfaces in 3D to lend insights into limitations of current methods.
arXiv Detail & Related papers (2023-08-19T00:55:55Z) - Tangent Transformers for Composition, Privacy and Removal [58.280295030852194]
Tangent Attention Fine-Tuning (TAFT) is a method for fine-tuning linearized transformers.
arXiv Detail & Related papers (2023-07-16T18:31:25Z) - Tensor Factorization via Transformed Tensor-Tensor Product for Image
Alignment [3.0969191504482243]
We study the problem of a batch of linearly correlated image alignment, where the observed images are deformed by some unknown domain transformations.
By stacking these images as the frontal slices of a third-order tensor, we propose to explore the low-rankness of the underlying tensor.
arXiv Detail & Related papers (2022-12-12T05:52:26Z) - T$^2$LR-Net: An Unrolling Reconstruction Network Learning Transformed
Tensor Low-Rank prior for Dynamic MR Imaging [6.101233798770526]
We introduce a flexible model based on TTNN with the ability to exploit the tensor low-rank prior of a transformed domain.
We also introduce a model-based deep unrolling reconstruction network to learn the transformed tensor low-rank prior.
The proposed framework can provide improved recovery results compared with the state-of-the-art optimization-based and unrolling network-based methods.
arXiv Detail & Related papers (2022-09-08T14:11:02Z) - 2D+3D facial expression recognition via embedded tensor manifold
regularization [16.98176664818354]
A novel approach via embedded tensor manifold regularization for 2D+3D facial expression recognition (FERETMR) is proposed.
We establish the first-order optimality condition in terms of stationary points, and then design a block coordinate descent (BCD) algorithm with convergence analysis.
Numerical results on BU-3DFE database and Bosphorus databases demonstrate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2022-01-29T06:11:00Z) - Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for
Multi-Dimensional Image Recovery [27.34643415429293]
We propose a multilayer neural network to learn a nonlinear transform via the observed tensor data under self-supervision.
The proposed network makes use of low-rank representation of transformed tensors and data-fitting between the observed tensor and the reconstructed tensor to construct the nonlinear transformation.
arXiv Detail & Related papers (2021-05-29T14:56:51Z) - LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.