Nonlinear Transform Induced Tensor Nuclear Norm for Tensor Completion
- URL: http://arxiv.org/abs/2110.08774v1
- Date: Sun, 17 Oct 2021 09:25:37 GMT
- Title: Nonlinear Transform Induced Tensor Nuclear Norm for Tensor Completion
- Authors: Ben-Zheng Li, Xi-Le Zhao, Teng-Yu Ji, Xiong-Jun Zhang, and Ting-Zhu
Huang
- Abstract summary: We propose a low-rank tensor completion (LRTC) model based on the NTTNN and establish the theoretical convergence of the PAM algorithm.
Our method outperforms linear transform-based state-of-the-art tensor nuclear norm (TNN) methods both qualitatively and quantitatively.
- Score: 12.788874164701785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The linear transform-based tensor nuclear norm (TNN) methods have recently
obtained promising results for tensor completion. The main idea of this type of
method is to exploit the low-rank structure of the frontal slices of the target
tensor under a linear transform along the third mode. However, the low-rankness
of the frontal slices is not significant under the family of linear transforms.
To better pursue the low-rank approximation, we propose a nonlinear
transform-based TNN (NTTNN). More concretely, the proposed nonlinear transform
is a composite transform consisting of the linear semi-orthogonal transform
along the third mode and the element-wise nonlinear transform on frontal slices
of the tensor under the linear semi-orthogonal transform, which are
indispensable and complementary in the composite transform to fully exploit the
underlying low-rankness. Based on the suggested low-rankness metric, i.e.,
NTTNN, we propose a low-rank tensor completion (LRTC) model. To tackle the
resulting nonlinear and nonconvex optimization model, we elaborately design the
proximal alternating minimization (PAM) algorithm and establish the theoretical
convergence guarantee of the PAM algorithm. Extensive experimental results on
hyperspectral images, multispectral images, and videos show that our method
outperforms linear transform-based state-of-the-art LRTC methods qualitatively
and quantitatively.
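The composite transform described in the abstract can be sketched in a few lines: a semi-orthogonal matrix applied along the third mode, followed by an element-wise nonlinearity on the transformed frontal slices, with the NTTNN measured as a sum of frontal-slice nuclear norms. This is only an illustrative sketch under assumptions: the paper designs the nonlinearity and learns the transform, whereas `tanh` and the helper name `nttnn` below are stand-ins.

```python
import numpy as np

def nttnn(X, U, phi=np.tanh):
    """Illustrative sketch of a nonlinear transform induced TNN.

    X   : (n1, n2, n3) tensor
    U   : (r, n3) semi-orthogonal matrix (U @ U.T = I_r), applied along mode-3
    phi : element-wise nonlinearity (a stand-in; the paper specifies its own)
    """
    # Linear semi-orthogonal transform along the third mode
    Y = np.tensordot(X, U, axes=([2], [1]))   # shape (n1, n2, r)
    # Element-wise nonlinear transform on the transformed frontal slices
    Y = phi(Y)
    # Sum of nuclear norms of the frontal slices
    return sum(np.linalg.norm(Y[:, :, k], ord='nuc') for k in range(Y.shape[2]))
```

Note that a semi-orthogonal `U` with `r < n3` also compresses the third mode, which is part of why the abstract calls the two components complementary.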
Related papers
- On Disentangled Training for Nonlinear Transform in Learned Image Compression [59.66885464492666]
Learned image compression (LIC) has demonstrated superior rate-distortion (R-D) performance compared to traditional codecs.
Existing LIC methods overlook the slow convergence caused by compacting energy in learning nonlinear transforms.
We propose a linear auxiliary transform (AuxT) to disentangle energy compaction in training nonlinear transforms.
arXiv Detail & Related papers (2025-01-23T15:32:06Z) - OTLRM: Orthogonal Learning-based Low-Rank Metric for Multi-Dimensional Inverse Problems [14.893020063373022]
We introduce a novel data-driven generative low-rank t-SVD model based on the learnable orthogonal transform.
We also propose a low-rank solver as a generalization of SVT, which utilizes an efficient representation of generative networks to obtain low-rank structures.
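The classical singular value thresholding (SVT) operator that the blurb says OTLRM generalizes is the proximal operator of the nuclear norm: shrink every singular value by a threshold and reconstruct. A minimal sketch, with `svt` as a hypothetical helper name:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox operator of the nuclear norm.

    Shrinks each singular value of M by tau (clipping at zero) and
    reconstructs the matrix from the shrunken spectrum.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt
```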
arXiv Detail & Related papers (2024-12-15T12:28:57Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Gradient Descent Provably Solves Nonlinear Tomographic Reconstruction [60.95625458395291]
In computed tomography (CT) the forward model consists of a linear transform followed by an exponential nonlinearity based on the attenuation of light according to the Beer-Lambert Law.
We show that this approach reduces metal artifacts compared to a commercial reconstruction of a human skull with metal crowns.
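The forward model named in this blurb (a linear transform followed by an exponential nonlinearity from the Beer-Lambert law) can be written compactly. In the sketch below, the matrix `A` stands in for the linear projection operator and `ct_forward` is a hypothetical name, not the paper's API:

```python
import numpy as np

def ct_forward(x, A):
    """Nonlinear CT forward model: linear projection of the attenuation
    image x, then Beer-Lambert exponential attenuation of the intensity."""
    return np.exp(-A @ x)
```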
arXiv Detail & Related papers (2023-10-06T00:47:57Z) - A Theory of Topological Derivatives for Inverse Rendering of Geometry [87.49881303178061]
We introduce a theoretical framework for differentiable surface evolution that allows discrete topology changes through the use of topological derivatives.
We validate the proposed theory with optimization of closed curves in 2D and surfaces in 3D to lend insights into limitations of current methods.
arXiv Detail & Related papers (2023-08-19T00:55:55Z) - Tensor Factorization via Transformed Tensor-Tensor Product for Image
Alignment [3.0969191504482243]
We study the problem of aligning a batch of linearly correlated images, where the observed images are deformed by some unknown domain transformations.
By stacking these images as the frontal slices of a third-order tensor, we propose to explore the low-rankness of the underlying tensor.
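The stacking step described above is a one-liner in NumPy: placing each image at a frontal slice `T[:, :, k]` of a third-order tensor. The shapes below are illustrative only:

```python
import numpy as np

# Three toy 4x5 "images", each filled with a distinct constant
images = [np.full((4, 5), float(k)) for k in range(3)]

# Stack along axis 2 so image k becomes the frontal slice T[:, :, k]
T = np.stack(images, axis=2)   # shape (4, 5, 3)
```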
arXiv Detail & Related papers (2022-12-12T05:52:26Z) - Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for
Multi-Dimensional Image Recovery [27.34643415429293]
We propose a multilayer neural network to learn a nonlinear transform via the observed tensor data under self-supervision.
The proposed network makes use of low-rank representation of transformed tensors and data-fitting between the observed tensor and the reconstructed tensor to construct the nonlinear transformation.
arXiv Detail & Related papers (2021-05-29T14:56:51Z) - LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.