Tensor Factorization via Transformed Tensor-Tensor Product for Image
Alignment
- URL: http://arxiv.org/abs/2212.05719v2
- Date: Tue, 13 Dec 2022 10:43:04 GMT
- Title: Tensor Factorization via Transformed Tensor-Tensor Product for Image
Alignment
- Authors: Sijia Xia, Duo Qiu, and Xiongjun Zhang
- Abstract summary: We study the problem of aligning a batch of linearly correlated images, where the observed images are deformed by some unknown domain transformations.
By stacking these images as the frontal slices of a third-order tensor, we propose to explore the low-rankness of the underlying tensor.
- Score: 3.0969191504482243
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we study the problem of aligning a batch of linearly
correlated images, where the observed images are deformed by unknown domain
transformations and simultaneously corrupted by additive Gaussian noise and
sparse noise. By stacking these images as the frontal slices of a third-order
tensor, we propose to utilize the tensor factorization method via transformed
tensor-tensor product to explore the low-rankness of the underlying tensor,
which is factorized into the product of two smaller tensors via transformed
tensor-tensor product under any unitary transformation. The main advantage of
the transformed tensor-tensor product is that its computational complexity is
lower than that of existing methods based on the transformed tensor nuclear norm.
Moreover, the tensor $\ell_p$ $(0<p<1)$ norm is employed to characterize the
sparsity of sparse noise and the tensor Frobenius norm is adopted to model
additive Gaussian noise. A generalized Gauss-Newton algorithm is designed to
solve the resulting model by linearizing the domain transformations and a
proximal Gauss-Seidel algorithm is developed to solve the corresponding
subproblem. Furthermore, the convergence of the proximal Gauss-Seidel algorithm
is established, whose convergence rate is also analyzed based on the
Kurdyka-Łojasiewicz property. Extensive numerical experiments on real-world
image datasets are carried out to demonstrate the superior performance of the
proposed method as compared to several state-of-the-art methods in both
accuracy and computational time.
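The transformed tensor-tensor product underlying the model can be sketched in three steps: apply a unitary transform along the third mode, multiply matching frontal slices in the transform domain, and transform back. Below is a minimal NumPy sketch under the assumption of real tensors and an orthogonal transform; the function name and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def t_product_transformed(A, B, Phi):
    """Transformed tensor-tensor product A *_Phi B (illustrative sketch).

    A: (n1, r, n3), B: (r, n2, n3), Phi: (n3, n3) unitary transform.
    Returns C of shape (n1, n2, n3).
    """
    # Step 1: move to the transform domain (mode-3 product with Phi).
    A_hat = np.einsum('ijk,lk->ijl', A, Phi)
    B_hat = np.einsum('ijk,lk->ijl', B, Phi)
    # Step 2: slice-wise matrix products in the transform domain.
    C_hat = np.einsum('irk,rjk->ijk', A_hat, B_hat)
    # Step 3: return via the inverse transform Phi^H (= Phi^T when real).
    return np.einsum('ijk,lk->ijl', C_hat, Phi.conj().T)
```

Choosing `Phi` as the unitary DFT matrix recovers the classical t-product; with the identity it reduces to independent slice-wise matrix products. The complexity advantage claimed in the abstract comes from working with the two smaller factor tensors rather than repeatedly computing a full transformed tensor nuclear norm.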
Related papers
- Low-Rank Tensor Learning by Generalized Nonconvex Regularization [25.115066273660478]
We study the problem of low-rank tensor learning, where only a few samples of the underlying tensor are observed.
A family of nonconvex regularization functions is employed to characterize the low-rankness of the underlying tensor.
A majorization-minimization algorithm is designed to solve the resulting model.
arXiv Detail & Related papers (2024-10-24T03:33:20Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Understanding Deflation Process in Over-parametrized Tensor
Decomposition [17.28303004783945]
We study the training dynamics for gradient flow on over-parametrized tensor decomposition problems.
Empirically, such training process often first fits larger components and then discovers smaller components.
arXiv Detail & Related papers (2021-06-11T18:51:36Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for
Multi-Dimensional Image Recovery [27.34643415429293]
We propose a multilayer neural network to learn a nonlinear transform via the observed tensor data under self-supervision.
The proposed network makes use of low-rank representation of transformed tensors and data-fitting between the observed tensor and the reconstructed tensor to construct the nonlinear transformation.
arXiv Detail & Related papers (2021-05-29T14:56:51Z) - Regularization by Denoising Sub-sampled Newton Method for Spectral CT
Multi-Material Decomposition [78.37855832568569]
We propose to solve a model-based maximum a posteriori problem to reconstruct multi-material images, with application to spectral CT.
In particular, we propose to solve a regularized optimization problem based on a plug-in image-denoising function.
We show numerical and experimental results for spectral CT materials decomposition.
arXiv Detail & Related papers (2021-03-25T15:20:10Z) - Alternating linear scheme in a Bayesian framework for low-rank tensor
approximation [5.833272638548154]
We find a low-rank representation for a given tensor by solving a Bayesian inference problem.
We present an algorithm that performs the unscented transform in tensor train format.
arXiv Detail & Related papers (2020-12-21T10:15:30Z) - Hyperspectral Image Denoising with Partially Orthogonal Matrix Vector
Tensor Factorization [42.56231647066719]
Hyperspectral images (HSI) have advantages over natural images for various applications due to the extra spectral information.
During acquisition, they are often contaminated by severe noise, including Gaussian noise, impulse noise, deadlines, and stripes.
We present an HSI restoration method named smooth and robust low-rank tensor recovery.
arXiv Detail & Related papers (2020-06-29T02:10:07Z) - Generalizing Convolutional Neural Networks for Equivariance to Lie
Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.