Fully-Connected Tensor Network Decomposition for Robust Tensor
Completion Problem
- URL: http://arxiv.org/abs/2110.08754v1
- Date: Sun, 17 Oct 2021 08:12:50 GMT
- Title: Fully-Connected Tensor Network Decomposition for Robust Tensor
Completion Problem
- Authors: Yun-Yang Liu, Xi-Le Zhao, Guang-Jing Song, Yu-Bang Zheng, Ting-Zhu
Huang
- Abstract summary: We propose an $\textbf{FCTN}$-based $\textbf{r}$obust $\textbf{c}$onvex optimization model (RC-FCTN) for the RTC problem.
For solving the constrained optimization model RC-FCTN, we develop an alternating direction method of multipliers (ADMM)-based algorithm.
A proximal alternating minimization (PAM)-based algorithm is developed to solve the proposed RNC-FCTN.
- Score: 9.580645211308557
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The robust tensor completion (RTC) problem, which aims to reconstruct a
low-rank tensor from a partially observed tensor contaminated by a sparse tensor,
has received increasing attention. In this paper, by leveraging the superior
expression of the fully-connected tensor network (FCTN) decomposition, we
propose an $\textbf{FCTN}$-based $\textbf{r}$obust $\textbf{c}$onvex
optimization model (RC-FCTN) for the RTC problem. Then, we rigorously establish
the exact recovery guarantee for the RC-FCTN. For solving the constrained
optimization model RC-FCTN, we develop an alternating direction method of
multipliers (ADMM)-based algorithm, which enjoys the global convergence
guarantee. Moreover, we suggest an $\textbf{FCTN}$-based $\textbf{r}$obust
$\textbf{n}$on$\textbf{c}$onvex optimization model (RNC-FCTN) for the RTC
problem. A proximal alternating minimization (PAM)-based algorithm is developed
to solve the proposed RNC-FCTN. Meanwhile, we theoretically derive the
convergence of the PAM-based algorithm. Comprehensive numerical experiments in
several applications, such as video completion and video background
subtraction, demonstrate that the proposed methods are superior to several
state-of-the-art methods.
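For intuition, a schematic form of the convex model is sketched below; this assumes a weighted sum of nuclear norms of generalized unfoldings $\mathbf{L}_{[k]}$ as the convex surrogate of the FCTN rank (the precise surrogate, weights $\alpha_k$, and unfoldings are defined in the paper, so treat this only as a sketch):

```latex
\min_{\mathcal{L},\,\mathcal{S}}\ \sum_{k} \alpha_k \big\| \mathbf{L}_{[k]} \big\|_{*} + \lambda \| \mathcal{S} \|_{1}
\quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{L} + \mathcal{S}) = \mathcal{P}_{\Omega}(\mathcal{O}),
```

where $\mathcal{O}$ is the observed tensor, $\Omega$ the set of observed indices, $\mathcal{L}$ the low-rank component, and $\mathcal{S}$ the sparse component. The minimal ADMM sketch below solves the matrix analogue of this model (nuclear norm plus $\ell_1$ on observed entries); all function names are illustrative, and this stands in for, rather than reproduces, the paper's FCTN-based algorithm:

```python
import numpy as np

def soft_threshold(X, tau):
    # proximal operator of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # singular value thresholding: proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * soft_threshold(s, tau)) @ Vt

def robust_completion_admm(O, mask, lam=None, mu=None, n_iter=300):
    """min ||L||_* + lam*||S||_1  s.t.  P_Omega(L + S) = P_Omega(O)."""
    m, n = O.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(O).sum() + 1e-12)
    L, S, Y = (np.zeros_like(O) for _ in range(3))  # Y: dual variable
    for _ in range(n_iter):
        L = svt(O - S + Y / mu, 1.0 / mu)            # low-rank update
        R = O - L + Y / mu
        # sparse update: soft-threshold observed entries; unobserved
        # entries are unconstrained and absorbed into S
        S = np.where(mask, soft_threshold(R, lam / mu), R)
        Y = Y + mu * (O - L - S)                     # dual ascent step
    return L, S
```

Calling `L, S = robust_completion_admm(O, mask)` on a video matrix (frames as columns) separates a low-rank background `L` from sparse foreground corruptions `S`, mirroring the video background subtraction experiment mentioned in the abstract.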
Related papers
- Computational and Statistical Guarantees for Tensor-on-Tensor Regression with Tensor Train Decomposition [27.29463801531576]
We study the theoretical and algorithmic aspects of the TT-based ToT regression model.
We propose two algorithms, IHT and RGD, that efficiently find solutions with guaranteed error bounds.
We establish the linear convergence rate of both IHT and RGD.
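Since the blurb above names IHT and RGD without unpacking them, here is a minimal matrix-case sketch of iterative hard thresholding (IHT): a gradient step followed by projection onto the low-rank set. The paper's version projects onto low-TT-rank tensors (e.g., via TT-SVD); this stand-in uses a truncated SVD, and all names are illustrative:

```python
import numpy as np

def project_rank(X, r):
    # hard-thresholding step: project onto matrices of rank <= r
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def iht(O, mask, r, eta=1.0, n_iter=300):
    """IHT for  min 0.5*||P_Omega(X - O)||^2  s.t.  rank(X) <= r."""
    X = np.zeros_like(O)
    for _ in range(n_iter):
        grad = mask * (X - O)              # gradient of the masked loss
        X = project_rank(X - eta * grad, r)
    return X
```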
arXiv Detail & Related papers (2024-06-10T03:51:38Z)
- Global Convergence of Decentralized Retraction-Free Optimization on the Stiefel Manifold [12.414718831844041]
We analyze DRFGT, a retraction-free gradient tracking method for decentralized optimization on the Stiefel manifold, and show that it achieves global convergence.
The analysis covers DRFGT running over a network of cooperating agents.
arXiv Detail & Related papers (2024-05-19T15:50:57Z)
- ALEXR: An Optimal Single-Loop Algorithm for Convex Finite-Sum Coupled Compositional Stochastic Optimization [53.14532968909759]
We introduce an efficient single-loop primal-dual block-coordinate algorithm, dubbed ALEXR.
We establish the convergence rates of ALEXR in both convex and strongly convex cases under smoothness and non-smoothness conditions.
We present lower complexity bounds to demonstrate that the convergence rates of ALEXR are optimal among first-order block-coordinate algorithms for the considered class of cFCCO problems.
arXiv Detail & Related papers (2023-12-04T19:00:07Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) methods have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled architectures and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy across a range of large-scale datasets.
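As an illustration of what a random feature approximation buys, the sketch below builds classical random Fourier features for an RBF kernel; RFAD approximates the NNGP kernel (using random networks) rather than the RBF kernel, so this only conveys the general idea and all names are illustrative:

```python
import numpy as np

def random_fourier_features(X, n_features=1024, gamma=1.0, seed=0):
    """Features phi(X) with phi(x) @ phi(y) ~= exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

With such features, kernel regression on the distilled set reduces to linear regression in a finite feature space, which is the kind of reduction that makes the reported speedups over KIP possible.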
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Latent Matrices for Tensor Network Decomposition and to Tensor Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point drop in accuracy.
arXiv Detail & Related papers (2022-10-07T08:19:50Z)
- Nonlocal Patch-Based Fully-Connected Tensor Network Decomposition for Remote Sensing Image Inpainting [5.81423357257872]
This paper introduces the FCTN decomposition to the whole RSI and its nonlocal self-similarity (NSS) groups, and proposes a novel nonlocal patch-based FCTN decomposition for RSI inpainting.
The proposed method achieves state-of-the-art inpainting performance among all compared methods.
arXiv Detail & Related papers (2021-09-13T11:49:29Z)
- Multi-Tensor Network Representation for High-Order Tensor Completion [25.759851542474447]
This work studies the problem of high-dimensional data (referred to as tensors) completion from partially observed samplings.
We consider that a tensor is a superposition of multiple low-rank components.
In this paper, we propose a fundamental tensor decomposition framework: Multi-Tensor Network decomposition (MTNR).
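In schematic form, the superposition model stated above reads (a sketch; the specific tensor network format of each component is defined in the paper):

```latex
\mathcal{X} = \sum_{k=1}^{K} \mathcal{X}_k, \qquad \text{each } \mathcal{X}_k \text{ low-rank in its own tensor network format.}
```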
arXiv Detail & Related papers (2021-09-09T03:50:19Z)
- Lattice partition recovery with dyadic CART [79.96359947166592]
We study piece-wise constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice.
Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literature.
In this paper we consider instead the problem of partition recovery, i.e., of estimating the partition of the lattice induced by the constancy regions of the unknown signal.
We prove that a DCART-based procedure consistently estimates the underlying partition at a rate of order $\sigma^2 k^*$
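In symbols, the observation model described above is (a sketch; $L_{d,n}$ for the $d$-dimensional lattice and $k^*$ for the number of constancy regions are notation assumed here):

```latex
y_i = \theta^*_i + \epsilon_i, \qquad \epsilon_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2), \qquad i \in L_{d,n},
```

where $\theta^*$ is piecewise constant and partition recovery means estimating the partition of $L_{d,n}$ induced by the constancy regions of $\theta^*$.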
arXiv Detail & Related papers (2021-05-27T23:41:01Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.