JULIA: Joint Multi-linear and Nonlinear Identification for Tensor
Completion
- URL: http://arxiv.org/abs/2202.00071v1
- Date: Mon, 31 Jan 2022 20:18:41 GMT
- Title: JULIA: Joint Multi-linear and Nonlinear Identification for Tensor
Completion
- Authors: Cheng Qian, Kejun Huang, Lucas Glass, Rakshith S. Srinivasa, and
Jimeng Sun
- Abstract summary: This paper proposes a Joint mUlti-linear and nonLinear IdentificAtion (JULIA) framework for large-scale tensor completion.
Experiments on six real large-scale tensors demonstrate that JULIA outperforms many existing tensor completion algorithms.
- Score: 46.27248186328502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor completion aims at imputing missing entries from a partially observed
tensor. Existing tensor completion methods often assume either multi-linear or
nonlinear relationships between latent components.
However, real-world tensors have much more complex patterns where both
multi-linear and nonlinear relationships may coexist. In such cases, the
existing methods are insufficient to describe the data structure. This paper
proposes a Joint mUlti-linear and nonLinear IdentificAtion (JULIA) framework
for large-scale tensor completion. JULIA unifies the multi-linear and nonlinear
tensor completion models with several advantages over the existing methods: 1)
Flexible model selection, i.e., it fits a tensor by modeling its values as a
combination of multi-linear and nonlinear components; 2) Compatible with
existing nonlinear tensor completion methods; 3) Efficient training based on a
well-designed alternating optimization approach. Experiments on six real
large-scale tensors demonstrate that JULIA outperforms many existing tensor
completion algorithms. Furthermore, JULIA can improve the performance of a
class of nonlinear tensor completion methods. The results show that in some
large-scale tensor completion scenarios, baseline methods with JULIA are able
to achieve up to 55% lower root mean-squared error and a 67% reduction in
computational cost.
Related papers
- Scalable tensor methods for nonuniform hypergraphs [0.18434042562191813]
A recently proposed adjacency tensor is applicable to nonuniform hypergraphs, but is prohibitively costly to form and analyze in practice.
We develop tensor times same vector (TTSV) algorithms which improve complexity from $O(n^r)$ to a low-degree polynomial in $r$.
We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms.
arXiv Detail & Related papers (2023-06-30T17:41:58Z)
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
- Slimmable Networks for Contrastive Self-supervised Learning [69.9454691873866]
Self-supervised learning makes significant progress in pre-training large models, but struggles with small models.
We introduce another one-stage solution to obtain pre-trained small models without the need for extra teachers.
A slimmable network consists of a full network and several weight-sharing sub-networks, which can be pre-trained once to obtain various networks.
arXiv Detail & Related papers (2022-09-30T15:15:05Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [172.61581010141978]
Certifiable robustness is a desirable property for adopting deep neural networks (DNNs) in safety-critical scenarios.
We propose a novel solution to strategically manipulate neurons, by "grafting" appropriate levels of linearity.
arXiv Detail & Related papers (2022-06-15T22:42:29Z)
- Tensor Full Feature Measure and Its Nonconvex Relaxation Applications to Tensor Recovery [1.8899300124593645]
We propose a new tensor sparsity measure called the Full Feature Measure (FFM).
It can simultaneously describe the feature information of each dimension and connect the Tucker rank with the tensor tube rank.
Two efficient models based on FFM are proposed, and two Alternating Direction Method of Multipliers (ADMM) algorithms are developed to solve them.
arXiv Detail & Related papers (2021-09-25T01:44:34Z)
- Efficient Tensor Completion via Element-wise Weighted Low-rank Tensor Train with Overlapping Ket Augmentation [18.438177637687357]
We propose a novel tensor completion approach via the element-wise weighted technique.
We specifically consider the recovery quality of edge elements from adjacent blocks.
Our experimental results demonstrate that the proposed algorithm TWMac-TT outperforms several other competing tensor completion methods.
arXiv Detail & Related papers (2021-09-13T06:50:37Z)
- MLCTR: A Fast Scalable Coupled Tensor Completion Based on Multi-Layer Non-Linear Matrix Factorization [3.6978630614152013]
This paper focuses on the embedding-learning aspect of the tensor completion problem and proposes a new multi-layer neural network architecture for factorization and completion (MLCTR).
The architecture offers several advantages: a series of low-rank matrix factorization building blocks to minimize overfitting, interleaved nonlinear transfer functions in each layer, and by-pass connections that mitigate the vanishing-gradient problem and allow deeper networks (a minimal sketch of this block structure appears after this list).
Our algorithm is highly efficient for imputing missing values in the EPS data.
arXiv Detail & Related papers (2021-09-04T03:08:34Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations [49.931849672492305]
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multiresolution Tensor Completion (MTC) model to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
- A Solution for Large Scale Nonlinear Regression with High Rank and Degree at Constant Memory Complexity via Latent Tensor Reconstruction [0.0]
This paper proposes a novel method for learning highly nonlinear, multivariate functions from examples.
Our method takes advantage of the property that continuous functions can be approximated by polynomials, which in turn are representable by tensors (a minimal illustration of this correspondence appears after this list).
For learning the model, we present an efficient gradient-based algorithm that can be implemented in linear time.
arXiv Detail & Related papers (2020-05-04T14:49:14Z)
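The MLCTR entry above describes its network structurally: stacked low-rank matrix-factorization blocks, interleaved nonlinear transfer functions, and by-pass connections. As a rough illustration only (the layer width, rank, depth, and tanh activation are assumptions, not the authors' architecture), such a block stack can be sketched as:

```python
import numpy as np

def mlctr_like_forward(x, blocks):
    # Each block parameterizes a square weight matrix W ~= U @ V with
    # U: (n, r), V: (r, n), r << n (the low-rank factorization building
    # block), applies a nonlinear transfer function, and adds the input
    # back through a by-pass (residual) connection.
    h = x
    for U, V in blocks:
        h = h + np.tanh(h @ U @ V)
    return h

rng = np.random.default_rng(0)
n, r, depth = 32, 4, 3
blocks = [(rng.normal(scale=0.1, size=(n, r)),
           rng.normal(scale=0.1, size=(r, n))) for _ in range(depth)]
emb = mlctr_like_forward(rng.normal(size=(8, n)), blocks)
print(emb.shape)  # (8, 32): eight embeddings in, eight refined embeddings out
```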
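The last entry rests on the chain "continuous functions → polynomials → tensors". The first link is classical polynomial approximation; the sketch below illustrates the second under the assumption of a rank-R CP coefficient tensor: a degree-3 homogeneous polynomial ⟨T, x⊗x⊗x⟩ collapses to R inner products per mode, so it can be evaluated without ever materializing T. The dimension and rank here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, R = 5, 3
A, B, C = (rng.normal(size=(d, R)) for _ in range(3))
x = rng.normal(size=d)

# Coefficient tensor of a degree-3 homogeneous polynomial, built in
# rank-R CP form: T = sum_r a_r (outer) b_r (outer) c_r.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

f_dense = np.einsum('ijk,i,j,k->', T, x, x, x)    # O(d^3): contract full T
f_cp = ((A.T @ x) * (B.T @ x) * (C.T @ x)).sum()  # O(dR): CP structure only
assert np.isclose(f_dense, f_cp)
```

This is why low-rank tensor models can express high-degree nonlinear regressors at constant memory: only the factor matrices, never the dense coefficient tensor, need to be stored.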