Efficient Structure-preserving Support Tensor Train Machine
- URL: http://arxiv.org/abs/2002.05079v3
- Date: Tue, 3 Aug 2021 16:57:36 GMT
- Title: Efficient Structure-preserving Support Tensor Train Machine
- Authors: Kirandeep Kour, Sergey Dolgov, Martin Stoll and Peter Benner
- Abstract summary: We develop the Tensor Train Multi-way Multi-level Kernel (TT-MMK), which combines the simplicity of the Canonical Polyadic decomposition, the classification power of the Dual Structure-preserving Support Vector Machine, and the reliability of the Tensor Train approximation.
We show by experiments that the TT-MMK method is usually more reliable, less sensitive to tuning parameters, and gives higher prediction accuracy in SVM classification when benchmarked against other state-of-the-art techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An increasing amount of collected data takes the form of high-dimensional multi-way arrays
(tensors), and it is crucial for efficient learning algorithms to exploit this
tensorial structure as much as possible. The ever-present curse of
dimensionality for high-dimensional data and the loss of structure when
vectorizing the data motivate the use of tailored low-rank tensor
classification methods. In the presence of small amounts of training data,
kernel methods offer an attractive choice as they provide the possibility for a
nonlinear decision boundary. We develop the Tensor Train Multi-way Multi-level
Kernel (TT-MMK), which combines the simplicity of the Canonical Polyadic
decomposition, the classification power of the Dual Structure-preserving
Support Vector Machine, and the reliability of the Tensor Train (TT)
approximation. We show by experiments that the TT-MMK method is usually more
reliable computationally, less sensitive to tuning parameters, and gives higher
prediction accuracy in the SVM classification when benchmarked against other
state-of-the-art techniques.
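The following is a minimal NumPy/scikit-learn sketch of the pipeline the abstract describes: a CP decomposition per sample, a dual structure-preserving (DuSK-style) kernel evaluated on the CP factors, and an SVM trained on the precomputed Gram matrix. The TT rounding that TT-MMK inserts to make the CP step reliable is omitted for brevity, and all ranks, parameters, and helper names are illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def khatri_rao(a, b):
    """Column-wise Kronecker product; row (i, j) holds a[i, :] * b[j, :]."""
    return (a[:, None, :] * b[None, :, :]).reshape(-1, a.shape[1])

def cp_als(x, rank, n_iter=30, seed=0):
    """Rank-R CP decomposition of a 3-way array by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((n, rank)) for n in x.shape]
    unfold = [np.moveaxis(x, m, 0).reshape(x.shape[m], -1) for m in range(3)]
    for _ in range(n_iter):
        for m in range(3):
            a, b = [factors[k] for k in range(3) if k != m]
            gram = (a.T @ a) * (b.T @ b)          # KR^T KR via Hadamard product
            factors[m] = unfold[m] @ khatri_rao(a, b) @ np.linalg.pinv(gram)
    return factors

def dusk_kernel(fx, fy, gamma=0.5):
    """Dual structure-preserving kernel: sum over all rank pairs of
    products of Gaussian kernels between corresponding factor columns."""
    rank = fx[0].shape[1]
    k = np.ones((rank, rank))
    for am, bm in zip(fx, fy):
        d2 = ((am[:, :, None] - bm[:, None, :]) ** 2).sum(axis=0)
        k *= np.exp(-gamma * d2)                  # one mode at a time
    return k.sum()

# Toy pipeline: decompose every sample once, build the Gram matrix, train SVM.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5, 5, 5))            # 20 third-order tensor samples
y = rng.integers(0, 2, 20)
facs = [cp_als(t, rank=3) for t in X]
G = np.array([[dusk_kernel(fa, fb) for fb in facs] for fa in facs])
clf = SVC(kernel="precomputed").fit(G, y)
```

Because each sample is decomposed once up front, the kernel evaluations only touch the small factor matrices rather than the full tensors.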
Related papers
- Fast and Provable Tensor Robust Principal Component Analysis via Scaled
Gradient Descent [30.299284742925852]
This paper tackles tensor robust principal component analysis (RPCA), which aims to recover a low-rank tensor from observations contaminated by sparse corruptions.
We show that the proposed algorithm achieves better and more scalable performance than state-of-the-art matrix and tensor RPCA algorithms.
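As a hedged illustration of the scaled-gradient-descent idea in the matrix case (the paper itself works with tensors): preconditioning the factor updates by (V^T V)^{-1} and (U^T U)^{-1} removes the dependence on the condition number of the low-rank part; soft-thresholding stands in here for the paper's sparsification operator, and all parameters are illustrative.

```python
import numpy as np

def scaled_gd_rpca(y, rank, lam=0.1, eta=0.5, n_iter=100):
    """Recover Y ~ L + S with L = U V^T low-rank and S sparse, using
    preconditioned ("scaled") gradient steps on the factors U and V."""
    u0, s0, v0t = np.linalg.svd(y, full_matrices=False)   # spectral init
    U = u0[:, :rank] * np.sqrt(s0[:rank])
    V = v0t[:rank].T * np.sqrt(s0[:rank])
    for _ in range(n_iter):
        D = y - U @ V.T
        S = np.sign(D) * np.maximum(np.abs(D) - lam, 0.0)  # sparse part
        G = U @ V.T + S - y                                # gradient residual
        U, V = (U - eta * G @ V @ np.linalg.inv(V.T @ V),
                V - eta * G.T @ U @ np.linalg.inv(U.T @ U))
    return U @ V.T, S
```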
arXiv Detail & Related papers (2022-06-18T04:01:32Z)
- Truncated tensor Schatten p-norm based approach for spatiotemporal traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including element-wise random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers (ADMM).
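The update structure can be illustrated with a much-simplified ADMM sketch: nuclear-norm matrix imputation with singular value thresholding, in place of the paper's truncated Schatten p-norm over tensor unfoldings. Parameters and names are illustrative.

```python
import numpy as np

def svt(a, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return u @ (np.maximum(s - tau, 0.0)[:, None] * vt)

def admm_impute(m_obs, mask, rho=1.0, n_iter=200):
    """Impute missing entries of a matrix by nuclear-norm minimization
    via ADMM; `mask` is True on observed entries."""
    x = np.where(mask, m_obs, 0.0)
    z = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        z = svt(x + u, 1.0 / rho)       # low-rank update
        x = z - u                       # data-consistency update
        x[mask] = m_obs[mask]           # keep observed entries fixed
        u = u + x - z                   # dual ascent
    return z
```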
arXiv Detail & Related papers (2022-05-19T08:37:56Z)
- Efficient Cluster-Based k-Nearest-Neighbor Machine Translation [65.69742565855395]
k-Nearest-Neighbor Machine Translation (kNN-MT) has recently been proposed as a non-parametric solution for domain adaptation in neural machine translation (NMT).
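A minimal NumPy sketch of one cluster-based kNN-MT decoding step, assuming a datastore of (hidden state, next token) pairs already grouped by k-means: search is restricted to the cluster nearest the decoder state, and the retrieved distribution is interpolated with the model's softmax. All function and parameter names are hypothetical.

```python
import numpy as np

def knn_mt_step(h, keys, values, labels, centroids, model_probs,
                k=8, temp=10.0, lam=0.5):
    """One decoding step: h is the decoder state, keys/values the datastore,
    labels/centroids a k-means clustering of the keys."""
    c = np.argmin(((centroids - h) ** 2).sum(axis=1))   # nearest cluster
    member = np.flatnonzero(labels == c)                # search only there
    d2 = ((keys[member] - h) ** 2).sum(axis=1)
    near = member[np.argsort(d2)[:k]]                   # k neighbours in cluster
    w = np.exp(-((keys[near] - h) ** 2).sum(axis=1) / temp)
    w /= w.sum()
    knn_probs = np.zeros_like(model_probs)
    np.add.at(knn_probs, values[near], w)               # scatter onto vocabulary
    return lam * knn_probs + (1.0 - lam) * model_probs
```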
arXiv Detail & Related papers (2022-04-13T05:46:31Z)
- Weight Vector Tuning and Asymptotic Analysis of Binary Linear Classifiers [82.5915112474988]
This paper proposes weight vector tuning of a generic binary linear classifier through the parameterization of a decomposition of the discriminant by a scalar.
It is also found that weight vector tuning significantly improves the performance of Linear Discriminant Analysis (LDA) under high estimation noise.
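As a hedged illustration of scalar weight-vector tuning (the paper's exact decomposition of the discriminant differs), the sketch below interpolates between the LDA direction S^{-1}(mu1 - mu0) and the plain mean-difference direction, selecting the scalar on a validation set.

```python
import numpy as np

def tuned_linear_classifier(X0, X1, alphas, X_val, y_val):
    """One plausible scalar family: w(a) = a * w_LDA + (1 - a) * (mu1 - mu0),
    with a chosen by validation accuracy."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    delta = mu1 - mu0
    S = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T)   # pooled covariance
    w_lda = np.linalg.solve(S, delta)
    best = None
    for a in alphas:
        w = a * w_lda + (1.0 - a) * delta           # scalar-tuned family
        b = -w @ (mu0 + mu1) / 2.0                  # midpoint threshold
        acc = ((X_val @ w + b > 0).astype(int) == y_val).mean()
        if best is None or acc > best[0]:
            best = (acc, a, w, b)
    return best
```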
arXiv Detail & Related papers (2021-10-01T17:50:46Z)
- Tensor Full Feature Measure and Its Nonconvex Relaxation Applications to Tensor Recovery [1.8899300124593645]
We propose a new tensor sparsity measure called the Full Feature Measure (FFM).
It can simultaneously describe the feature information of each dimension and connect the Tucker rank with the tensor tube rank.
Two efficient models based on FFM are proposed, and two alternating direction method of multipliers (ADMM) algorithms are developed to solve them.
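As loose context for the kind of per-dimension measure involved, the sketch below computes a weighted sum of nuclear norms of the mode unfoldings, a standard convex surrogate for the Tucker rank; FFM itself is a nonconvex refinement and is not reproduced here.

```python
import numpy as np

def unfold(x, mode):
    """Mode-n unfolding: move the mode to the front and flatten the rest."""
    return np.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)

def sum_nuclear(x, weights=None):
    """Weighted sum of nuclear norms over all mode unfoldings."""
    d = x.ndim
    weights = np.ones(d) / d if weights is None else weights
    return sum(w * np.linalg.svd(unfold(x, m), compute_uv=False).sum()
               for m, w in zip(range(d), weights))
```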
arXiv Detail & Related papers (2021-09-25T01:44:34Z)
- MLCTR: A Fast Scalable Coupled Tensor Completion Based on Multi-Layer Non-Linear Matrix Factorization [3.6978630614152013]
This paper focuses on the embedding-learning aspect of the tensor completion problem and proposes a new multi-layer neural network architecture for factorization and completion (MLCTR).
The architecture has several advantages: a series of low-rank matrix factorization building blocks to minimize overfitting, interleaved transfer functions in each layer for non-linearity, and by-pass connections to reduce the diminishing-gradient problem and allow deeper networks.
Our algorithm is highly efficient for imputing missing values in the EPS data.
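A hedged PyTorch sketch of an MLCTR-style completion network: one embedding table per mode, stacked low-rank (bottleneck) blocks with nonlinearities, and by-pass connections. Depth, widths, and names are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MLCTRSketch(nn.Module):
    """Predict a tensor entry from the embeddings of its mode indices."""
    def __init__(self, dims, emb=16, rank=8, depth=3):
        super().__init__()
        self.emb = nn.ModuleList(nn.Embedding(n, emb) for n in dims)
        d_in = emb * len(dims)
        # each block is a low-rank factorization W ~ B A with a nonlinearity
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, rank), nn.ReLU(),
                          nn.Linear(rank, d_in)) for _ in range(depth))
        self.out = nn.Linear(d_in, 1)

    def forward(self, idx):                       # idx: (batch, n_modes) longs
        h = torch.cat([e(idx[:, m]) for m, e in enumerate(self.emb)], dim=-1)
        for blk in self.blocks:
            h = h + blk(h)                        # by-pass connection
        return self.out(h).squeeze(-1)            # predicted tensor entry
```

Training would minimize squared error on observed entries only, e.g. `loss = ((model(idx) - vals) ** 2).mean()` over minibatches of observed index tuples.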
arXiv Detail & Related papers (2021-09-04T03:08:34Z)
- Semi-orthogonal Embedding for Efficient Unsupervised Anomaly Segmentation [6.135577623169028]
We generalize an ad-hoc method, random feature selection, into a semi-orthogonal embedding for robust approximation.
Supported by ablation studies, the proposed method achieves a new state of the art by significant margins on the MVTec AD, KolektorSDD, KolektorSDD2, and mSTC datasets.
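A NumPy sketch of the core idea, assuming pre-extracted CNN feature maps: draw a semi-orthogonal projection via QR of a Gaussian matrix (instead of ad-hoc channel selection), project the features, and score test locations by Mahalanobis distance. Shapes and names are illustrative.

```python
import numpy as np

def semi_orthogonal(d, d_proj, seed=0):
    """Semi-orthogonal projection (W^T W = I) via QR of a Gaussian draw."""
    g = np.random.default_rng(seed).standard_normal((d, d_proj))
    q, _ = np.linalg.qr(g)
    return q                                       # shape (d, d_proj)

def anomaly_scores(train_feats, test_feats, d_proj=64):
    """Per-location Mahalanobis scores on projected features,
    for feature maps of shape (n_images, h, w, channels)."""
    n, h, w, d = train_feats.shape
    W = semi_orthogonal(d, d_proj)
    tr, te = train_feats @ W, test_feats @ W       # project channels
    scores = np.empty(te.shape[:3])
    for i in range(h):
        for j in range(w):
            mu = tr[:, i, j].mean(axis=0)
            cov = np.cov(tr[:, i, j].T) + 1e-3 * np.eye(d_proj)
            prec = np.linalg.inv(cov)
            diff = te[:, i, j] - mu
            scores[:, i, j] = np.einsum('bi,ij,bj->b', diff, prec, diff)
    return scores
```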
arXiv Detail & Related papers (2021-05-31T07:02:20Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes the Canonical Polyadic (CP) decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
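The key computational point, sketched below in NumPy, is that the inner product with a rank-R CP weight tensor never materializes that tensor or vectorizes the input; the forward pass shown is a hypothetical illustration.

```python
import numpy as np

def cp_inner(x, factors):
    """<W, X> where W = sum_r a_r (x) b_r (x) c_r is a rank-R CP weight
    tensor for a 3-way input X; contract mode by mode instead of forming W."""
    a, b, c = factors                              # (n1,R), (n2,R), (n3,R)
    t = np.einsum('ijk,ir->rjk', x, a)             # contract mode 1
    t = np.einsum('rjk,jr->rk', t, b)              # contract mode 2
    return np.einsum('rk,kr->', t, c)              # contract mode 3, sum ranks

def rank_r_fnn_forward(x, hidden_factors, w_out):
    """One hidden layer of CP-constrained units, then a linear output."""
    h = np.tanh(np.array([cp_inner(x, f) for f in hidden_factors]))
    return w_out @ h
```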
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial setting.
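A NumPy sketch of the TT-matrix idea underlying such parameterizations: a dense weight matrix is stored as a chain of small 4-way cores, and spectral properties can be influenced core by core. The normalization shown is a crude illustration, not the paper's scheme.

```python
import numpy as np

def tt_matrix_full(cores):
    """Rebuild the dense matrix from TT-matrix cores of shape
    (r_prev, m_k, n_k, r_next); the layer stores only the cores."""
    full = cores[0]
    for g in cores[1:]:
        full = np.einsum('imnr,rpqs->impnqs', full, g)
        i, m, p, n, q, s = full.shape
        full = full.reshape(i, m * p, n * q, s)
    return full[0, :, :, 0]

rng = np.random.default_rng(0)
cores = [rng.standard_normal((1, 4, 4, 3)),
         rng.standard_normal((3, 4, 4, 1))]
# crude spectral control: normalizing each core bounds the product's norm
cores = [g / np.linalg.norm(g.reshape(g.shape[0] * g.shape[1], -1), 2)
         for g in cores]
W = tt_matrix_full(cores)        # a 16 x 16 dense matrix from two small cores
```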
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks, but explicitly parameterizing all interactions leads to an exponential blow-up in model parameters.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
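For degree-2 interactions this reduces to the familiar factorization-machine form, sketched below in NumPy: the d-by-d interaction matrix is represented by rank-R factors and never formed explicitly. Names and sizes are illustrative.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Degree-2 interaction model with low-rank (CP-style) weights:
    sum_{i<j} <v_i, v_j> x_i x_j computed in O(d R) via the FM identity."""
    lin = w0 + x @ w
    inter = 0.5 * (((x @ V) ** 2).sum() - ((x ** 2) @ (V ** 2)).sum())
    return lin + inter

rng = np.random.default_rng(0)
d, R = 10, 3
x = rng.standard_normal(d)
y_hat = fm_predict(x, 0.1, rng.standard_normal(d), rng.standard_normal((d, R)))
```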
arXiv Detail & Related papers (2020-01-27T22:38:40Z)