Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification
- URL: http://arxiv.org/abs/2104.05048v1
- Date: Sun, 11 Apr 2021 16:37:32 GMT
- Title: Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification
- Authors: Konstantinos Makantasis, Alexandros Georgogiannis, Athanasios
Voulodimos, Ioannis Georgoulas, Anastasios Doulamis, Nikolaos Doulamis
- Abstract summary: Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
- Score: 69.26747803963907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An increasing number of emerging applications in data science and engineering
are based on multidimensional and structurally rich data. The irregularities,
however, of high-dimensional data often compromise the effectiveness of
standard machine learning algorithms. We hereby propose the Rank-R Feedforward
Neural Network (FNN), a tensor-based nonlinear learning model that imposes
Canonical/Polyadic decomposition on its parameters, thereby offering two core
advantages compared to typical machine learning methods. First, it handles
inputs as multilinear arrays, bypassing the need for vectorization, and can
thus fully exploit the structural information along every data dimension.
Moreover, the number of the model's trainable parameters is substantially
reduced, making it very efficient for small sample setting problems. We
establish the universal approximation and learnability properties of Rank-R
FNN, and we validate its performance on real-world hyperspectral datasets.
Experimental evaluations show that Rank-R FNN is a computationally inexpensive
alternative to the ordinary FNN that achieves state-of-the-art performance on
higher-order tensor data.
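To make the parameterization concrete, below is a minimal NumPy sketch of a single rank-R hidden unit for a 2-way (matrix) input. The sizes, the single-unit structure, and the sigmoid nonlinearity are illustrative assumptions, not the authors' exact architecture; the point is only that the weight tensor is never materialized, since it is constrained to a CP form whose factor vectors are the sole trainable parameters.

```python
import numpy as np

# Minimal sketch: one rank-R CP-constrained hidden unit for a 2-way input
# X of shape (I1, I2). A dense weight would need I1 * I2 parameters; the
# CP-constrained weight W = sum_r a_r (outer) b_r needs only R * (I1 + I2).

rng = np.random.default_rng(0)
I1, I2, R = 9, 176, 3                     # illustrative sizes (e.g. pixels x spectral bands)

A = 0.1 * rng.standard_normal((I1, R))    # mode-1 factor vectors a_r (columns)
B = 0.1 * rng.standard_normal((I2, R))    # mode-2 factor vectors b_r (columns)
bias = 0.0

def hidden_unit(X):
    """Sigmoid of <W, X> with W = sum_r a_r outer b_r, computed without forming W."""
    z = sum(A[:, r] @ X @ B[:, r] for r in range(R)) + bias
    return 1.0 / (1.0 + np.exp(-z))

X = rng.standard_normal((I1, I2))         # one multilinear (matrix-valued) sample
print("activation:", hidden_unit(X))
print("dense params:", I1 * I2, "vs. CP params:", R * (I1 + I2))
```

Stacking several such units and feeding their outputs to an output layer yields a tensor-input FNN whose parameter count grows additively, rather than multiplicatively, in the mode dimensions, which is the source of the small-sample efficiency claimed in the abstract.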
Related papers
- Qudit Machine Learning [0.0]
We present a comprehensive investigation into the learning capabilities of a simple d-level system (qudit)
Our study is specialized for classification tasks using real-world databases, specifically the Iris, breast cancer, and MNIST datasets.
arXiv Detail & Related papers (2023-08-30T18:00:04Z) - Multiclass classification for multidimensional functional data through
deep neural networks [0.22843885788439797]
We introduce a novel functional deep neural network (mfDNN) as an innovative data mining classification tool.
We consider a sparse deep neural network architecture with the rectified linear unit (ReLU) activation function and minimize the cross-entropy loss in the multiclass classification setup.
We demonstrate the performance of mfDNN on simulated data and several benchmark datasets from different application domains.
arXiv Detail & Related papers (2023-05-22T16:56:01Z) - Online Evolutionary Neural Architecture Search for Multivariate
Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases defined according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions for data imputation by integrating the alternating direction method of multipliers (ADMM).
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - Deep Neural Network Classifier for Multi-dimensional Functional Data [4.340040784481499]
We propose a new approach, called functional deep neural network (FDNN), for classifying multi-dimensional functional data.
Specifically, a deep neural network is trained on the principal components of the training data, which are then used to predict the class label of a future data function (a minimal sketch of this recipe appears after this list).
arXiv Detail & Related papers (2022-05-17T19:22:48Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - An Online Learning Algorithm for a Neuro-Fuzzy Classifier with
Mixed-Attribute Data [9.061408029414455]
General fuzzy min-max neural network (GFMMNN) is one of the efficient neuro-fuzzy systems for data classification.
This paper proposes an extended online learning algorithm for the GFMMNN.
The proposed method can handle the datasets with both continuous and categorical features.
arXiv Detail & Related papers (2020-09-30T13:45:36Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - A Neural Network Approach for Online Nonlinear Neyman-Pearson
Classification [3.6144103736375857]
We propose a novel Neyman-Pearson (NP) classifier that is both online and nonlinear, the first of its kind in the literature.
The proposed classifier operates on a binary labeled data stream in an online manner, and maximizes the detection power subject to a user-specified and controllable false positive rate.
Our algorithm is appropriate for large-scale data applications and provides decent false positive rate controllability with real-time processing.
arXiv Detail & Related papers (2020-06-14T20:00:25Z)
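The FDNN entry above describes a simple recipe: project each (discretized) functional observation onto its leading principal components and train a deep network on the resulting scores. The following scikit-learn sketch illustrates that general recipe; the synthetic data, the number of components, and the network sizes are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Placeholder data: 200 "functions" observed on a 100-point grid, 3 classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 100))
y = rng.integers(0, 3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Project each discretized function onto its leading principal components,
# then classify the scores with a small ReLU network.
clf = make_pipeline(
    PCA(n_components=10),
    MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                  max_iter=1000, random_state=0),
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

MLPClassifier minimizes the multiclass cross-entropy (log-loss), so the same sketch also mirrors the training setup described in the mfDNN entry above.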