Multiclass classification for multidimensional functional data through
deep neural networks
- URL: http://arxiv.org/abs/2305.13349v2
- Date: Wed, 24 May 2023 03:02:42 GMT
- Title: Multiclass classification for multidimensional functional data through
deep neural networks
- Authors: Shuoyang Wang, Guanqun Cao
- Abstract summary: We introduce a novel functional deep neural network (mfDNN) as an innovative data mining classification tool.
We consider a sparse deep neural network architecture with the rectified linear unit (ReLU) activation function and minimize the cross-entropy loss in the multiclass classification setup.
We demonstrate the performance of mfDNN on simulated data and several benchmark datasets from different application domains.
- Score: 0.22843885788439797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The intrinsically infinite-dimensional features of the functional
observations over multidimensional domains render the standard classification
methods effectively inapplicable. To address this problem, we introduce a novel
multiclass functional deep neural network (mfDNN) classifier as an innovative
data mining and classification tool. Specifically, we consider a sparse deep
neural network architecture with the rectified linear unit (ReLU) activation
function and minimize the cross-entropy loss in the multiclass classification
setup. This neural network architecture allows us to employ modern
computational tools in the implementation. The convergence rates of the
misclassification risk functions are also derived for both fully observed and
discretely observed multidimensional functional data. We demonstrate the
performance of mfDNN on simulated data and several benchmark datasets from
different application domains.
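As a rough illustration of the recipe the abstract describes (reduce each functional observation to finitely many scores, then fit a sparse ReLU network by minimizing cross-entropy), the sketch below uses a plain SVD projection and an L1 penalty as stand-ins for the paper's projection step and sparsity mechanism; these choices and all names are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a ReLU network trained with cross-entropy on
# finite-dimensional scores extracted from functional observations. The
# projection and the L1 surrogate for sparsity are assumptions.
class MFDNNSketch(nn.Module):
    def __init__(self, n_scores, n_classes, width=64, depth=3):
        super().__init__()
        layers, d = [], n_scores
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, n_classes))  # logits; softmax lives in the loss
        self.net = nn.Sequential(*layers)

    def forward(self, scores):
        return self.net(scores)

def project_to_scores(curves, n_scores):
    """Placeholder projection: curves of shape (n, *grid) are flattened and
    reduced to their leading principal-component scores via an SVD."""
    X = curves.reshape(curves.shape[0], -1)
    X = X - X.mean(dim=0)
    _, _, Vh = torch.linalg.svd(X, full_matrices=False)
    return X @ Vh[:n_scores].T

def train_step(model, optimizer, scores, labels, l1_weight=1e-4):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(scores), labels)
    # Crude stand-in for a sparse architecture: L1 penalty on all weights.
    loss = loss + l1_weight * sum(p.abs().sum() for p in model.parameters())
    loss.backward()
    optimizer.step()
    return loss.item()
```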
Related papers
- Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference [67.36605226797887]
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD).
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
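A hedged sketch of what "class-aware query embeddings for the transformer decoder" could look like; all module sizes and names are illustrative assumptions, not MINT-AD's actual design.

```python
import torch
import torch.nn as nn

# Illustrative only: class-conditioned queries for a transformer decoder.
# Layer sizes and the decoder configuration are assumptions.
class ClassAwareQueries(nn.Module):
    def __init__(self, n_classes=10, n_queries=16, d_model=128):
        super().__init__()
        # One bank of learnable query vectors per class.
        self.query_bank = nn.Embedding(n_classes, n_queries * d_model)
        self.n_queries, self.d_model = n_queries, d_model
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)

    def forward(self, class_ids, encoder_feats):
        # class_ids: (B,), encoder_feats: (B, L, d_model)
        q = self.query_bank(class_ids).view(-1, self.n_queries, self.d_model)
        return self.decoder(tgt=q, memory=encoder_feats)
```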
arXiv Detail & Related papers (2024-03-21T08:08:31Z)
- Hidden Classification Layers: Enhancing linear separability between classes in neural networks layers [0.0]
We investigate the impact of a training approach on deep network performance.
We propose a neural network architecture which induces an error function involving the outputs of all the network layers.
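One minimal way to realize "an error function involving the outputs of all the network layers" is to attach a linear classifier head to every hidden layer and sum the per-layer cross-entropies; the sketch below is an assumption-laden illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Illustrative: auxiliary classifier per hidden layer; the training loss
# involves every layer's output. Head design and weighting are assumptions.
class DeeplySupervisedMLP(nn.Module):
    def __init__(self, d_in=32, widths=(64, 64, 64), n_classes=5):
        super().__init__()
        self.blocks, self.heads, d = nn.ModuleList(), nn.ModuleList(), d_in
        for w in widths:
            self.blocks.append(nn.Sequential(nn.Linear(d, w), nn.ReLU()))
            self.heads.append(nn.Linear(w, n_classes))  # per-layer classifier
            d = w

    def forward(self, x):
        logits = []
        for block, head in zip(self.blocks, self.heads):
            x = block(x)
            logits.append(head(x))
        return logits  # one logit tensor per hidden layer

def multi_layer_loss(logits_per_layer, labels):
    # Sum of cross-entropies over all layers; the last head is the predictor.
    return sum(nn.functional.cross_entropy(z, labels) for z in logits_per_layer)
```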
arXiv Detail & Related papers (2023-06-09T10:52:49Z)
- Deep Neural Network Classifier for Multi-dimensional Functional Data [4.340040784481499]
We propose a new approach, called functional deep neural network (FDNN), for classifying multi-dimensional functional data.
Specifically, a deep neural network is trained on the principal components of the training data, which are then used to predict the class label of a new functional observation.
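A minimal sketch of this recipe, assuming a plain SVD stands in for the paper's principal-component step; all names are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative: fit principal components on training curves, train a small
# ReLU network on the scores, then score a new curve with the SAME components.
def fit_pca(train_curves, k):
    X = train_curves.reshape(train_curves.shape[0], -1)
    mean = X.mean(dim=0)
    _, _, Vh = torch.linalg.svd(X - mean, full_matrices=False)
    return mean, Vh[:k]                      # components reused at test time

def scores(curves, mean, components):
    X = curves.reshape(curves.shape[0], -1)
    return (X - mean) @ components.T

# Usage sketch (k scores in, n_classes logits out):
# classifier = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n_classes))
# label = classifier(scores(new_curve[None], mean, components)).argmax(dim=1)
```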
arXiv Detail & Related papers (2022-05-17T19:22:48Z)
- On Feature Learning in Neural Networks with Global Convergence Guarantees [49.870593940818715]
We study the optimization of wide neural networks (NNs) via gradient flow (GF).
We show that when the input dimension is no less than the size of the training set, the training loss converges to zero at a linear rate under GF.
We also show empirically that, unlike in the Neural Tangent Kernel (NTK) regime, our multi-layer model exhibits feature learning and can achieve better generalization performance than its NTK counterpart.
arXiv Detail & Related papers (2022-04-22T15:56:43Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- A novel multi-scale loss function for classification problems in machine learning [0.0]
We introduce two-scale loss functions for use in various gradient descent algorithms applied to classification problems via deep neural networks.
These two-scale loss functions allow the training to focus on objects in the training set that are not yet well classified.
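One hedged way to realize "focus the training on poorly classified objects" is a per-example loss that adds extra weight whenever the true-class probability is low; the threshold and weighting below are assumptions, not the paper's exact two-scale construction.

```python
import torch
import torch.nn.functional as F

# Illustrative two-scale-style loss: ordinary cross-entropy everywhere, plus
# an extra penalty on examples whose true-class probability falls below a
# threshold. Threshold and weight are assumed values.
def two_scale_loss(logits, labels, threshold=0.5, extra_weight=4.0):
    ce = F.cross_entropy(logits, labels, reduction="none")      # per-example
    p_true = F.softmax(logits, dim=1).gather(1, labels[:, None]).squeeze(1)
    hard = (p_true < threshold).float()         # poorly classified examples
    return (ce + extra_weight * hard * ce).mean()
```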
arXiv Detail & Related papers (2021-06-04T19:11:11Z)
- Non-asymptotic Excess Risk Bounds for Classification with Deep Convolutional Neural Networks [6.051520664893158]
We consider the problem of binary classification with a class of general deep convolutional neural networks.
We define the prefactors of the risk bounds in terms of the input data dimension and other model parameters.
We show that the classification methods with CNNs can circumvent the curse of dimensionality.
arXiv Detail & Related papers (2021-05-01T15:55:04Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
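A minimal sketch of a CP-constrained (rank-R) layer for 3-way inputs, illustrating how the weight tensor is kept in factored form so inputs are never vectorized; hidden width, rank, and initialization are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative: for inputs X of shape (B, I, J, K), each hidden unit's weight
# tensor is a sum of R rank-one terms a_r ∘ b_r ∘ c_r and is never
# materialized; the inner product is computed factor-by-factor.
class RankRLayer(nn.Module):
    def __init__(self, dims=(8, 8, 5), rank=3, hidden=16):
        super().__init__()
        I, J, K = dims
        self.a = nn.Parameter(torch.randn(hidden, rank, I) * 0.1)
        self.b = nn.Parameter(torch.randn(hidden, rank, J) * 0.1)
        self.c = nn.Parameter(torch.randn(hidden, rank, K) * 0.1)

    def forward(self, x):                       # x: (B, I, J, K)
        # <Σ_r a_r∘b_r∘c_r, X> for every hidden unit, in one einsum.
        z = torch.einsum("bijk,hri,hrj,hrk->bh", x, self.a, self.b, self.c)
        return torch.relu(z)
```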
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction of the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than in other baseline feature map constructions while achieving comparable error bounds, both in theory and in practice.
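For intuition only, a naive Monte Carlo feature map for the two-layer ReLU NTK is sketched below; its feature dimension blows up as d*m, which is exactly the kind of baseline cost the paper's more compact construction avoids. This is not the paper's method.

```python
import numpy as np

# Naive random features for the TWO-LAYER ReLU NTK,
#   Θ(x, x') = E[σ(w·x)σ(w·x')] + (x·x')·E[σ'(w·x)σ'(w·x')],  w ~ N(0, I),
# where σ is ReLU. Inner products of these features Monte Carlo-estimate Θ.
def ntk_random_features(X, m=2048, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, m))
    Z = X @ W
    relu = np.maximum(Z, 0.0) / np.sqrt(m)           # estimates E[σσ]
    step = (Z > 0.0).astype(X.dtype) / np.sqrt(m)    # estimates E[σ'σ']
    # (x·x')·<step(x), step(x')> equals <x ⊗ step(x), x' ⊗ step(x')>.
    cross = (X[:, :, None] * step[:, None, :]).reshape(n, d * m)
    return np.concatenate([relu, cross], axis=1)     # dimension m + d*m
```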
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)