Non-linear Functional Modeling using Neural Networks
- URL: http://arxiv.org/abs/2104.09371v2
- Date: Thu, 4 May 2023 00:24:14 GMT
- Title: Non-linear Functional Modeling using Neural Networks
- Authors: Aniruddha Rajendra Rao and Matthew Reimherr
- Abstract summary: We introduce a new class of non-linear models for functional data based on neural networks.
We propose two variations of our framework: a functional neural network with continuous hidden layers, and a second version that utilizes basis expansions and continuous hidden layers.
- Score: 6.624726878647541
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new class of non-linear models for functional data based on
neural networks. Deep learning has been very successful in non-linear modeling,
but there has been little work done in the functional data setting. We propose
two variations of our framework: a functional neural network with continuous
hidden layers, called the Functional Direct Neural Network (FDNN), and a second
version that utilizes basis expansions and continuous hidden layers, called the
Functional Basis Neural Network (FBNN). Both are designed explicitly to exploit
the structure inherent in functional data. To fit these models we derive a
functional gradient-based optimization algorithm. The effectiveness of the
proposed methods in handling complex functional models is demonstrated by
comprehensive simulation studies and real data examples.
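As a rough illustration of the basis-expansion variant (FBNN), the hypothetical Python sketch below projects curves observed on a grid onto a Fourier basis, lets an ordinary feed-forward network map input coefficients to output coefficients, and reconstructs the predicted curves; the basis choice, layer sizes, simulated data, and training loop are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a basis-expansion functional network (FBNN flavour).
# Assumed: a Fourier basis, a small MLP on coefficients, simulated data.
import numpy as np
import torch
import torch.nn as nn

def fourier_basis(grid, n_basis):
    """Evaluate a simple Fourier basis (1, sin, cos, ...) on the grid."""
    cols = [np.ones_like(grid)]
    for k in range(1, (n_basis + 1) // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * grid))
        cols.append(np.cos(2 * np.pi * k * grid))
    return np.stack(cols[:n_basis], axis=1)              # (len(grid), n_basis)

grid = np.linspace(0.0, 1.0, 101)
B = fourier_basis(grid, n_basis=7)                        # basis matrix
P = np.linalg.pinv(B)                                     # least-squares projector

# Simulated functional data: the response curve is a non-linear transform of X(t).
X = np.random.randn(200, 5) @ fourier_basis(grid, 5).T   # 200 curves on the grid
Y = np.sin(X) + 0.1 * np.random.randn(*X.shape)

x_coef = torch.tensor(X @ P.T, dtype=torch.float32)      # project curves onto coefficients
y_coef = torch.tensor(Y @ P.T, dtype=torch.float32)

net = nn.Sequential(nn.Linear(7, 32), nn.ReLU(), nn.Linear(32, 7))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):                                      # ordinary gradient-based fitting
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_coef), y_coef)
    loss.backward()
    opt.step()

Y_hat = net(x_coef).detach().numpy() @ B.T                # reconstruct predicted curves
```

The FDNN variant described in the abstract instead keeps the hidden layers continuous, acting on the functions themselves (roughly, through learned integral weights) rather than on a fixed set of basis coefficients; the coefficient mapping above is only the simplest way to make the basis-expansion idea concrete.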
Related papers
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Functional Neural Networks: Shift invariant models for functional data with applications to EEG classification [0.0]
We introduce a new class of neural networks that are shift invariant and preserve smoothness of the data: functional neural networks (FNNs).
For this, we use methods from functional data analysis (FDA) to extend multi-layer perceptrons and convolutional neural networks to functional data.
We show that the models outperform a benchmark model from FDA in terms of accuracy and successfully use FNNs to classify electroencephalography (EEG) data.
arXiv Detail & Related papers (2023-01-14T09:41:21Z)
- Deep Neural Network Classifier for Multi-dimensional Functional Data [4.340040784481499]
We propose a new approach, called the functional deep neural network (FDNN), for classifying multi-dimensional functional data.
Specifically, a deep neural network is trained on the principal components of the training data, which are then used to predict the class label of a future data function.
arXiv Detail & Related papers (2022-05-17T19:22:48Z)
- Modern Non-Linear Function-on-Function Regression [8.231050911072755]
We introduce a new class of non-linear function-on-function regression models for functional data using neural networks.
We give two model fitting strategies, the Functional Direct Neural Network (FDNN) and the Functional Basis Neural Network (FBNN).
arXiv Detail & Related papers (2021-07-29T16:19:59Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- Data-Driven Learning of Feedforward Neural Networks with Different Activation Functions [0.0]
This work contributes to the development of a new data-driven method (D-DM) for learning feedforward neural networks (FNNs).
arXiv Detail & Related papers (2021-07-04T18:20:27Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Neural Network Structure Design based on N-Gauss Activation Function [0.2578242050187029]
We introduce neural network structure designs based on the core blocks N-Gauss, N-Gauss, and Swish, trained on MNIST, CIFAR10, and CIFAR100, respectively.
N-Gauss exploits the nonlinear modeling role of the activation function, giving deep convolutional neural networks hierarchical nonlinear mapping capabilities.
arXiv Detail & Related papers (2021-06-01T11:16:37Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets (a minimal sketch of such a rank-constrained layer follows this list).
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation function.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L_1$ and $L_2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
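For the Rank-R FNN entry above, the hypothetical sketch below shows one way a rank-constrained layer can act on matrix-valued inputs: each hidden unit's weight is restricted to a rank-R Canonical/Polyadic form, so the unit computes a sum of bilinear terms a_r^T X b_r without flattening X. The shapes, rank, initialization, and classifier head are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a CP/rank-constrained layer on matrix-valued inputs.
import torch
import torch.nn as nn

class CPLayer(nn.Module):
    def __init__(self, d1, d2, hidden, rank):
        super().__init__()
        # CP factors: a (rank, d1) and a (rank, d2) factor matrix per hidden unit.
        self.A = nn.Parameter(0.1 * torch.randn(hidden, rank, d1))
        self.B = nn.Parameter(0.1 * torch.randn(hidden, rank, d2))
        self.bias = nn.Parameter(torch.zeros(hidden))

    def forward(self, X):                       # X: (batch, d1, d2), not vectorized
        # For each hidden unit h: sum_r  a_{h,r}^T X b_{h,r}
        z = torch.einsum('hri,bij,hrj->bh', self.A, X, self.B)
        return torch.sigmoid(z + self.bias)

model = nn.Sequential(CPLayer(d1=8, d2=12, hidden=16, rank=3), nn.Linear(16, 2))
logits = model(torch.randn(4, 8, 12))            # (4, 2) class scores for 4 samples
```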
This list is automatically generated from the titles and abstracts of the papers on this site.