FuncNN: An R Package to Fit Deep Neural Networks Using Generalized Input Spaces
- URL: http://arxiv.org/abs/2009.09111v2
- Date: Tue, 22 Sep 2020 04:41:36 GMT
- Title: FuncNN: An R Package to Fit Deep Neural Networks Using Generalized Input Spaces
- Authors: Barinder Thind, Sidi Wu, Richard Groenewald, Jiguo Cao
- Abstract summary: The functional neural network (FuncNN) library is the first package in any programming language for fitting deep neural networks with functional covariates.
This paper introduces functions that provide users an avenue to easily build models, generate predictions, and run cross-validations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks have excelled at regression and classification problems when
the input space consists of scalar variables. As a result of this proficiency,
several popular packages have been developed that allow users to easily fit
these kinds of models. However, the methodology has excluded the use of
functional covariates and to date, there exists no software that allows users
to build deep learning models with this generalized input space. To the best of
our knowledge, the functional neural network (FuncNN) library is the first such
package in any programming language; the library has been developed for R and
is built on top of the keras architecture. Throughout this paper, several
functions are introduced that provide users an avenue to easily build models,
generate predictions, and run cross-validations. A summary of the underlying
methodology is also presented. The ultimate contribution is a package that
provides a set of general modelling and diagnostic tools for data problems in
which there exist both functional and scalar covariates.
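To make the workflow concrete, here is a minimal sketch in R. The functions fnn.fit(), fnn.predict(), and fnn.cv() correspond to the model-building, prediction, and cross-validation roles the abstract describes, but the argument names and expected data layout below are assumptions; consult the package documentation for the exact signatures.

```r
# Minimal sketch (assumptions noted) of the fit / predict / cross-validate
# workflow described in the abstract. Argument names and the data layout
# are illustrative, not the package's documented signatures.
library(FuncNN)

set.seed(1)
grid          <- seq(0, 1, length.out = 50)
curves_train  <- t(replicate(100, sin(2 * pi * grid) + rnorm(50, sd = 0.1)))
scalars_train <- data.frame(z = rnorm(100))
y_train       <- rowMeans(curves_train) + scalars_train$z

# Build a functional neural network from functional + scalar covariates
fit <- fnn.fit(resp = y_train,
               func_cov = curves_train,     # discretized functional covariates
               scalar_cov = scalars_train,  # ordinary scalar covariates
               hidden_layers = 2,
               neurons_per_layer = c(32, 16),
               epochs = 100)

# Generate predictions over the same generalized input space
preds <- fnn.predict(fit, func_cov = curves_train, scalar_cov = scalars_train)

# Run cross-validation (the fold-count argument name is an assumption)
cv_out <- fnn.cv(resp = y_train, func_cov = curves_train,
                 scalar_cov = scalars_train, nfolds = 5)
```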
Related papers
- cito: An R package for training neural networks using torch [0.0]
'cito' is a user-friendly R package for deep learning (DL) applications.
It allows specifying DNNs in the familiar formula syntax used by many R packages.
'cito' also includes many convenience functions for model plotting and analysis.
arXiv Detail & Related papers (2023-03-16T18:54:20Z)
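A small sketch of the formula interface the cito entry above describes; dnn() is the package's main fitting function, but the hidden-layer argument name and the S3 methods shown are assumptions to be checked against the package documentation.

```r
# Sketch of cito's formula-based model specification (argument names assumed)
library(cito)

fit <- dnn(Sepal.Length ~ Sepal.Width + Petal.Length,
           data = iris,
           hidden = c(32L, 32L))  # two hidden layers of 32 units (assumed)

preds <- predict(fit, newdata = iris)  # standard S3 predict method (assumed)
plot(fit)                              # one of the plotting helpers (assumed)
```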
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose, yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Deep Learning for Functional Data Analysis with Adaptive Basis Layers [11.831982475316641]
We introduce neural networks that employ a new Basis Layer whose hidden units are each basis functions themselves implemented as a micro neural network.
Our architecture learns to apply parsimonious dimension reduction to functional inputs that focuses only on information relevant to the target rather than irrelevant variation in the input function.
arXiv Detail & Related papers (2021-06-19T04:05:13Z)
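A toy numerical sketch (my illustration, not the paper's code) of the core idea in the entry above: each hidden unit of the Basis Layer scores the functional input x(t) against a basis function c(t) by approximating the integral of x(t)·c(t); in the paper c(t) is itself a small trainable network, while here it is a fixed stand-in.

```r
# Toy illustration of a single Basis-Layer hidden unit: score a discretized
# functional input against a (here fixed) basis function via a Riemann sum.
basis_unit_score <- function(x_vals, c_vals, grid) {
  dt <- grid[2] - grid[1]      # uniform grid spacing
  sum(x_vals * c_vals) * dt    # approximates the integral of x(t) * c(t) dt
}

grid   <- seq(0, 1, length.out = 101)
x_vals <- sin(2 * pi * grid)     # a discretized functional covariate x(t)
c_vals <- tanh(3 * grid - 1.5)   # stand-in for a micro-network output c(t)
basis_unit_score(x_vals, c_vals, grid)
```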
- Understanding Neural Code Intelligence Through Program Simplification [3.9704927572880253]
We propose a model-agnostic approach to identify critical input features for models in code-intelligence (CI) systems.
Our approach, SIVAND, uses simplification techniques that reduce the size of a CI model's input programs while preserving the model's predictions.
We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior.
arXiv Detail & Related papers (2021-06-07T05:44:29Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical/Polyadic decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
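For reference, the Canonical/Polyadic (CP) decomposition the entry above refers to expresses a high-order weight tensor as a sum of R rank-one terms; the standard third-order form (notation mine) is:

```latex
% Rank-R CP decomposition of a third-order weight tensor (standard form),
% where \circ denotes the vector outer product
\mathcal{W} \;\approx\; \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r,
\qquad \mathbf{a}_r \in \mathbb{R}^{I},\;
       \mathbf{b}_r \in \mathbb{R}^{J},\;
       \mathbf{c}_r \in \mathbb{R}^{K}
```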
- Captum: A unified and generic model interpretability library for PyTorch [49.72749684393332]
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
arXiv Detail & Related papers (2020-09-16T18:57:57Z)
- Deep Learning with Functional Inputs [0.0]
We present a methodology for integrating functional data into feed-forward neural networks.
A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization process.
The model is shown to perform well in a number of contexts including prediction of new data and recovery of the true underlying functional weights.
arXiv Detail & Related papers (2020-06-17T01:23:00Z)
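Schematically (notation mine, consistent with the entry above), a first-hidden-layer neuron in this methodology combines K functional covariates x_k(t) through the functional weights beta_{nk}(t), the "dynamic functional weights" the summary mentions, with ordinary weights on J scalar covariates z_j:

```latex
% First-hidden-layer neuron with functional and scalar inputs (schematic):
% functional covariates enter through integrated functional weights,
% scalar covariates through ordinary weights, before the activation g
v_n = g\!\left( \sum_{k=1}^{K} \int_{\mathcal{T}} \beta_{nk}(t)\, x_k(t)\, dt
      \;+\; \sum_{j=1}^{J} w_{nj}\, z_j \;+\; b_n \right)
```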
- Particle-Gibbs Sampling For Bayesian Feature Allocation Models [77.57285768500225]
Most widely used MCMC strategies rely on an element-wise Gibbs update of the feature allocation matrix.
We first develop a Gibbs sampler that can update an entire row of the feature allocation matrix in a single move; however, this sampler is impractical for models with a large number of features, since a row of K binary features has 2^K possible configurations, so its computational complexity scales exponentially in the number of features.
We then develop a Particle Gibbs sampler that targets the same distribution as the row-wise Gibbs updates, but whose computational complexity grows only linearly in the number of features.
arXiv Detail & Related papers (2020-01-25T22:11:51Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.