FastONN -- Python based open-source GPU implementation for Operational
Neural Networks
- URL: http://arxiv.org/abs/2006.02267v1
- Date: Wed, 3 Jun 2020 13:33:35 GMT
- Title: FastONN -- Python based open-source GPU implementation for Operational
Neural Networks
- Authors: Junaid Malik, Serkan Kiranyaz and Moncef Gabbouj
- Abstract summary: This work introduces a fast GPU-enabled library for training operational neural networks, FastONN.
FastONN is based on a novel vectorized formulation of the operational neurons.
Bundled auxiliary modules offer interfaces for performance tracking and checkpointing across different data partitions and customized metrics.
- Score: 25.838282412957675
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Operational Neural Networks (ONNs) have recently been proposed as a special
class of artificial neural networks for grid-structured data. They enable
heterogeneous non-linear operations to generalize the widely adopted
convolution-based neuron model. This work introduces a fast GPU-enabled library
for training operational neural networks, FastONN, which is based on a novel
vectorized formulation of the operational neurons. Leveraging automatic
reverse-mode differentiation for backpropagation, FastONN enables increased
flexibility with the incorporation of new operator sets and customized gradient
flows. Additionally, bundled auxiliary modules offer interfaces for performance
tracking and checkpointing across different data partitions and customized
metrics.
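For intuition, the sketch below shows one way a vectorized operational neuron could be expressed in PyTorch. It is an illustrative approximation, not FastONN's actual API: the class name `OperationalConv2d` and the specific nodal/pool operator choices are assumptions made for this example. The point it demonstrates is that, once patches are extracted with `unfold`, the multiply/sum/activation triplet of a convolution can be swapped for an arbitrary (nodal, pool, activation) operator set, and reverse-mode automatic differentiation supplies the gradients.

```python
# Minimal sketch of a vectorized operational neuron layer in PyTorch.
# Illustrative only; names and operator sets are assumptions, not FastONN's API.

import torch
import torch.nn as nn
import torch.nn.functional as F


class OperationalConv2d(nn.Module):
    """Convolution-like layer with a configurable (nodal, pool, activation) set."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 nodal="mul", pool="sum", activation=torch.tanh):
        super().__init__()
        self.kernel_size = kernel_size  # assumed odd, for 'same' padding below
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels * kernel_size * kernel_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        # Nodal operators act element-wise on (patch value, weight) pairs.
        nodal_ops = {
            "mul": lambda x, w: x * w,              # recovers ordinary convolution
            "sine": lambda x, w: torch.sin(x * w),  # example non-linear nodal operator
            "exp": lambda x, w: torch.exp(x * w) - 1.0,
        }
        # Pool operators reduce over the receptive-field dimension.
        pool_ops = {
            "sum": lambda z: z.sum(dim=2),
            "max": lambda z: z.max(dim=2).values,
            "median": lambda z: z.median(dim=2).values,
        }
        self.nodal = nodal_ops[nodal]
        self.pool = pool_ops[pool]
        self.activation = activation

    def forward(self, x):
        n, _, h, w = x.shape
        k = self.kernel_size
        pad = k // 2
        # Vectorized patch extraction: (N, C*k*k, L) with L = H*W for 'same' padding.
        patches = F.unfold(x, kernel_size=k, padding=pad)
        patches = patches.unsqueeze(1)                    # (N, 1, C*k*k, L)
        weights = self.weight.unsqueeze(0).unsqueeze(-1)  # (1, O, C*k*k, 1)
        z = self.nodal(patches, weights)                  # (N, O, C*k*k, L)
        z = self.pool(z) + self.bias.view(1, -1, 1)       # (N, O, L)
        z = self.activation(z)
        return z.view(n, -1, h, w)                        # (N, O, H, W)


if __name__ == "__main__":
    layer = OperationalConv2d(3, 8, kernel_size=3, nodal="sine", pool="sum")
    x = torch.randn(2, 3, 32, 32, requires_grad=True)
    y = layer(x)
    y.mean().backward()  # reverse-mode autograd handles the custom operator set
    print(y.shape, layer.weight.grad.shape)
```

Under such a formulation, adding a new operator set amounts to registering another element-wise operation and reduction; customized gradient flows would likewise be handled by autograd, or by a custom `torch.autograd.Function` where closed-form gradients are preferred.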
Related papers
- Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
Sinusoidal neural networks, i.e. networks with sinusoidal activations, have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Neural Network Structure Design based on N-Gauss Activation Function [0.2578242050187029]
We introduce the core block N-Gauss and design N-Gauss- and Swish-based neural network structures to train MNIST, CIFAR10, and CIFAR100, respectively.
N-Gauss fully exploits the nonlinear modeling role of activation functions, giving deep convolutional neural networks hierarchical nonlinear mapping and learning capabilities.
arXiv Detail & Related papers (2021-06-01T11:16:37Z)
- Contextual HyperNetworks for Novel Feature Adaptation [43.49619456740745]
A Contextual HyperNetwork (CHN) generates parameters for extending the base model to a new feature.
At prediction time, the CHN requires only a single forward pass through a neural network, yielding a significant speed-up.
We show that this system obtains improved few-shot learning performance for novel features over existing imputation and meta-learning baselines.
arXiv Detail & Related papers (2021-04-12T23:19:49Z)
- Delay Differential Neural Networks [0.2538209532048866]
We propose a novel model, delay differential neural networks (DDNN), inspired by delay differential equations (DDEs).
For training DDNNs, we provide a memory-efficient adjoint method for computing gradients and back-propagating through the network.
Experiments conducted on synthetic and real-world image classification datasets such as CIFAR10 and CIFAR100 show the effectiveness of the proposed models.
arXiv Detail & Related papers (2020-12-12T12:20:54Z)
- Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture using Feedback-Modulated Delay Loops [0.0]
We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals.
The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
arXiv Detail & Related papers (2020-11-19T21:45:58Z)
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs).
In this study, the focus is on searching for the best possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that constitutes the essential learning theory in biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
- Self-Organized Operational Neural Networks with Generative Neurons [87.32169414230822]
ONNs are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators.
We propose Self-organized ONNs (Self-ONNs) with generative neurons that have the ability to adapt (optimize) the nodal operator of each connection.
arXiv Detail & Related papers (2020-04-24T14:37:56Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.