Volterra Neural Networks (VNNs)
- URL: http://arxiv.org/abs/1910.09616v5
- Date: Thu, 15 Jun 2023 16:06:40 GMT
- Title: Volterra Neural Networks (VNNs)
- Authors: Siddharth Roheda, Hamid Krim
- Abstract summary: We propose a Volterra filter-inspired Network architecture to reduce the complexity of Convolutional Neural Networks.
We show an efficient parallel implementation of this Volterra Neural Network (VNN) along with its remarkable performance.
The proposed approach is evaluated on the UCF-101 and HMDB-51 datasets for action recognition, and is shown to outperform state-of-the-art CNN approaches.
- Score: 24.12314339259243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The importance of inference in Machine Learning (ML) has led to an explosive
number of different proposals in ML, and particularly in Deep Learning. In an
attempt to reduce the complexity of Convolutional Neural Networks, we propose a
Volterra filter-inspired Network architecture. This architecture introduces
controlled non-linearities in the form of interactions between the delayed
input samples of data. We propose a cascaded implementation of Volterra
Filtering so as to significantly reduce the number of parameters required to
carry out the same classification task as that of a conventional Neural
Network. We demonstrate an efficient parallel implementation of this Volterra
Neural Network (VNN), along with its remarkable performance, while retaining a
relatively simple and potentially more tractable structure. Furthermore, we
show a rather sophisticated adaptation of this network to nonlinearly fuse the
RGB (spatial) information and the Optical Flow (temporal) information of a
video sequence for action recognition. The proposed approach is evaluated on
the UCF-101 and HMDB-51 datasets and is shown to outperform state-of-the-art
CNN approaches.
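To make the core idea concrete, below is a minimal sketch of a second-order Volterra layer for 1-D signals in PyTorch. The class name, tap count, and initialization are illustrative assumptions rather than the paper's implementation; the point is that the quadratic kernel supplies the controlled non-linearity through products of delayed input samples, so no activation function is needed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Volterra2Layer(nn.Module):
    """Second-order Volterra layer for 1-D signals (illustrative sketch).

    y[n] = sum_i h1[i] * x[n-i] + sum_{i,j} h2[i,j] * x[n-i] * x[n-j]
    """

    def __init__(self, taps: int):
        super().__init__()
        self.taps = taps
        self.h1 = nn.Parameter(0.1 * torch.randn(taps))        # linear kernel
        self.h2 = nn.Parameter(0.1 * torch.randn(taps, taps))  # quadratic kernel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time). Gather the delayed samples x[n - i], i = 0..taps-1.
        padded = F.pad(x, (self.taps - 1, 0))             # causal zero padding
        delays = padded.unfold(1, self.taps, 1).flip(-1)  # (batch, time, taps)
        linear = delays @ self.h1                         # first-order term
        quadratic = torch.einsum("bti,ij,btj->bt", delays, self.h2, delays)
        return linear + quadratic

layer = Volterra2Layer(taps=5)
out = layer(torch.randn(8, 128))  # shape preserved: (8, 128)
```

Cascading two such layers composes their polynomial orders, reaching fourth-order interactions while each layer stores only small low-order kernels; this is the intuition behind the paper's parameter-reducing cascaded implementation.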
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
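As a side note on the "TC" stream above: converting a 1-D signal into a 2-D temporal-frequency tensor with the Continuous Wavelet Transform can be sketched with PyWavelets. The toy signal, sampling rate, scales, and mother wavelet below are assumptions for illustration, not TCCT-Net's actual settings.

```python
import numpy as np
import pywt

fs = 128                                 # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)

scales = np.arange(1, 65)                # 64 scales -> 64 frequency rows
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# coeffs is a (scales x time) 2-D array, i.e. the temporal-frequency
# "image" that a convolutional stream can then consume.
print(coeffs.shape)                      # (64, 512)
```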
- Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed [33.455302442142994]
Programmable networks have sparked significant research on Intelligent Network Data Plane (INDP), which achieves learning-based traffic analysis at line speed.
Prior art in INDP focuses on deploying tree/forest models on the data plane.
We present BoS to push the boundaries of INDP by enabling Neural Network (NN) driven traffic analysis at line speed.
arXiv Detail & Related papers (2024-03-17T04:59:30Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by simply filtering out "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Ghost-dil-NetVLAD: A Lightweight Neural Network for Visual Place Recognition [3.6249801498927923]
We propose a lightweight, weakly supervised, end-to-end neural network consisting of a front-end perception model called GhostCNN and a learnable VLAD layer as a back-end.
To further enhance the proposed lightweight model, we add dilated convolutions to the Ghost module to obtain features containing more spatial semantic information, improving accuracy.
arXiv Detail & Related papers (2021-12-22T06:05:02Z)
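A hedged sketch of the mechanism in the entry above, in PyTorch: a Ghost-style module whose cheap depth-wise branch is dilated. The structure follows the public GhostNet design with an added dilation argument; names and hyper-parameters are our illustrative assumptions, not the Ghost-dil-NetVLAD code.

```python
import torch
import torch.nn as nn

class DilatedGhostModule(nn.Module):
    """Ghost module with a dilated cheap branch (illustrative sketch)."""

    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        init_ch = out_ch // 2  # assumes out_ch is even
        self.primary = nn.Sequential(                 # ordinary features
            nn.Conv2d(in_ch, init_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(                   # "ghost" features
            # Depth-wise 3x3; padding=dilation keeps the spatial size, and
            # dilation widens the receptive field at no parameter cost.
            nn.Conv2d(init_ch, init_ch, kernel_size=3, padding=dilation,
                      dilation=dilation, groups=init_ch, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)

module = DilatedGhostModule(in_ch=3, out_ch=32)
print(module(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```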
- CondenseNeXt: An Ultra-Efficient Deep Neural Network for Embedded Systems [0.0]
A Convolutional Neural Network (CNN) is a class of Deep Neural Network (DNN) widely used in the analysis of visual images captured by an image sensor.
In this paper, we propose a new variant of deep convolutional neural network architecture to improve the performance of existing CNN architectures for real-time inference on embedded systems.
arXiv Detail & Related papers (2021-12-01T18:20:52Z)
- A novel Deep Neural Network architecture for non-linear system identification [78.69776924618505]
We present a novel Deep Neural Network (DNN) architecture for non-linear system identification.
Inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function).
This architecture allows for automatic complexity selection based solely on available data.
arXiv Detail & Related papers (2021-06-06T10:06:07Z)
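One plausible reading of the fading-memory regularization mentioned above, sketched as an assumption of ours (the paper's actual loss term may differ): penalize weights attached to older input lags more heavily, so the learned system must rely mostly on the recent past.

```python
import torch

def fading_memory_penalty(w: torch.Tensor, decay: float = 0.9) -> torch.Tensor:
    """Hypothetical fading-memory regularizer (illustrative assumption).

    w[:, k] holds the weights applied to the input delayed by k steps.
    The cost of lag k grows like (1 / decay) ** k, pushing weights on
    old lags towards zero so the model's memory fades.
    """
    lags = torch.arange(w.shape[1], dtype=w.dtype, device=w.device)
    lag_cost = (1.0 / decay) ** lags    # 1.00, 1.11, 1.23, ... for decay=0.9
    return (lag_cost * w.pow(2)).sum()  # add this term to the training loss
```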
- Latent Code-Based Fusion: A Volterra Neural Network Approach [21.25021807184103]
We propose a deep structure encoder using the recently introduced Volterra Neural Networks (VNNs).
We show that the proposed approach achieves much-improved sample complexity over a CNN-based auto-encoder, with superb, robust classification performance.
arXiv Detail & Related papers (2021-04-10T18:29:01Z)
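Since both the entry above and the main paper fuse modalities through Volterra-style interactions, here is a hedged sketch of second-order fusion of two feature vectors (say, an RGB code and an optical-flow code). The low-rank factorization of the quadratic cross-kernel and all dimensions are our illustrative assumptions.

```python
import torch
import torch.nn as nn

class VolterraFusion(nn.Module):
    """Second-order fusion of two modality codes (illustrative sketch)."""

    def __init__(self, d_rgb: int, d_flow: int, d_out: int, rank: int = 8):
        super().__init__()
        self.w_rgb = nn.Linear(d_rgb, d_out, bias=False)   # first-order terms
        self.w_flow = nn.Linear(d_flow, d_out, bias=False)
        # Low-rank factorization of the quadratic cross-kernel keeps the
        # parameter count linear in d_rgb + d_flow instead of quadratic.
        self.u = nn.Linear(d_rgb, rank, bias=False)
        self.v = nn.Linear(d_flow, rank, bias=False)
        self.p = nn.Linear(rank, d_out, bias=False)

    def forward(self, x_rgb: torch.Tensor, x_flow: torch.Tensor) -> torch.Tensor:
        first_order = self.w_rgb(x_rgb) + self.w_flow(x_flow)
        # Element-wise product of projections gives rank-constrained
        # x_rgb * x_flow interactions, the nonlinear cross-modal term.
        second_order = self.p(self.u(x_rgb) * self.v(x_flow))
        return first_order + second_order

fuse = VolterraFusion(d_rgb=256, d_flow=256, d_out=128)
z = fuse(torch.randn(4, 256), torch.randn(4, 256))  # (4, 128)
```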
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Attentional Local Contrast Networks for Infrared Small Target Detection [15.882749652217653]
We propose a novel model-driven deep network for infrared small target detection.
We modularize a conventional local contrast measure method as a depth-wise parameterless nonlinear feature refinement layer in an end-to-end network.
We conduct detailed ablation studies with varying network depths to empirically verify the effectiveness and efficiency of each component in our network architecture.
arXiv Detail & Related papers (2020-12-15T19:33:09Z)
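The depth-wise, parameterless refinement layer described above can be sketched as follows; the specific contrast measure is one classical variant chosen by us for illustration, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

def local_contrast_refine(feat: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Parameter-free, depth-wise local-contrast refinement (sketch).

    Each pixel is compared with the mean of its k x k neighborhood,
    independently per channel; small bright targets stand out because
    they exceed their local background.
    """
    # avg_pool2d operates per channel, so the layer is depth-wise and
    # introduces no learnable weights ("parameterless").
    local_mean = F.avg_pool2d(feat, kernel_size=k, stride=1, padding=k // 2)
    # Nonlinear enhancement: squared response over the local background.
    return feat * feat / (local_mean.abs() + 1e-6)

refined = local_contrast_refine(torch.rand(1, 16, 64, 64))  # same shape
```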
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
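The basic primitive behind ANN-to-SNN conversion frameworks like the one above can be sketched as follows: an integrate-and-fire neuron driven for T steps by a constant current approximates a clipped ReLU through its firing rate. The soft-reset rule below is a standard conversion building block, not necessarily the paper's exact layer-wise scheme.

```python
import torch

def if_firing_rate(x: torch.Tensor, T: int = 32, v_th: float = 1.0) -> torch.Tensor:
    """Integrate-and-fire rate coding of an ANN activation (sketch)."""
    v = torch.zeros_like(x)       # membrane potential
    spikes = torch.zeros_like(x)
    for _ in range(T):
        v = v + x                 # inject the pre-activation as current
        fired = (v >= v_th).float()
        spikes = spikes + fired
        v = v - fired * v_th      # soft reset keeps the residual charge
    return spikes / T             # approx. clamp(x / v_th, 0, 1)

print(if_firing_rate(torch.tensor([-0.5, 0.25, 0.5, 2.0])))
# ~ tensor([0.0000, 0.2500, 0.5000, 1.0000])
```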
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)