Event Neural Networks
- URL: http://arxiv.org/abs/2112.00891v1
- Date: Thu, 2 Dec 2021 00:08:48 GMT
- Title: Event Neural Networks
- Authors: Matthew Dutson, Mohit Gupta
- Abstract summary: Event Neural Networks (EvNets) leverage repetition to achieve considerable computation savings for video inference tasks.
We show that it is possible to transform virtually any conventional neural network into an EvNet.
We demonstrate the effectiveness of our method on several state-of-the-art neural networks for both high- and low-level visual processing.
- Score: 13.207573300016277
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Video data is often repetitive; for example, the content of adjacent frames
is usually strongly correlated. Such repetition occurs at multiple levels of
complexity, from low-level pixel values to textures and high-level semantics.
We propose Event Neural Networks (EvNets), a novel class of networks that
leverage this repetition to achieve considerable computation savings for video
inference tasks. A defining characteristic of EvNets is that each neuron has
state variables that provide it with long-term memory, which allows low-cost
inference even in the presence of significant camera motion. We show that it is
possible to transform virtually any conventional neural network into an EvNet. We
demonstrate the effectiveness of our method on several state-of-the-art neural
networks for both high- and low-level visual processing, including pose
recognition, object detection, optical flow, and image enhancement. We observe
up to an order-of-magnitude reduction in computational costs (2-20x) as
compared to conventional networks, with minimal reductions in model accuracy.
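The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of a delta-gated ("event") layer in the linear case, assuming a simple per-input change threshold; the class name, the threshold policy, and the bias-free layer are our illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class EventLinear(nn.Module):
    """Illustrative delta-gated linear layer (not the paper's code).
    Per-neuron state caches the last transmitted input and the running
    output, so a new frame only costs compute where it changed."""

    def __init__(self, in_features, out_features, threshold=1e-2):
        super().__init__()
        # Bias omitted to keep the incremental update exact and simple.
        self.linear = nn.Linear(in_features, out_features, bias=False)
        self.threshold = threshold
        self.register_buffer("x_ref", torch.zeros(in_features))
        self.register_buffer("y_ref", torch.zeros(out_features))

    @torch.no_grad()
    def forward(self, x):
        # Transmit only input changes large enough to matter ("events").
        delta = x - self.x_ref
        events = delta * (delta.abs() > self.threshold)
        # Linearity allows an incremental update of the cached output:
        # y_new = y_ref + W @ events. Unchanged inputs contribute zeros,
        # which sparsity-aware hardware can skip entirely.
        self.y_ref += events @ self.linear.weight.t()
        self.x_ref += events
        return self.y_ref.clone()
```

Because the state buffers persist across frames, a mostly static scene produces mostly zero events and correspondingly little work, which is the source of the reported 2-20x savings.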
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
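As a rough illustration of the vector-quantized auto-decoder idea, here is a feature grid whose cells store codebook indices rather than full vectors; the straight-through trick, sizes, and names are our assumptions, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQFeatureGrid(nn.Module):
    """Illustrative compressed feature grid (not the paper's code): each
    cell indexes a small learned codebook instead of storing a vector."""

    def __init__(self, grid_size=4096, codebook_size=64, feat_dim=16):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, feat_dim))
        # Learnable soft assignments; at inference they collapse to hard
        # integer indices, which is where the memory savings come from.
        self.logits = nn.Parameter(torch.zeros(grid_size, codebook_size))

    def forward(self):
        if self.training:
            # Straight-through: hard lookup forward, soft gradient backward.
            soft = self.logits.softmax(dim=-1)
            hard = F.one_hot(soft.argmax(dim=-1), soft.shape[-1]).float()
            assign = hard + soft - soft.detach()
        else:
            assign = F.one_hot(
                self.logits.argmax(dim=-1), self.logits.shape[-1]
            ).float()
        return assign @ self.codebook  # (grid_size, feat_dim) features
```

Storing a 6-bit index per cell instead of sixteen 32-bit floats is what makes memory reductions on the order of 100x plausible.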
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Online Hybrid Lightweight Representations Learning: Its Application to Visual Tracking [42.49852446519412]
This paper presents a novel hybrid representation learning framework for streaming data.
An image frame in a video is modeled by an ensemble of two distinct deep neural networks.
We incorporate the hybrid representation technique into an online visual tracking task.
arXiv Detail & Related papers (2022-05-23T10:31:14Z)
- Stochastic resonance neurons in artificial neural networks [0.0]
We propose a new type of neural networks using resonances as an inherent part of the architecture.
We show that such a neural network is more robust against the impact of noise.
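Stochastic resonance itself is a classical effect: noise added to a nonlinear threshold system can make sub-threshold signals detectable. A toy illustration of the effect, not the paper's neuron model:

```python
import numpy as np

def sr_neuron(x, threshold=1.0, noise_std=0.4, trials=64, rng=None):
    """Toy stochastic-resonance neuron (illustrative only): a hard
    threshold fed noisy copies of the input, averaged over trials.
    Sub-threshold signals that a noiseless threshold would miss still
    show up in the mean firing rate."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = x + noise_std * rng.standard_normal((trials,) + np.shape(x))
    return (noisy > threshold).mean(axis=0)
```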
arXiv Detail & Related papers (2022-05-06T18:42:36Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that a likelihood-ratio loss with interarrival-time probability assumptions can greatly improve model performance.
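For reference, a generic graph convolutional layer of the kind the summary names as GCHP's only building block; the paper's exact variant and its point-process likelihood head are not shown:

```python
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    """Generic graph convolution (illustrative, not GCHP's exact layer):
    symmetrically normalized adjacency times features times weights."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # D^{-1/2} (A + I) D^{-1/2}: add self-loops, normalize by degree.
        a_hat = adj + torch.eye(adj.shape[0], device=adj.device)
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(norm @ self.proj(x))
```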
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations while the low-frequency part is assigned cheap operations, relieving the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
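A sketch of the routing idea with a hand-rolled DCT basis; the patch size, the 2x2 "low-frequency corner" heuristic, and the stand-in branches are our assumptions, not the paper's design:

```python
import math
import torch
import torch.nn as nn

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = torch.arange(n, dtype=torch.float32)
    basis = torch.cos(math.pi * (k[None, :] + 0.5) * k[:, None] / n)
    basis[0] /= math.sqrt(2)
    return basis * math.sqrt(2.0 / n)

class FrequencyRouter(nn.Module):
    """Illustrative router: patches with significant high-frequency DCT
    energy go through an expensive branch; smooth patches pass through
    nearly for free."""

    def __init__(self, patch=8, threshold=0.1):
        super().__init__()
        self.register_buffer("D", dct_matrix(patch))
        self.threshold = threshold
        self.expensive = nn.Conv2d(1, 1, 3, padding=1)  # stand-in branch

    def forward(self, patches):  # (N, 1, p, p) grayscale patches
        coeffs = self.D @ patches.squeeze(1) @ self.D.t()
        total = coeffs.pow(2).sum(dim=(1, 2)) + 1e-8
        low = coeffs[:, :2, :2].pow(2).sum(dim=(1, 2))
        hi = (1.0 - low / total) > self.threshold
        out = patches.clone()  # cheap branch: identity pass-through
        if hi.any():
            out[hi] = self.expensive(patches[hi])
        return out
```

Only patches whose energy sits outside the low-frequency corner pay for the expensive branch.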
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
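One widely used binarization strategy, shown for a single layer as it might appear inside a GNN's message-passing step; whether it matches the strategies the paper evaluates is our assumption:

```python
import torch
import torch.nn as nn

class BinaryLinear(nn.Module):
    """Illustrative 1-bit linear layer (one common strategy, not
    necessarily the paper's): real-valued weights are kept for the
    optimizer, the forward pass sees only their signs, and gradients
    pass straight through the sign function."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)

    def forward(self, x):
        w = self.weight
        # XNOR-Net-style scaling: sign(w) times the mean weight magnitude.
        w_bin = torch.sign(w) * w.abs().mean()
        # Straight-through estimator: forward uses w_bin, backward uses w.
        w_ste = w_bin.detach() + w - w.detach()
        return x @ w_ste.t()
```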
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Binary Neural Networks for Memory-Efficient and Effective Visual Place Recognition in Changing Environments [24.674034243725455]
Visual place recognition (VPR) is a robot's ability to determine whether a place was visited before using visual data.
CNN-based approaches are unsuitable for resource-constrained platforms, such as small robots and drones.
We propose a new class of highly compact models that drastically reduces the memory requirements and computational effort.
arXiv Detail & Related papers (2020-10-01T22:59:34Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
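For context, the reservoir idea such models build on keeps the recurrent weights fixed and random and trains only a linear readout, which is what makes training cheap. One state update, sketched; the paper's external memory mechanism is not shown:

```python
import numpy as np

def reservoir_step(state, x, W_res, W_in, leak=0.3):
    """One leaky echo-state update (illustrative background, not the
    paper's model): W_res and W_in stay fixed and random; only a linear
    readout on the states is ever trained."""
    pre = W_res @ state + W_in @ x
    return (1 - leak) * state + leak * np.tanh(pre)
```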
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- A Light-Weighted Convolutional Neural Network for Bitemporal SAR Image Change Detection [40.58864817923371]
We propose a lightweight neural network to reduce the computational and spatial complexity.
In the proposed network, we replace normal convolutional layers with bottleneck layers that keep the same number of channels between input and output.
We verify our light-weighted neural network on four sets of bitemporal SAR images.
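A standard bottleneck block matching that description; the kernel sizes and reduction factor below are our guesses at the details:

```python
import torch.nn as nn

def bottleneck(channels, reduction=4):
    """Drop-in replacement for a normal 3x3 convolution (illustrative;
    the paper's exact block may differ): squeeze channels with a 1x1
    conv, run the 3x3 spatial conv in the narrow space, expand back.
    Input and output channel counts stay equal."""
    mid = channels // reduction
    return nn.Sequential(
        nn.Conv2d(channels, mid, kernel_size=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, mid, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid, channels, kernel_size=1),
    )
```

With a reduction factor of 4, the per-pixel multiply count falls from about 9C^2 for a plain 3x3 convolution to about 1.1C^2, roughly an 8x saving.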
arXiv Detail & Related papers (2020-05-29T04:01:32Z)
- Mixed-Precision Quantized Neural Network with Progressively Decreasing Bitwidth For Image Classification and Object Detection [21.48875255723581]
A mixed-precision quantized neural network with progressively decreasing bitwidth is proposed to improve the trade-off between accuracy and compression.
Experiments on typical network architectures and benchmark datasets demonstrate that the proposed method could achieve better or comparable results.
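A minimal sketch of the schedule as we read the title: bitwidths decrease with layer depth, and each layer's weights are quantized at the assigned width. The linear schedule and the symmetric uniform quantizer are our assumptions:

```python
import torch

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax + 1e-12
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def progressively_quantize(layers, high_bits=8, low_bits=2):
    """Illustrative post-training pass (not the paper's procedure):
    assign bitwidths that fall linearly with depth, so early layers
    keep more precision, then quantize each layer's weights in place."""
    span = max(len(layers) - 1, 1)
    for i, layer in enumerate(layers):
        bits = round(high_bits - (high_bits - low_bits) * i / span)
        layer.weight.data.copy_(quantize(layer.weight.data, bits))
```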
arXiv Detail & Related papers (2019-12-29T14:11:33Z)