How Do Neural Networks Estimate Optical Flow? A Neuropsychology-Inspired
Study
- URL: http://arxiv.org/abs/2004.09317v2
- Date: Wed, 2 Jun 2021 08:16:45 GMT
- Title: How Do Neural Networks Estimate Optical Flow? A Neuropsychology-Inspired
Study
- Authors: D. B. de Jong, F. Paredes-Vallés, G. C. H. E. de Croon
- Abstract summary: In this article, we investigate how deep neural networks estimate optical flow.
For our investigation, we focus on FlowNetS, as it is the prototype of an encoder-decoder neural network for optical flow estimation.
We use a filter identification method that has played a major role in uncovering the motion filters present in animal brains in neuropsychological research.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: End-to-end trained convolutional neural networks have led to a breakthrough
in optical flow estimation. The most recent advances focus on improving the
optical flow estimation by improving the architecture and setting a new
benchmark on the publicly available MPI-Sintel dataset. Instead, in this
article, we investigate how deep neural networks estimate optical flow. A
better understanding of how these networks function is important for (i)
assessing their generalization capabilities to unseen inputs, and (ii)
suggesting changes to improve their performance. For our investigation, we
focus on FlowNetS, as it is the prototype of an encoder-decoder neural network
for optical flow estimation. Furthermore, we use a filter identification method
that has played a major role in uncovering the motion filters present in animal
brains in neuropsychological research. The method shows that the filters in the
deepest layer of FlowNetS are sensitive to a variety of motion patterns. Not
only do we find translation filters, as demonstrated in animal brains, but
thanks to the easier measurements in artificial neural networks, we even unveil
dilation, rotation, and occlusion filters. Furthermore, we find similarities
between the refinement part of the network and the perceptual filling-in
process that occurs in the mammalian primary visual cortex.
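The probing procedure lends itself to a compact illustration. The sketch below is a toy stand-in, not the authors' protocol: the encoder is an untrained placeholder for FlowNetS, and the stimulus parameters are assumptions. It drives the deepest encoder layer with translating random-dot image pairs and reports the motion direction that most strongly activates each filter.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy stand-in for the FlowNetS encoder (6 input channels: two stacked RGB frames)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # "deepest" layer probed below
        )

    def forward(self, x):
        return self.net(x)

def translating_pair(direction_deg, speed_px=4, size=64, density=0.1, seed=0):
    """Two frames of a random-dot pattern, the second shifted by speed_px pixels."""
    g = torch.Generator().manual_seed(seed)
    frame1 = (torch.rand(size, size, generator=g) < density).float()
    theta = torch.deg2rad(torch.tensor(float(direction_deg)))
    dx = int(round(speed_px * torch.cos(theta).item()))
    dy = int(round(speed_px * torch.sin(theta).item()))
    frame2 = torch.roll(frame1, shifts=(dy, dx), dims=(0, 1))
    # Replicate each frame to 3 channels and stack along channels: (1, 6, H, W).
    return torch.cat([frame1.expand(3, -1, -1), frame2.expand(3, -1, -1)]).unsqueeze(0)

model = Encoder().eval()  # a real study would load trained FlowNetS weights here
directions = list(range(0, 360, 45))
with torch.no_grad():
    # responses[d][k] = mean activation of filter k under motion direction d
    responses = {d: model(translating_pair(d)).mean(dim=(0, 2, 3)) for d in directions}

for k in range(5):  # preferred translation direction of the first few filters
    best = max(directions, key=lambda d: responses[d][k].item())
    print(f"filter {k}: strongest response to ~{best} deg translation")
```

The same probing loop extends naturally to dilating or rotating stimuli, which is how filters beyond pure translation could be revealed.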
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
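The parameter-graph idea in the entry above can be made concrete with a toy construction (illustrative only; the paper's encoding and equivariance machinery are richer): neurons become nodes carrying biases, and weights become directed edges.

```python
import networkx as nx
import numpy as np

def mlp_to_graph(weights, biases):
    """weights[l]: (n_out, n_in); biases[l]: (n_out,). Neurons -> nodes, weights -> edges."""
    G = nx.DiGraph()
    n_in = weights[0].shape[1]
    prev = list(range(n_in))
    for i in prev:
        G.add_node(i, bias=0.0, layer=0)           # input nodes carry no bias
    offset = n_in
    for l, (W, b) in enumerate(zip(weights, biases), start=1):
        cur = list(range(offset, offset + W.shape[0]))
        for j, node in enumerate(cur):
            G.add_node(node, bias=float(b[j]), layer=l)
            for i, src in enumerate(prev):
                G.add_edge(src, node, weight=float(W[j, i]))
        prev, offset = cur, offset + W.shape[0]
    return G

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
G = mlp_to_graph(Ws, bs)
print(G.number_of_nodes(), "neuron nodes,", G.number_of_edges(), "weight edges")  # 9 nodes, 20 edges
```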
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- A Faster Approach to Spiking Deep Convolutional Neural Networks [0.0]
Spiking neural networks (SNNs) have closer dynamics to the brain than current deep neural networks.
We propose a network structure based on previous work to improve network runtime and accuracy.
arXiv Detail & Related papers (2022-10-31T16:13:15Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
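As a minimal sketch of the core building block such a regression framework needs (the constants, rate decoding, and lack of a surrogate gradient here are simplifications, not the paper's design), a leaky integrate-and-fire layer can be unrolled over time and its firing rates decoded into a continuous output:

```python
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Leaky integrate-and-fire layer unrolled over time (no surrogate gradient,
    so this sketch is forward-only; training through spikes needs extra machinery)."""
    def __init__(self, n_in, n_out, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):                      # x_seq: (T, batch, n_in)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x in x_seq:
            v = self.decay * v + self.fc(x)        # leaky membrane integration
            s = (v >= self.threshold).float()      # emit spikes
            v = v * (1.0 - s)                      # reset neurons that fired
            spikes.append(s)
        return torch.stack(spikes)                 # (T, batch, n_out)

torch.manual_seed(0)
lif = LIF(4, 16)
readout = nn.Linear(16, 1)                         # decode firing rates to a scalar
x_seq = torch.randn(20, 8, 4)                      # 20 timesteps, batch of 8
rates = lif(x_seq).mean(dim=0)                     # per-neuron firing rate
print("regression output shape:", readout(rates).shape)  # torch.Size([8, 1])
```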
- Gradient Mask: Lateral Inhibition Mechanism Improves Performance in Artificial Neural Networks [5.591477512580285]
We propose Gradient Mask, which effectively filters out noise gradients in the process of backpropagation.
This allows the learned feature information to be more intensively stored in the network.
We show analytically how lateral inhibition in artificial neural networks improves the quality of propagated gradients.
arXiv Detail & Related papers (2022-08-14T20:55:50Z)
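A hedged sketch of the gradient-masking idea from the entry above (the paper's exact masking rule may differ; here gradients below a magnitude threshold are simply zeroed via a tensor hook during backpropagation):

```python
import torch
import torch.nn as nn

def mask_small_gradients(threshold=1e-2, stats=None):
    """Return a backward hook that zeroes gradient entries below `threshold`."""
    def hook(grad):
        masked = grad * (grad.abs() > threshold)
        if stats is not None:
            stats["kept_fraction"] = (masked != 0).float().mean().item()
        return masked
    return hook

torch.manual_seed(0)
x = torch.randn(8, 16, requires_grad=True)
layer = nn.Linear(16, 16)
h = layer(x)
stats = {}
h.register_hook(mask_small_gradients(1e-2, stats))  # applied to dL/dh during backward
loss = h.pow(2).mean()
loss.backward()
print("fraction of h-gradients kept:", stats["kept_fraction"])
```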
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- On the role of feedback in visual processing: a predictive coding perspective [0.6193838300896449]
We consider deep convolutional networks (CNNs) as models of feed-forward visual processing and implement Predictive Coding (PC) dynamics.
We find that the network increasingly relies on top-down predictions as the noise level increases.
In addition, the accuracy of the network implementing PC dynamics significantly increases over time-steps, compared to its equivalent forward network.
arXiv Detail & Related papers (2021-06-08T10:07:23Z)
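The PC dynamics described above can be caricatured in a few lines (a simplified update blending feed-forward drive with top-down reconstruction; the paper's actual model and hyperparameters differ):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
ff1, ff2 = nn.Linear(32, 16), nn.Linear(16, 8)   # feed-forward encoders
fb2 = nn.Linear(8, 16)                           # feedback decoder: layer 2 -> layer 1

x = torch.randn(4, 32)
h1 = torch.relu(ff1(x))                          # initial feed-forward sweep
h2 = torch.relu(ff2(h1))

beta, lam = 0.6, 0.3                             # feed-forward vs. top-down weighting
for t in range(10):                              # PC timesteps
    pred_h1 = fb2(h2)                            # top-down prediction of h1
    h1 = beta * torch.relu(ff1(x)) + lam * pred_h1 + (1 - beta - lam) * h1
    h2 = torch.relu(ff2(h1))                     # re-encode the updated representation

print("top-down prediction error:", (h1 - fb2(h2)).pow(2).mean().item())
```

Under noise, increasing the weight on the top-down term is the mechanism by which such a network "relies more on predictions."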
- Self-Supervised Learning of Event-Based Optical Flow with Spiking Neural Networks [3.7384509727711923]
A major challenge for neuromorphic computing is that learning algorithms for traditional artificial neural networks (ANNs) do not transfer directly to spiking neural networks (SNNs).
In this article, we focus on the self-supervised learning problem of optical flow estimation from event-based camera inputs.
We show that the performance of the proposed ANNs and SNNs is on par with that of the current state-of-the-art ANNs trained in a self-supervised manner.
arXiv Detail & Related papers (2021-06-03T14:03:41Z)
- Generalized Approach to Matched Filtering using Neural Networks [4.535489275919893]
We make a key observation on the relationship between emerging deep learning methods and traditional techniques: matched filtering is formally equivalent to a particular neural network.
We show that the proposed neural network architecture can outperform matched filtering.
arXiv Detail & Related papers (2021-04-08T17:59:07Z)
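The stated equivalence is easy to demonstrate: matched filtering, i.e. cross-correlating a noisy signal with a known template, is a single convolutional layer whose kernel is the template. The setup below is illustrative; note that PyTorch's Conv1d computes cross-correlation, so the template is used without time reversal.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
template = torch.sin(torch.linspace(0, 2 * torch.pi, 32))  # known waveform
signal = torch.zeros(1, 1, 256)
signal[0, 0, 100:132] = template                           # embed the template at t=100
signal = signal + 0.3 * torch.randn_like(signal)           # additive noise

mf = nn.Conv1d(1, 1, kernel_size=32, bias=False)
with torch.no_grad():
    mf.weight[0, 0] = template                             # kernel := template
    out = mf(signal)                                       # cross-correlation with the template

print("detected onset:", out.argmax().item())              # peaks near 100
```

Making the kernel a trainable parameter is what lets such a network generalize beyond the fixed-template filter.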
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.