A Differential Model of the Complex Cell
- URL: http://arxiv.org/abs/2012.09027v1
- Date: Wed, 9 Dec 2020 10:23:23 GMT
- Title: A Differential Model of the Complex Cell
- Authors: Miles Hansard and Radu Horaud
- Abstract summary: This paper proposes an alternative model of the complex cell, based on Gaussian derivatives.
It is most important to account for the insensitivity of the complex response to small shifts of the image.
The relevance of the new model to the cortical image representation is discussed.
- Score: 24.756003635916613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The receptive fields of simple cells in the visual cortex can be understood
as linear filters. These filters can be modelled by Gabor functions, or by
Gaussian derivatives. Gabor functions can also be combined in an 'energy model'
of the complex cell response. This paper proposes an alternative model of the
complex cell, based on Gaussian derivatives. It is most important to account
for the insensitivity of the complex response to small shifts of the image. The
new model uses a linear combination of the first few derivative filters, at a
single position, to approximate the first derivative filter, at a series of
adjacent positions. The maximum response, over all positions, gives a signal
that is insensitive to small shifts of the image. This model, unlike previous
approaches, is based on the scale space theory of visual processing. In
particular, the complex cell is built from filters that respond to the 2D
differential structure of the image. The computational aspects of the new model
are studied in one and two dimensions, using the steerability of the Gaussian
derivatives. The response of the model to basic images, such as edges and
gratings, is derived formally. The response to natural images is also
evaluated, using statistical measures of shift insensitivity. The relevance of
the new model to the cortical image representation is discussed.
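The following is a minimal 1D sketch of the mechanism described in the abstract, not the authors' implementation: it approximates a shifted first-derivative Gaussian filter by a Taylor combination of higher derivatives taken at a single position, then pools the maximum response over a set of candidate shifts. The filter scale, the derivative order, and the shift range are illustrative assumptions.

```python
import numpy as np
from math import factorial

def gaussian_derivatives(x, sigma, order):
    """Gaussian derivative filters G^(k)(x; sigma) for k = 0..order,
    obtained by repeated numerical differentiation of the Gaussian."""
    g = np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    filters = [g]
    for _ in range(order):
        filters.append(np.gradient(filters[-1], x))
    return filters

def shift_insensitive_response(signal, x, sigma=2.0, order=4,
                               shifts=np.linspace(-2.0, 2.0, 9)):
    """Approximate the first-derivative filter at nearby positions by a Taylor
    combination of derivatives 1..order at a single position, and return the
    largest-magnitude response over those positions (the pooled output)."""
    filters = gaussian_derivatives(x, sigma, order)
    dx = x[1] - x[0]
    responses = []
    for s in shifts:
        # Taylor series: G'(x - s) ~= sum_k (-s)^k / k! * G^(k+1)(x)
        approx = sum(((-s) ** k / factorial(k)) * filters[k + 1]
                     for k in range(order))
        responses.append(dx * np.dot(signal, approx))
    return max(responses, key=abs)

# A step edge and a slightly shifted copy give nearly the same pooled response,
# illustrating the local shift insensitivity that the abstract describes.
x = np.linspace(-10.0, 10.0, 401)
edge = (x > 0.0).astype(float)
edge_shifted = (x > 0.5).astype(float)
print(shift_insensitive_response(edge, x),
      shift_insensitive_response(edge_shifted, x))
```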
Related papers
- A nonlinear elasticity model in computer vision [0.0]
The purpose of this paper is to analyze a nonlinear elasticity model previously introduced by the authors for comparing two images.
The existence of such transformations is proved for pairs of gradient vector-valued intensity maps.
A question is whether, for images related by a linear mapping, the unique minimizer is given by that mapping.
arXiv Detail & Related papers (2024-08-30T12:27:22Z) - Direct Motif Extraction from High Resolution Crystalline STEM Images [2.2660999029854536]
An automatic, unsupervised motif-extraction method is not yet widely available.
A novel multi-stage projection algorithm is used to determine the primitive cell.
The method was tested on various synthetic and experimental HAADF STEM images.
arXiv Detail & Related papers (2023-03-13T19:35:54Z) - Git Re-Basin: Merging Models modulo Permutation Symmetries [3.5450828190071655]
We show how simple algorithms can be used to fit large networks in practice.
We present the first (to our knowledge) demonstration of zero-barrier mode connectivity between independently trained models.
We also discuss shortcomings in the linear mode connectivity hypothesis.
arXiv Detail & Related papers (2022-09-11T10:44:27Z) - Decoupling multivariate functions using a nonparametric filtered tensor
decomposition [0.29360071145551075]
Decoupling techniques aim at providing an alternative representation of the nonlinearity.
The so-called decoupled form is often a more efficient parameterisation of the relationship while being highly structured, favouring interpretability.
In this work, two new algorithms, based on filtered tensor decompositions of first-order derivative information, are introduced.
arXiv Detail & Related papers (2022-05-23T09:34:17Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Emergence of Lie symmetries in functional architectures learned by CNNs [63.69764116066748]
We study the spontaneous development of symmetries in the early layers of a Convolutional Neural Network (CNN) during learning on natural images.
Our architecture is built in such a way to mimic the early stages of biological visual systems.
arXiv Detail & Related papers (2021-04-17T13:23:26Z) - Joint Estimation of Image Representations and their Lie Invariants [57.3768308075675]
Images encode both the state of the world and its content.
The automatic extraction of this information is challenging because of the high-dimensionality and entangled encoding inherent to the image representation.
This article introduces two theoretical approaches aimed at the resolution of these challenges.
arXiv Detail & Related papers (2020-12-05T00:07:41Z) - Generalizing Convolutional Neural Networks for Equivariance to Lie
Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z) - Learning Bijective Feature Maps for Linear ICA [73.85904548374575]
We show that existing probabilistic deep generative models (DGMs) that are tailor-made for image data underperform on non-linear ICA tasks.
To address this, we propose a DGM which combines bijective feature maps with a linear ICA model to learn interpretable latent structures for high-dimensional data.
We create models that converge quickly, are easy to train, and achieve better unsupervised latent factor discovery than flow-based models, linear ICA, and Variational Autoencoders on images.
arXiv Detail & Related papers (2020-02-18T17:58:07Z)