RetinotopicNet: An Iterative Attention Mechanism Using Local Descriptors
with Global Context
- URL: http://arxiv.org/abs/2005.05701v1
- Date: Tue, 12 May 2020 11:54:56 GMT
- Title: RetinotopicNet: An Iterative Attention Mechanism Using Local Descriptors
with Global Context
- Authors: Thomas Kurbiel and Shahrzad Khaleghian
- Abstract summary: Convolutional Neural Networks (CNNs) were the driving force behind many advancements in Computer Vision research in recent years.
CNNs lack invariance to scale and rotation, two of the most frequently encountered transformations in natural images.
We develop an efficient solution by reproducing how nature has solved the problem in the human brain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Convolutional Neural Networks (CNNs) were the driving force behind many
advancements in Computer Vision research in recent years. This progress has
spawned many practical applications, and today we see an increased need to
move CNNs efficiently to embedded systems. However, traditional CNNs lack
invariance to scale and rotation: two of the most frequently encountered
transformations in natural images. As a consequence, CNNs have to learn
different features for the same objects at different scales. This redundancy is the
main reason why CNNs need to be very deep in order to achieve the desired
accuracy. In this paper we develop an efficient solution by reproducing how
nature has solved the problem in the human brain. To this end we let our CNN
operate on small patches extracted using the log-polar transform, which is
known to be scale- and rotation-equivariant. Patches extracted in this way have
the nice property of magnifying the central field and compressing the
periphery. Hence we obtain local descriptors with global context information.
However, processing a single patch is usually not sufficient to achieve
high accuracy in, e.g., classification tasks. We therefore successively jump to
several different locations, called saccades, thus building an understanding of
the whole image. Since log-polar patches contain global context information, we
can efficiently calculate subsequent saccades using only the small patches.
Saccades efficiently compensate for the lack of translation equivariance of the
log-polar transform.
Related papers
- DAS: A Deformable Attention to Capture Salient Information in CNNs [2.321323878201932]
Self-attention can improve a model's access to global information but increases computational overhead.
We present a fast and simple fully convolutional method called DAS that helps focus attention on relevant information.
arXiv Detail & Related papers (2023-11-20T18:49:58Z)
- RIC-CNN: Rotation-Invariant Coordinate Convolutional Neural Network [56.42518353373004]
We propose a new convolutional operation, called Rotation-Invariant Coordinate Convolution (RIC-C)
By replacing all standard convolutional layers in a CNN with the corresponding RIC-C, a RIC-CNN can be derived.
RIC-CNN achieves state-of-the-art classification accuracy on the rotated test dataset of MNIST.
arXiv Detail & Related papers (2022-11-21T19:27:02Z)
- TransGeo: Transformer Is All You Need for Cross-view Image Geo-localization [81.70547404891099]
CNN-based methods for cross-view image geo-localization fail to model global correlation.
We propose a pure transformer-based approach (TransGeo) to address these limitations.
TransGeo achieves state-of-the-art results on both urban and rural datasets.
arXiv Detail & Related papers (2022-03-31T21:19:41Z)
- CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional neural network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- What Does CNN Shift Invariance Look Like? A Visualization Study [87.79405274610681]
Feature extraction with convolutional neural networks (CNNs) is a popular method to represent images for machine learning tasks.
We focus on measuring and visualizing the shift invariance of extracted features from popular off-the-shelf CNN models.
We conclude that features extracted from popular networks are not globally invariant, and that biases and artifacts exist within this variance.
arXiv Detail & Related papers (2020-11-09T01:16:30Z)
- Learning Translation Invariance in CNNs [1.52292571922932]
We show how, even though CNNs are not 'architecturally invariant' to translation, they can indeed 'learn' to be invariant to translation.
We investigated how this pretraining affected the internal network representations.
These experiments show how pretraining a network on an environment with the right 'latent' characteristics can result in the network learning deep perceptual rules.
arXiv Detail & Related papers (2020-11-06T09:39:27Z)
- Localized convolutional neural networks for geospatial wind forecasting [0.0]
Convolutional Neural Networks (CNNs) have appealing properties for many kinds of spatial data.
In this work, we propose localized convolutional neural networks that enable CNNs to learn local features in addition to the global ones.
They can be added to any convolutional layers, easily end-to-end trained, introduce minimal additional complexity, and let CNNs retain most of their benefits to the extent that they are needed.
arXiv Detail & Related papers (2020-05-12T17:14:49Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.