Predictive coding feedback results in perceived illusory contours in a
recurrent neural network
- URL: http://arxiv.org/abs/2102.01955v1
- Date: Wed, 3 Feb 2021 09:07:09 GMT
- Title: Predictive coding feedback results in perceived illusory contours in a
recurrent neural network
- Authors: Zhaoyang Pang, Callum Biggs O'May, Bhavin Choksi, Rufin VanRullen
- Abstract summary: We equip a deep feedforward convolutional network with brain-inspired recurrent dynamics.
We show that adding predictive coding feedback leads the network to perceive illusory contours.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern feedforward convolutional neural networks (CNNs) can now solve some
computer vision tasks at super-human levels. However, these networks only
roughly mimic human visual perception. One notable difference is that
they do not appear to perceive illusory contours (e.g. Kanizsa squares) in the
same way humans do. Physiological evidence from visual cortex suggests that the
perception of illusory contours could involve feedback connections. Would
recurrent feedback neural networks perceive illusory contours like humans? In
this work we equip a deep feedforward convolutional network with brain-inspired
recurrent dynamics. The network was first pretrained with an unsupervised
reconstruction objective on a natural image dataset, to expose it to natural
object contour statistics. Then, a classification decision layer was added and
the model was finetuned on a form discrimination task: squares vs. randomly
oriented inducer shapes (no illusory contour). Finally, the model was tested
with the unfamiliar "illusory contour" configuration: inducer shapes oriented
to form an illusory square. Compared with feedforward baselines, the iterative
"predictive coding" feedback resulted in more illusory contours being
classified as physical squares. The perception of the illusory contour was
measurable in the luminance profile of the image reconstructions produced by
the model, demonstrating that the model really "sees" the illusion. Ablation
studies revealed that natural image pretraining and feedback error correction
are both critical to the perception of the illusion. Finally, we validated our
conclusions in a deeper network (VGG): adding the same predictive coding
feedback dynamics again led to the perception of illusory contours.
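The core mechanism described in the abstract is an iterative feedback loop: each layer tries to reconstruct the layer below through feedback connections, and the representations are nudged to reduce the resulting prediction errors. The PyTorch sketch below illustrates one plausible form of such "predictive coding" dynamics; the layer sizes, the hyperparameters beta/lam/alpha, and the details of the update rule are illustrative assumptions, not the paper's exact architecture or values.

```python
# Minimal sketch of predictive-coding feedback dynamics (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PredictiveCodingNet(nn.Module):
    """Two-stage conv encoder with feedback decoders and iterative updates."""

    def __init__(self, beta=0.4, lam=0.2, alpha=0.01, steps=10):
        super().__init__()
        self.enc1 = nn.Conv2d(1, 16, 3, padding=1)   # feedforward: image -> e1
        self.enc2 = nn.Conv2d(16, 32, 3, padding=1)  # feedforward: e1 -> e2
        self.dec1 = nn.Conv2d(16, 1, 3, padding=1)   # feedback: e1 -> predicted image
        self.dec2 = nn.Conv2d(32, 16, 3, padding=1)  # feedback: e2 -> predicted e1
        self.beta, self.lam, self.alpha, self.steps = beta, lam, alpha, steps

    def forward(self, x):
        # Initial feedforward sweep; ff1 is also the constant feedforward drive for e1.
        ff1 = torch.relu(self.enc1(x))
        e1, e2 = ff1, torch.relu(self.enc2(ff1))
        for _ in range(self.steps):
            pred_x = self.dec1(e1)    # feedback prediction of the input
            pred_e1 = self.dec2(e2)   # feedback prediction of the layer below
            # Prediction (reconstruction) errors at each level.
            err0 = F.mse_loss(pred_x, x)
            err1 = F.mse_loss(pred_e1, e1.detach())
            # Error correction: move each representation down the error gradient.
            g1, = torch.autograd.grad(err0, e1, retain_graph=True)
            g2, = torch.autograd.grad(err1, e2, retain_graph=True)
            ff2 = torch.relu(self.enc2(e1))  # feedforward drive from the current e1
            # Mix feedforward drive, feedback prediction, memory, and error correction.
            e1 = (self.beta * ff1 + self.lam * pred_e1
                  + (1 - self.beta - self.lam) * e1 - self.alpha * g1)
            # Top layer receives no feedback from above in this two-stage sketch.
            e2 = (self.beta * ff2 + (1 - self.beta) * e2 - self.alpha * g2)
        return self.dec1(e1)  # reconstruction of the input after the feedback iterations


# Usage sketch: run a small batch of grayscale images through the recurrent dynamics.
model = PredictiveCodingNet()
images = torch.rand(4, 1, 64, 64)
recon = model(images)
print(recon.shape)  # torch.Size([4, 1, 64, 64])
```

In this kind of scheme, the same reconstruction decoders used for the unsupervised pretraining objective double as the feedback pathway at inference time, which is why the pretraining on natural images matters for what the feedback "fills in".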
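For context on the stimuli and the readout mentioned in the abstract, the NumPy sketch below draws a Kanizsa-style test image (four inducers whose missing wedges align to form an illusory square) and extracts a luminance profile along one image row; in the paper, this kind of profile is taken from the model's reconstructions. Image size, inducer radius, and pixel values here are arbitrary choices for illustration, not the paper's exact stimuli.

```python
# Illustrative Kanizsa-square stimulus and luminance-profile readout.
import numpy as np

def kanizsa_square(size=128, half=32, radius=18, bg=0.5, ink=0.0):
    """Four 'pac-man' inducers whose missing quadrants face inward, the
    configuration in which humans perceive an illusory square."""
    img = np.full((size, size), bg, dtype=np.float32)
    yy, xx = np.mgrid[0:size, 0:size]
    c = size // 2
    for sy in (-1, 1):              # vertical corner sign
        for sx in (-1, 1):          # horizontal corner sign
            cy, cx = c + sy * half, c + sx * half
            disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            # The quadrant pointing toward the square centre stays background,
            # which creates the inducer's "mouth".
            mouth = (np.sign(yy - cy) == -sy) & (np.sign(xx - cx) == -sx)
            img[disc & ~mouth] = ink
    return img


def luminance_profile(image, row):
    """Luminance along one image row, e.g. a row crossing an illusory edge.
    Applied to a model's reconstruction, a deviation from the background level
    along this row indicates a 'perceived' contour."""
    return image[row, :]


stim = kanizsa_square()
edge_row = 128 // 2 - 32            # row passing through the upper illusory edge
profile = luminance_profile(stim, edge_row)
print(stim.shape, profile.shape)
```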
Related papers
- Investigating the Gestalt Principle of Closure in Deep Convolutional Neural Networks [4.406699323036466]
This study investigates the principle of closure in convolutional neural networks.
We conduct experiments using simple visual stimuli with progressively removed edge sections.
We evaluate well-known networks on their ability to classify incomplete polygons.
arXiv Detail & Related papers (2024-11-01T14:36:21Z)
- Emergence of Shape Bias in Convolutional Neural Networks through Activation Sparsity [8.54598311798543]
Current deep-learning models for object recognition are heavily biased toward texture.
In contrast, human visual systems are known to be biased toward shape and structure.
We show that sparse coding, a ubiquitous principle in the brain, can in itself introduce shape bias into the network.
arXiv Detail & Related papers (2023-10-29T04:07:52Z)
- Degraded Polygons Raise Fundamental Questions of Neural Network Perception [5.423100066629618]
We revisit the task of recognizing images under degradation, first introduced over 30 years ago in the Recognition-by-Components theory of human vision.
We implement the Automated Shape Recoverability Test for rapidly generating large-scale datasets of perimeter-degraded regular polygons.
We find that neural networks' behavior on this simple task conflicts with human behavior.
arXiv Detail & Related papers (2023-06-08T06:02:39Z)
- Don't trust your eyes: on the (un)reliability of feature visualizations [25.018840023636546]
We show how to trick feature visualizations into showing arbitrary patterns that are completely disconnected from normal network behavior on natural input.
We then provide evidence for a similar phenomenon occurring in standard, unmanipulated networks.
This can be used as a sanity check for feature visualizations.
arXiv Detail & Related papers (2023-06-07T18:31:39Z)
- An Extended Study of Human-like Behavior under Adversarial Training [11.72025865314187]
We show that adversarial training increases the shift toward shape bias in neural networks.
We also provide a possible explanation for this phenomenon from a frequency perspective.
arXiv Detail & Related papers (2023-03-22T15:47:16Z)
- Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream in visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Neural Re-Rendering of Humans from a Single Image [80.53438609047896]
We propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint.
Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image.
arXiv Detail & Related papers (2021-01-11T18:53:47Z)
- Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured, from a near minimal number of random measurements.
arXiv Detail & Related papers (2020-05-07T15:57:25Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.