Reconstruction-guided attention improves the robustness and shape
processing of neural networks
- URL: http://arxiv.org/abs/2209.13620v1
- Date: Tue, 27 Sep 2022 18:32:22 GMT
- Title: Reconstruction-guided attention improves the robustness and shape
processing of neural networks
- Authors: Seoyoung Ahn, Hossein Adeli, Gregory J. Zelinsky
- Abstract summary: We build an iterative encoder-decoder network that generates an object reconstruction and uses it as top-down attentional feedback.
Our model shows strong generalization performance against various image perturbations.
Our study shows that modeling reconstruction-based feedback endows AI systems with a powerful attention mechanism.
- Score: 5.156484100374057
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Many visual phenomena suggest that humans use top-down generative or
reconstructive processes to create visual percepts (e.g., imagery, object
completion, pareidolia), but little is known about the role reconstruction
plays in robust object recognition. We built an iterative encoder-decoder
network that generates an object reconstruction and used it as top-down
attentional feedback to route the most relevant spatial and feature information
to feed-forward object recognition processes. We tested this model using the
challenging out-of-distribution digit recognition dataset, MNIST-C, where 15
different types of transformation and corruption are applied to handwritten
digit images. Our model showed strong generalization performance against
various image perturbations, on average outperforming all other models
including feedforward CNNs and adversarially trained networks. Our model is
particularly robust to blur, noise, and occlusion corruptions, where shape
perception plays an important role. Ablation studies further reveal two
complementary roles of spatial and feature-based attention in robust object
recognition, with the former largely consistent with spatial masking benefits
in the attention literature (the reconstruction serves as a mask) and the
latter mainly contributing to the model's inference speed (i.e., number of time
steps to reach a certain confidence threshold) by reducing the space of
possible object hypotheses. We also observed that the model sometimes
hallucinates a non-existing pattern out of noise, leading to highly
interpretable human-like errors. Our study shows that modeling
reconstruction-based feedback endows AI systems with a powerful attention
mechanism, which can help us understand the role of generating perception in
human visual processing.
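As a rough illustration of the mechanism described above, the sketch below shows how a decoder's reconstruction can be fed back as a spatial attention mask on the next feed-forward pass. The module sizes, the single-channel 28x28 input (as in MNIST-C), and the simple multiplicative masking are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of reconstruction-guided spatial attention (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * 7 * 7, feat_dim)
        self.cls = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        z = F.relu(self.fc(h))
        return z, self.cls(z)

class Decoder(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 64 * 7 * 7)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = F.relu(self.fc(z)).view(-1, 64, 7, 7)
        return self.deconv(h)

def iterative_recognition(x, encoder, decoder, steps=3):
    """Run a few feedback steps: the reconstruction acts as a spatial mask on the input."""
    attended = x
    for _ in range(steps):
        z, logits = encoder(attended)
        recon = decoder(z)        # top-down object hypothesis
        attended = x * recon      # reconstruction-guided spatial attention (masking)
    return logits, recon
```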
Related papers
- The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks [3.9848584845601014]
We present a novel intelligent-system architecture called the "Dynamic Net Architecture" (DNA).
DNA relies on recurrence-stabilized networks, which we discuss in application to vision.
arXiv Detail & Related papers (2024-07-08T06:22:10Z) - Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
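As a generic illustration of this kind of robustness evaluation (not the benchmark or code used in the paper), one can compare a classifier's accuracy on clean inputs with its accuracy under perturbed inputs; the tiny perturbation suite below is a hypothetical stand-in for the much richer corruption sets used in practice.

```python
# Illustrative robustness check: accuracy of a classifier under simple input perturbations.
import torch

def accuracy(model, images, labels):
    """Top-1 accuracy of `model` on a batch of images."""
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()

def occlude(images, lo=10, hi=18):
    """Zero out a vertical band of pixel columns (toy occlusion)."""
    out = images.clone()
    out[..., lo:hi] = 0.0
    return out

def evaluate_robustness(model, images, labels):
    perturbations = {
        "clean": lambda x: x,
        "gaussian_noise": lambda x: (x + 0.2 * torch.randn_like(x)).clamp(0.0, 1.0),
        "occlusion": occlude,
    }
    return {name: accuracy(model, fn(images), labels) for name, fn in perturbations.items()}
```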
arXiv Detail & Related papers (2024-06-13T17:59:44Z) - Attention-Guided Masked Autoencoders For Learning Image Representations [16.257915216763692]
Masked autoencoders (MAEs) have established themselves as a powerful method for unsupervised pre-training for computer vision tasks.
We propose to inform the reconstruction process through an attention-guided loss function.
Our evaluations show that our pre-trained models learn better latent representations than the vanilla MAE.
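A minimal sketch of one way an attention map could weight a masked autoencoder's per-patch reconstruction loss; the specific weighting below is an assumption for illustration, not necessarily the loss proposed in the paper.

```python
import torch

def attention_guided_mae_loss(pred, target, mask, attn, eps=1e-8):
    """
    pred, target: (B, N, D) per-patch predictions and ground-truth pixels.
    mask:         (B, N) 1 for masked (reconstructed) patches, 0 for visible ones.
    attn:         (B, N) attention weights (e.g., from a teacher); higher = more salient.
    """
    per_patch = ((pred - target) ** 2).mean(dim=-1)   # (B, N) MSE per patch
    w = attn * mask                                    # weight only the masked patches
    return ((per_patch * w).sum(dim=1) / (w.sum(dim=1) + eps)).mean()
```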
arXiv Detail & Related papers (2024-02-23T08:11:25Z) - Graphical Object-Centric Actor-Critic [55.2480439325792]
We propose a novel object-centric reinforcement learning algorithm combining actor-critic and model-based approaches.
We use a transformer encoder to extract object representations and graph neural networks to approximate the dynamics of an environment.
Our algorithm outperforms the state-of-the-art model-free actor-critic algorithm in a visually complex 3D robotic environment and in a 2D environment with compositional structure.
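The combination of a transformer encoder over object tokens with graph-style message passing for dynamics might look roughly like the sketch below; the layer sizes and the fully connected message passing are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ObjectDynamicsModel(nn.Module):
    """Illustrative: a transformer contextualizes per-object tokens; a simple
    GNN-style message-passing layer predicts each object's next-step state."""
    def __init__(self, obj_dim=64, n_heads=4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=obj_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.message = nn.Sequential(nn.Linear(2 * obj_dim, obj_dim), nn.ReLU())
        self.update = nn.Linear(2 * obj_dim, obj_dim)

    def forward(self, obj_tokens):                 # (B, K, D) object/slot features
        h = self.encoder(obj_tokens)               # contextualized object representations
        B, K, D = h.shape
        # Fully connected graph: aggregate pairwise messages for each receiver object.
        send = h.unsqueeze(2).expand(B, K, K, D)   # sender i
        recv = h.unsqueeze(1).expand(B, K, K, D)   # receiver j
        msgs = self.message(torch.cat([send, recv], dim=-1)).sum(dim=1)  # sum over senders
        return self.update(torch.cat([h, msgs], dim=-1))                 # predicted next state
```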
arXiv Detail & Related papers (2023-10-26T06:05:12Z) - RestoreFormer++: Towards Real-World Blind Face Restoration from
Undegraded Key-Value Pairs [63.991802204929485]
Blind face restoration aims at recovering high-quality face images from those with unknown degradations.
Current algorithms mainly introduce priors to complement high-quality details and achieve impressive progress.
We propose RestoreFormer++, which introduces fully-spatial attention mechanisms to model the contextual information and the interplay with the priors.
We show that RestoreFormer++ outperforms state-of-the-art algorithms on both synthetic and real-world datasets.
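A fully-spatial attention step in which degraded-image features attend to a learned bank of high-quality key-value priors could be sketched as below; the prior bank size, dimensions, and residual fusion are assumptions for illustration, not RestoreFormer++'s actual blocks.

```python
import torch
import torch.nn as nn

class PriorCrossAttention(nn.Module):
    """Illustrative: degraded features query a learned high-quality
    key-value dictionary via multi-head cross-attention."""
    def __init__(self, dim=256, n_heads=8, n_priors=512):
        super().__init__()
        self.priors = nn.Parameter(torch.randn(n_priors, dim))   # learned prior bank
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):                        # (B, N, C) degraded spatial features
        kv = self.priors.unsqueeze(0).expand(feats.size(0), -1, -1)
        fused, _ = self.attn(query=feats, key=kv, value=kv)
        return self.norm(feats + fused)              # residual fusion of prior detail
```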
arXiv Detail & Related papers (2023-08-14T16:04:53Z) - Human Eyes Inspired Recurrent Neural Networks are More Robust Against Adversarial Noises [7.689542442882423]
We designed a dual-stream vision model inspired by the human brain.
This model features retina-like input layers and includes two streams: one determines the next point of focus (the fixation), while the other interprets the visuals surrounding it.
We evaluated this model against various benchmarks in terms of object recognition, gaze behavior and adversarial robustness.
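In spirit, such a two-stream model separates "where to look next" from "what is at the current fixation"; the toy sketch below illustrates that split on 28x28 inputs, with the crop mechanics and layer sizes being purely illustrative assumptions rather than the paper's model.

```python
import torch
import torch.nn as nn

class DualStreamGlimpseModel(nn.Module):
    """Toy two-stream model: a 'where' stream proposes the next fixation,
    a 'what' stream classifies a crop around the current fixation."""
    def __init__(self, num_classes=10, glimpse=14):
        super().__init__()
        self.glimpse = glimpse
        self.what = nn.Sequential(nn.Flatten(), nn.Linear(glimpse * glimpse, 128), nn.ReLU(),
                                  nn.Linear(128, num_classes))
        self.where = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                                   nn.Linear(64, 2), nn.Tanh())   # next fixation in [-1, 1]^2

    def forward(self, x, steps=3):                    # x: (B, 1, 28, 28)
        fix = torch.zeros(x.size(0), 2, device=x.device)   # start near the image center
        for _ in range(steps):
            logits = self.what(self._crop(x, fix))    # interpret the current glimpse
            fix = self.where(x)                       # propose the next point of focus
        return logits

    def _crop(self, x, fix):
        # Map a fixation in [-1, 1]^2 to the top-left corner of a glimpse-sized window.
        B, _, H, W = x.shape
        crops = []
        for b in range(B):
            cy = int((fix[b, 1].item() + 1) / 2 * (H - self.glimpse))
            cx = int((fix[b, 0].item() + 1) / 2 * (W - self.glimpse))
            crops.append(x[b, :, cy:cy + self.glimpse, cx:cx + self.glimpse])
        return torch.stack(crops)
```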
arXiv Detail & Related papers (2022-06-15T03:44:42Z) - Recurrent Attention Models with Object-centric Capsule Representation
for Multi-object Recognition [4.143091738981101]
We show that an object-centric hidden representation in an encoder-decoder model with iterative glimpse attention yields effective integration of attention and recognition.
Our work takes a step toward a general architecture for integrating recurrent object-centric representations into the planning of attentional glimpses.
arXiv Detail & Related papers (2021-10-11T01:41:21Z) - Understanding invariance via feedforward inversion of discriminatively
trained classifiers [30.23199531528357]
Past research has discovered that some extraneous visual detail remains in the output logits.
We develop a feedforward inversion model that produces remarkably high fidelity reconstructions.
Our approach is based on BigGAN, with conditioning on logits instead of one-hot class labels.
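The idea of conditioning a generator on a classifier's logits rather than a one-hot label can be reduced to a toy sketch like the one below; the paper uses a BigGAN backbone, whereas this tiny MLP generator is only an illustrative assumption.

```python
import torch
import torch.nn as nn

class LogitConditionedGenerator(nn.Module):
    """Illustrative inversion model: reconstruct an image from a classifier's
    logits (plus noise), instead of conditioning on a one-hot class label."""
    def __init__(self, num_classes=10, noise_dim=64, img_size=28):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(num_classes + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, img_size * img_size), nn.Sigmoid(),
        )

    def forward(self, logits, z):
        x = self.net(torch.cat([logits, z], dim=-1))
        return x.view(-1, 1, self.img_size, self.img_size)
```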
arXiv Detail & Related papers (2021-03-15T17:56:06Z) - Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data, which is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z) - Progressive Self-Guided Loss for Salient Object Detection [102.35488902433896]
We present a progressive self-guided loss function to facilitate deep learning-based salient object detection in images.
Our framework takes advantage of adaptively aggregated multi-scale features to locate and detect salient objects effectively.
arXiv Detail & Related papers (2021-01-07T07:33:38Z) - Limited-angle tomographic reconstruction of dense layered objects by
dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the conditioning of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
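For orientation, a standard convolutional GRU cell is sketched below; the paper's split-convolutional GRU (SC-GRU) is a modified variant of this kind of building block, and its exact split is not reproduced here.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Standard convolutional GRU cell, shown only as a reference point."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)   # update & reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)        # candidate state
        self.hid_ch = hid_ch

    def forward(self, x, h=None):
        if h is None:
            h = x.new_zeros(x.size(0), self.hid_ch, x.size(2), x.size(3))
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde
```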
arXiv Detail & Related papers (2020-07-21T11:48:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.