Affinity-based Attention in Self-supervised Transformers Predicts
Dynamics of Object Grouping in Humans
- URL: http://arxiv.org/abs/2306.00294v1
- Date: Thu, 1 Jun 2023 02:25:55 GMT
- Title: Affinity-based Attention in Self-supervised Transformers Predicts
Dynamics of Object Grouping in Humans
- Authors: Hossein Adeli, Seoyoung Ahn, Nikolaus Kriegeskorte, Gregory Zelinsky
- Abstract summary: We propose a model of human object-based attention spreading and segmentation.
Our work provides new benchmarks for evaluating models of visual representation learning including Transformers.
- Score: 2.485182034310303
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spreading of attention has been proposed as a mechanism for how humans
group features to segment objects. However, such a mechanism has not yet been
implemented and tested in naturalistic images. Here, we leverage the feature
maps from self-supervised vision Transformers and propose a model of human
object-based attention spreading and segmentation. Attention spreads within an
object through the feature affinity signal between different patches of the
image. We also collected behavioral data on people grouping objects in natural
images by judging whether two dots are on the same object or on two different
objects. We found that our models of affinity spread that were built on feature
maps from the self-supervised Transformers showed significant improvement over
baseline and CNN-based models in predicting human reaction time patterns,
despite not being trained on the task or with any other object labels. Our work
provides new benchmarks for evaluating models of visual representation learning
including Transformers.
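To make the mechanism concrete, the following is a minimal sketch of affinity-based attention spreading over ViT patch features. It is not the authors' implementation: the function name `affinity_spread`, the softmax temperature, the fixed number of spreading steps, and the random placeholder features are all illustrative assumptions; in the paper the features would come from a self-supervised Transformer such as DINO, and how quickly attention from one dot reaches the other is what would be compared against human reaction times.

```python
# Minimal sketch (not the authors' code): attention spreading through
# patch-to-patch feature affinities, starting from a cued "dot" location.
import torch
import torch.nn.functional as F

def affinity_spread(patch_feats: torch.Tensor, seed_idx: int,
                    steps: int = 20, tau: float = 0.1) -> torch.Tensor:
    """Spread attention from a seed patch via feature affinities.

    patch_feats: [N, D] patch embeddings from a self-supervised ViT
                 (N = H_p * W_p patches); feature extraction is assumed done elsewhere.
    seed_idx:    index of the cued patch (where one probe dot lands).
    steps:       number of spreading iterations (a stand-in for elapsed time).
    tau:         temperature controlling how selectively affinity spreads.
    Returns a [steps + 1, N] tensor: the attention map after each step.
    """
    feats = F.normalize(patch_feats, dim=-1)           # unit-norm features
    affinity = feats @ feats.T                         # [N, N] cosine affinities
    transition = F.softmax(affinity / tau, dim=-1)     # row-stochastic spread matrix

    attn = torch.zeros(patch_feats.shape[0])
    attn[seed_idx] = 1.0                               # all attention starts at the cue
    history = [attn.clone()]
    for _ in range(steps):
        attn = transition.T @ attn                     # one spreading step
        attn = attn / attn.sum()                       # keep it a distribution
        history.append(attn.clone())
    return torch.stack(history)

# Hypothetical usage for the two-dot task: judge "same object" by how much
# attention from the first dot has reached the second dot's patch over time.
if __name__ == "__main__":
    N, D = 196, 384                                    # e.g., 14x14 patches, ViT-S width
    patch_feats = torch.randn(N, D)                    # placeholder for real DINO features
    spread = affinity_spread(patch_feats, seed_idx=0)
    second_dot = 37                                    # arbitrary second probe location
    print(spread[:, second_dot])                       # rises faster if dots share an object
```

Under this sketch, the earlier the attention mass at the second dot's patch crosses a threshold, the faster a "same object" response would be predicted, which is the kind of reaction-time pattern the abstract describes.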
Related papers
- Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z)
- Towards a Unified Transformer-based Framework for Scene Graph Generation and Human-object Interaction Detection [116.21529970404653]
We introduce SG2HOI+, a unified one-step model based on the Transformer architecture.
Our approach employs two interactive hierarchical Transformers to seamlessly unify the tasks of SGG and HOI detection.
Our approach achieves competitive performance when compared to state-of-the-art HOI methods.
arXiv Detail & Related papers (2023-11-03T07:25:57Z)
- Optical Flow boosts Unsupervised Localization and Segmentation [22.625511865323183]
We propose a new loss term that uses optical flow in unlabeled videos to encourage self-supervised ViT features to move closer together when their corresponding image patches move together.
We use the proposed loss function to finetune vision transformers that were originally trained on static images.
arXiv Detail & Related papers (2023-07-25T16:45:35Z)
- Self-attention in Vision Transformers Performs Perceptual Grouping, Not Attention [11.789983276366986]
We show that attention mechanisms in vision transformers exhibit effects similar to those known in human visual attention.
Our results suggest that self-attention modules group figures in the stimuli based on similarity in visual features such as color.
In a singleton detection experiment, we studied whether these models exhibit effects similar to those of the feed-forward visual salience mechanisms utilized in human visual attention.
arXiv Detail & Related papers (2023-03-02T19:18:11Z)
- Learning Explicit Object-Centric Representations with Vision Transformers [81.38804205212425]
We build on the self-supervision task of masked autoencoding and explore its effectiveness for learning object-centric representations with transformers.
We show that the model efficiently learns to decompose simple scenes as measured by segmentation metrics on several multi-object benchmarks.
arXiv Detail & Related papers (2022-10-25T16:39:49Z)
- Object-Region Video Transformers [100.23380634952083]
We present Object-Region Video Transformers (ORViT), an object-centric approach that extends video transformer layers with object representations.
Our ORViT block consists of two object-level streams: appearance and dynamics.
We show strong improvement in performance across all tasks and datasets considered, demonstrating the value of a model that incorporates object representations into a transformer architecture.
arXiv Detail & Related papers (2021-10-13T17:51:46Z)
- HOTR: End-to-End Human-Object Interaction Detection with Transformers [26.664864824357164]
We present a novel framework, referred to as HOTR, which directly predicts a set of ⟨human, object, interaction⟩ triplets from an image.
Our proposed algorithm achieves state-of-the-art performance on two HOI detection benchmarks with an inference time under 1 ms after object detection.
arXiv Detail & Related papers (2021-04-28T10:10:29Z)
- DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping [72.84991726271024]
We describe an unsupervised method to detect and segment portions of images of live scenes that are seen moving as a coherent whole.
Our method first partitions the motion field by minimizing the mutual information between segments.
It uses the segments to learn object models that can be used for detection in a static image.
arXiv Detail & Related papers (2020-08-16T22:05:13Z)
- Object-Centric Image Generation from Layouts [93.10217725729468]
We develop a layout-to-image-generation method to generate complex scenes with multiple objects.
Our method learns representations of the spatial relationships between objects in the scene, which lead to our model's improved layout-fidelity.
We introduce SceneFID, an object-centric adaptation of the popular Fréchet Inception Distance metric that is better suited for multi-object images.
arXiv Detail & Related papers (2020-03-16T21:40:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.