Important Clues that Facilitate Visual Emergence: Three Psychological
Experiments
- URL: http://arxiv.org/abs/2307.10194v1
- Date: Mon, 10 Jul 2023 13:46:43 GMT
- Title: Important Clues that Facilitate Visual Emergence: Three Psychological
Experiments
- Authors: Jingmeng Li, Hui Wei
- Abstract summary: This study designed three psychological experiments to explore the factors that influence the perception of emerging images. The density of speckles in the local area and the arrangement of some key speckles play a key role in that perception.
- Score: 4.416484585765028
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Visual emergence is the phenomenon in which the visual system obtains a
holistic perception after grouping and reorganizing local signals. The well-known Dalmatian dog picture is often used to illustrate visual emergence. This type of image, which consists of a set of discrete black speckles, is called an emerging image. Not everyone can find the dog in the Dalmatian dog picture, and among
those who can, the time spent varies greatly. Although Gestalt theory
summarizes perceptual organization into several principles, it remains
ambiguous how these principles affect the perception of emerging images. This
study, therefore, designed three psychological experiments to explore the
factors that influence the perception of emerging images. In the first, we
found that the density of speckles in the local area and the arrangement of some key speckles played a key role in the perception of the emerging image. We set parameters in the generation algorithm to characterize these two factors. We then
automatically generated diversified emerging-test images (ETIs) through the
algorithm and verified their effectiveness in two subsequent experiments.
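The abstract names two parameters of its image-generation algorithm, local speckle density and the arrangement of key speckles, but does not describe the implementation. Below is only a rough Python sketch, under assumed parameter names (`density`, `key_spacing`, `speckle_radius`) and a simplified silhouette-based pipeline, of how an ETI-style image could be parameterized by those two factors; it is not the authors' algorithm.

```python
import numpy as np

def generate_eti(silhouette, density=0.02, key_spacing=12, speckle_radius=2, rng=None):
    """Sketch of an emerging-test-image (ETI) generator.

    silhouette  : 2-D bool array, True inside the hidden object.
    density     : fraction of pixels seeded with a background speckle
                  (assumed stand-in for the paper's local-density factor).
    key_spacing : subsampling step for "key" speckles placed on the object
                  contour (assumed stand-in for the key-speckle arrangement).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = silhouette.shape
    img = np.full((h, w), 255, dtype=np.uint8)  # white canvas

    # 1. Background clutter: random speckle seeds at the requested density,
    #    kept outside the object so its interior stays comparatively sparse.
    seeds = (rng.random((h, w)) < density) & ~silhouette

    # 2. Key speckles: take points on the silhouette boundary (pixels whose
    #    4-neighbourhood is not fully inside the object) and subsample them.
    interior = silhouette.copy()
    interior[1:-1, 1:-1] = (silhouette[1:-1, 1:-1] & silhouette[:-2, 1:-1] &
                            silhouette[2:, 1:-1] & silhouette[1:-1, :-2] &
                            silhouette[1:-1, 2:])
    ys, xs = np.nonzero(silhouette & ~interior)
    step = max(1, key_spacing)
    for y, x in zip(ys[::step], xs[::step]):
        seeds[y, x] = True

    # 3. Stamp a small black disc at every seed position.
    yy, xx = np.mgrid[-speckle_radius:speckle_radius + 1,
                      -speckle_radius:speckle_radius + 1]
    disc = (yy ** 2 + xx ** 2) <= speckle_radius ** 2
    r = speckle_radius
    for y, x in zip(*np.nonzero(seeds)):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        img[y0:y1, x0:x1][disc[y0 - y + r:y1 - y + r, x0 - x + r:x1 - x + r]] = 0
    return img

# Example: hide a circular "object" in speckle clutter.
mask = np.add.outer((np.arange(128) - 64) ** 2, (np.arange(128) - 64) ** 2) < 30 ** 2
eti = generate_eti(mask, density=0.02, key_spacing=10)
```

Raising `density` adds background clutter, while lowering `key_spacing` places contour speckles more densely, which loosely mirrors the two factors the first experiment identified.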
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - DISentangled Counterfactual Visual interpretER (DISCOVER) generalizes to natural images [0.0]
We show that DISentangled COunterfactual Visual interpretER (DISCOVER) can be applied to the domain of natural images.
First, DISCOVER visually interpreted the nose size, the muzzle area, and the face size as semantic visual traits that discriminate between facial images of dogs and cats.
Second, DISCOVER visually interpreted the cheeks and jawline, eyebrows and hair, and the eyes, as discriminative facial characteristics.
arXiv Detail & Related papers (2024-06-22T19:05:50Z) - How does the primate brain combine generative and discriminative
computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
We show that vision inverts a generative model through an interrogation of the evidence in a process often thought to involve top-down predictions of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z) - Conditions on detecting tripartite entangled state in psychophysical experiments [0.0]
We examine the possibility of human subjects perceiving multipartite entangled state through psychophysical experiments.
To model the photodetection by humans, we employ the probability of seeing determined for coherently amplified photons in Fock number states.
Our results indicate that detecting bipartite and tripartite entanglement with the human eye is possible for a certain range of additive noise levels and visual thresholds.
arXiv Detail & Related papers (2023-03-13T19:56:52Z) - A domain adaptive deep learning solution for scanpath prediction of
paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which influences several cognitive functions in humans.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z) - Prune and distill: similar reformatting of image information along rat
visual cortex and deep neural networks [61.60177890353585]
Deep convolutional neural networks (CNNs) have been shown to provide excellent models for their functional analogue in the brain, the ventral stream of visual cortex.
Here we consider some prominent statistical patterns that are known to exist in the internal representations of either CNNs or the visual cortex.
We show that CNNs and visual cortex share a similarly tight relationship between dimensionality expansion/reduction of object representations and reformatting of image information.
arXiv Detail & Related papers (2022-05-27T08:06:40Z) - BARC: Learning to Regress 3D Dog Shape from Images by Exploiting Breed
Information [66.77206206569802]
Our goal is to recover the 3D shape and pose of dogs from a single image.
Recent work has proposed to directly regress the SMAL animal model, with additional limb scale parameters, from images.
Our method, called BARC (Breed-Augmented Regression using Classification), goes beyond prior work in several important ways.
This work shows that a-priori information about genetic similarity can help to compensate for the lack of 3D training data.
arXiv Detail & Related papers (2022-03-29T13:16:06Z) - Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares its goal with adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify them (a generic sketch of such a targeted perturbation appears after this list).
arXiv Detail & Related papers (2020-11-11T08:30:54Z) - Disentangle Perceptual Learning through Online Contrastive Learning [16.534353501066203]
Pursuing realistic results according to human visual perception is the central concern in image transformation tasks.
In this paper, we argue that, among the feature representations from the pre-trained classification network, only limited dimensions are related to human visual perception.
Under such an assumption, we try to disentangle the perception-relevant dimensions from the representation through our proposed online contrastive learning.
arXiv Detail & Related papers (2020-06-24T06:48:38Z) - Visual Chirality [51.685596116645776]
We investigate how statistics of visual data are changed by reflection.
Our work has implications for data augmentation, self-supervised learning, and image forensics.
arXiv Detail & Related papers (2020-06-16T20:48:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences arising from its use.