Visual stream connectivity predicts assessments of image quality
- URL: http://arxiv.org/abs/2008.06939v1
- Date: Sun, 16 Aug 2020 15:38:17 GMT
- Title: Visual stream connectivity predicts assessments of image quality
- Authors: Elijah Bowen, Antonio Rodriguez, Damian Sowinski, Richard Granger
- Abstract summary: We derive a novel formalization of the psychophysics of similarity, showing the differential geometry that provides accurate and explanatory accounts of perceptual similarity judgments.
Predictions are further improved via simple regression on human behavioral reports, which in turn are used to construct more elaborate hypothesized neural connectivity patterns.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Some biological mechanisms of early vision are comparatively well understood,
but they have yet to be evaluated for their ability to accurately predict and
explain human judgments of image similarity. From well-studied simple
connectivity patterns in early vision, we derive a novel formalization of the
psychophysics of similarity, showing the differential geometry that provides
accurate and explanatory accounts of perceptual similarity judgments. These
predictions then are further improved via simple regression on human behavioral
reports, which in turn are used to construct more elaborate hypothesized neural
connectivity patterns. Both approaches outperform well-established measures of
perceived image fidelity from the literature, while also providing
explanatory principles of similarity perception.
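The evaluation loop described above (score image pairs with a perceptually motivated measure, optionally refit that measure to human behavioral reports by simple regression, then compare its agreement with human judgments against a standard fidelity metric) can be sketched in a few lines. The sketch below is not the paper's implementation: the candidate_features components are hypothetical stand-ins for the derived geometric measure, SSIM serves as the standard fidelity baseline, and agreement is measured by Spearman rank correlation with human mean-opinion-style scores.

import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity as ssim

def candidate_features(ref, img):
    # Hypothetical stand-ins for components of a derived geometric measure:
    # mean absolute difference, gradient-magnitude difference, variance difference.
    gy_r, gx_r = np.gradient(ref)
    gy_i, gx_i = np.gradient(img)
    return np.array([
        np.mean(np.abs(ref - img)),
        np.mean(np.abs(np.hypot(gx_r, gy_r) - np.hypot(gx_i, gy_i))),
        abs(float(ref.var()) - float(img.var())),
    ])

def evaluate(refs, tests, human_scores):
    # refs, tests: lists of grayscale float images in [0, 1];
    # human_scores: one behavioral report per reference/test pair.
    X = np.stack([candidate_features(r, t) for r, t in zip(refs, tests)])
    d_ssim = np.array([ssim(r, t, data_range=1.0) for r, t in zip(refs, tests)])
    y = np.asarray(human_scores, dtype=float)

    # "Further improved via simple regression on human behavioral reports":
    # least-squares weights combining the candidate components (fit in-sample here, for brevity).
    A = np.column_stack([X, np.ones(len(y))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    predicted = A @ w

    rho_candidate, _ = spearmanr(predicted, y)
    rho_ssim, _ = spearmanr(d_ssim, y)
    return {"regressed_candidate_vs_human": rho_candidate, "ssim_vs_human": rho_ssim}

A faithful evaluation would fit the regression on held-out splits of the behavioral data rather than in-sample; the in-sample fit above only illustrates the pipeline's shape.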
Related papers
- Conjuring Semantic Similarity [59.18714889874088]
The semantic similarity between two textual expressions measures the distance between their latent 'meaning'.
We propose a novel approach whereby the semantic similarity among textual expressions is based not on other expressions they can be rephrased as, but rather based on the imagery they evoke.
Our method contributes a novel perspective on semantic similarity that not only aligns with human-annotated scores, but also opens up new avenues for the evaluation of text-conditioned generative models.
arXiv Detail & Related papers (2024-10-21T18:51:34Z) - When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - Connecting Concept Convexity and Human-Machine Alignment in Deep Neural Networks [3.001674556825579]
Understanding how neural networks align with human cognitive processes is a crucial step toward developing more interpretable and reliable AI systems.
We identify a correlation between these two dimensions that reflects the similarity relations humans form in cognitive tasks.
This presents a first step toward understanding the relationship between convexity and human-machine alignment.
arXiv Detail & Related papers (2024-09-10T09:32:16Z) - Disentangling the Link Between Image Statistics and Human Perception [47.912998421927085]
In the 1950s, Barlow and Attneave hypothesised a link between biological vision and information maximisation.
We show how probability-related factors can be combined to predict human perception via the sensitivity of state-of-the-art subjective image quality metrics.
arXiv Detail & Related papers (2023-03-17T10:38:27Z) - An Inter-observer consistent deep adversarial training for visual
scanpath prediction [66.46953851227454]
We propose an inter-observer consistent adversarial training approach for scanpath prediction through a lightweight deep neural network.
We show the competitiveness of our approach in regard to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-14T13:22:29Z) - Perceptual Attacks of No-Reference Image Quality Models with
Human-in-the-Loop [113.75573175709573]
We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
arXiv Detail & Related papers (2022-10-03T13:47:16Z) - Zero-shot visual reasoning through probabilistic analogical mapping [2.049767929976436]
We present visiPAM (visual Probabilistic Analogical Mapping), a model of visual reasoning that synthesizes two approaches.
We show that without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task.
In addition, visiPAM closely matches the pattern of human performance on a novel task involving mapping of 3D objects across disparate categories.
arXiv Detail & Related papers (2022-09-29T20:29:26Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimum computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Predicting Human Similarity Judgments Using Large Language Models [13.33450619901885]
We propose an efficient procedure for predicting similarity judgments based on text descriptions.
The number of descriptions required grows only linearly with the number of stimuli, drastically reducing the amount of data required.
We test this procedure on six datasets of naturalistic images and show that our models outperform previous approaches based on visual information.
arXiv Detail & Related papers (2022-02-09T21:09:25Z) - Probabilistic Analogical Mapping with Semantic Relation Networks [2.084078990567849]
We present a new computational model of analogical mapping, based on semantic relation networks.
We show that the model accounts for a broad range of phenomena involving analogical mapping by both adults and children.
arXiv Detail & Related papers (2021-03-30T22:14:13Z) - Transforming Neural Network Visual Representations to Predict Human
Judgments of Similarity [12.5719993304358]
We investigate how to bring machine visual representations into better alignment with human representations.
We find that with appropriate linear transformations of deep embeddings, we can improve prediction of human binary choice.
arXiv Detail & Related papers (2020-10-13T16:09:47Z)
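As a concrete reading of the last entry above (improving prediction of human binary similarity choices via linear transformations of deep embeddings), the sketch below fits a diagonal reweighting of embedding dimensions with a plain logistic regression on paired choices. The data layout and the logistic-regression fit are illustrative assumptions, not that paper's procedure.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_embedding_reweighting(ref, opt_a, opt_b, chose_a):
    # ref, opt_a, opt_b: (n_trials, d) embeddings of the reference image and the two
    # comparison images; chose_a: (n_trials,) 0/1 human choices ("A is more similar").
    # Learns per-dimension weights w such that sim_w(x, y) = sum_k w_k * x_k * y_k.
    features = ref * opt_a - ref * opt_b  # per-dimension evidence for choosing A over B
    clf = LogisticRegression(fit_intercept=False, max_iter=1000)
    clf.fit(features, chose_a)
    return clf.coef_.ravel()

def predict_choice_probability(weights, ref, opt_a, opt_b):
    # Probability that a human would pick option A as more similar to the reference.
    logits = (ref * opt_a - ref * opt_b) @ weights
    return 1.0 / (1.0 + np.exp(-logits))

A diagonal reweighting is the simplest linear transform of the embedding space; a full linear map can be fit the same way at the cost of many more parameters.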
This list is automatically generated from the titles and abstracts of the papers on this site.