DreamSim: Learning New Dimensions of Human Visual Similarity using
Synthetic Data
- URL: http://arxiv.org/abs/2306.09344v3
- Date: Fri, 8 Dec 2023 21:02:07 GMT
- Title: DreamSim: Learning New Dimensions of Human Visual Similarity using
Synthetic Data
- Authors: Stephanie Fu, Netanel Tamir, Shobhita Sundaram, Lucy Chai, Richard
Zhang, Tali Dekel, Phillip Isola
- Abstract summary: Current perceptual similarity metrics operate at the level of pixels and patches.
These metrics compare images in terms of their low-level colors and textures, but fail to capture mid-level similarities and differences in image layout, object pose, and semantic content.
We develop a perceptual metric that assesses images holistically.
- Score: 43.247597420676044
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Current perceptual similarity metrics operate at the level of pixels and
patches. These metrics compare images in terms of their low-level colors and
textures, but fail to capture mid-level similarities and differences in image
layout, object pose, and semantic content. In this paper, we develop a
perceptual metric that assesses images holistically. Our first step is to
collect a new dataset of human similarity judgments over image pairs that are
alike in diverse ways. Critical to this dataset is that judgments are nearly
automatic and shared by all observers. To achieve this we use recent
text-to-image models to create synthetic pairs that are perturbed along various
dimensions. We observe that popular perceptual metrics fall short of explaining
our new data, and we introduce a new metric, DreamSim, tuned to better align
with human perception. We analyze how our metric is affected by different
visual attributes, and find that it focuses heavily on foreground objects and
semantic content while also being sensitive to color and layout. Notably,
despite being trained on synthetic data, our metric generalizes to real images,
giving strong results on retrieval and reconstruction tasks. Furthermore, our
metric outperforms both prior learned metrics and recent large vision models on
these tasks.
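To make the idea concrete, below is a minimal sketch (not the authors' released code or training procedure) of what a holistic, embedding-based perceptual metric computes: each image is mapped to an embedding by a pretrained vision backbone, the cosine distance between embeddings serves as the similarity score, and that score can be checked against two-alternative-forced-choice (2AFC) human judgments. The torchvision ViT-B/16 backbone and preprocessing are illustrative assumptions, not the DreamSim configuration.

```python
# Minimal sketch of an embedding-based perceptual metric (assumed backbone:
# torchvision ViT-B/16; this is NOT the DreamSim model or training setup).
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained ViT with the classification head removed, so a forward pass
# returns one embedding vector per image.
weights = models.ViT_B_16_Weights.IMAGENET1K_V1
backbone = models.vit_b_16(weights=weights).eval()
backbone.heads = torch.nn.Identity()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Map an image file to a unit-norm embedding vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(x), dim=-1)

def perceptual_distance(path_a: str, path_b: str) -> float:
    """Cosine distance between embeddings; lower means more similar."""
    return 1.0 - float(embed(path_a) @ embed(path_b).T)

# 2AFC check: given a reference and two candidates, the metric agrees with a
# human judge when it assigns the smaller distance to the image the human
# chose as more similar to the reference.
def predicts_human_choice(ref: str, img_a: str, img_b: str,
                          human_chose_a: bool) -> bool:
    closer_to_a = perceptual_distance(ref, img_a) < perceptual_distance(ref, img_b)
    return closer_to_a == human_chose_a
```

The released DreamSim metric goes further by tuning on the collected human judgments so that its distances better align with perception; the sketch above only shows the general pattern of scoring image pairs with an off-the-shelf embedding.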
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
- Stellar: Systematic Evaluation of Human-Centric Personalized Text-to-Image Methods [52.806258774051216]
We focus on text-to-image systems that input a single image of an individual and ground the generation process along with text describing the desired visual context.
We introduce a standardized dataset (Stellar) that contains personalized prompts coupled with images of individuals, is an order of magnitude larger than existing relevant datasets, and provides rich semantic ground-truth annotations.
We derive a simple yet efficient, personalized text-to-image baseline that does not require test-time fine-tuning for each subject and sets a new SoTA both quantitatively and in human trials.
arXiv Detail & Related papers (2023-12-11T04:47:39Z)
- Privacy Assessment on Reconstructed Images: Are Existing Evaluation Metrics Faithful to Human Perception? [86.58989831070426]
We study the faithfulness of hand-crafted metrics to human perception of privacy information from reconstructed images.
We propose a learning-based measure called SemSim to evaluate the Semantic Similarity between the original and reconstructed images.
arXiv Detail & Related papers (2023-09-22T17:58:04Z)
- Substance or Style: What Does Your Image Embedding Know? [55.676463077772866]
Image foundation models have primarily been evaluated for semantic content.
We measure the visual content of embeddings along many axes, including image style, quality, and a range of natural and artificial transformations.
We find that image-text models (CLIP and ALIGN) are better at recognizing new examples of style transfer than masking-based models (CAN and MAE).
arXiv Detail & Related papers (2023-07-10T22:40:10Z)
- Shift-tolerant Perceptual Similarity Metric [5.326626090397465]
Existing perceptual similarity metrics assume an image and its reference are well aligned.
This paper studies the effect of small misalignment, specifically a small shift between the input and reference image, on existing metrics.
We develop a new deep neural network-based perceptual similarity metric.
arXiv Detail & Related papers (2022-07-27T17:55:04Z)
- Rarity Score: A New Metric to Evaluate the Uncommonness of Synthesized Images [32.94581354719927]
We propose a new evaluation metric, called 'rarity score', to measure the individual rarity of each image.
Code will be publicly available online for the research community.
arXiv Detail & Related papers (2022-06-17T05:16:16Z)
- Learning an Adaptation Function to Assess Image Visual Similarities [0.0]
We focus here on the specific task of learning visual image similarities when analogy matters.
We propose to compare different supervised, semi-supervised and self-supervised networks, pre-trained on distinct scales and contents datasets.
Our experiments conducted on the Totally Looks Like image dataset highlight the benefit of our method, increasing the best model's retrieval score @1 by 2.25x.
arXiv Detail & Related papers (2022-06-03T07:15:00Z)
- What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms MoCo, a visual-only state-of-the-art method.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
- Image Quality Assessment: Unifying Structure and Texture Similarity [38.05659069533254]
We develop the first full-reference image quality model with explicit tolerance to texture resampling.
Using a convolutional neural network, we construct an injective and differentiable function that transforms images to overcomplete representations.
arXiv Detail & Related papers (2020-04-16T16:11:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.