Learning a Deep Color Difference Metric for Photographic Images
- URL: http://arxiv.org/abs/2303.14964v1
- Date: Mon, 27 Mar 2023 07:54:09 GMT
- Title: Learning a Deep Color Difference Metric for Photographic Images
- Authors: Haoyu Chen, Zhihua Wang, Yang Yang, Qilin Sun, Kede Ma
- Abstract summary: We learn a deep CD metric for photographic images with four desirable properties.
It computes accurate CDs between photographic images that differ mainly in color appearance.
We show that all these properties can be satisfied at once by learning a multi-scale autoregressive normalizing flow for feature transform.
- Score: 36.66506502182684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most well-established and widely used color difference (CD) metrics are
handcrafted and subject-calibrated against uniformly colored patches, which do
not generalize well to photographic images characterized by natural scene
complexities. Constructing CD formulae for photographic images is still an
active research topic in imaging/illumination, vision science, and color
science communities. In this paper, we aim to learn a deep CD metric for
photographic images with four desirable properties. First, it well aligns with
the observations in vision science that color and form are linked inextricably
in visual cortical processing. Second, it is a proper metric in the
mathematical sense. Third, it computes accurate CDs between photographic
images, differing mainly in color appearances. Fourth, it is robust to mild
geometric distortions (e.g., translation or due to parallax), which are often
present in photographic images of the same scene captured by different digital
cameras. We show that all these properties can be satisfied at once by learning
a multi-scale autoregressive normalizing flow for feature transform, followed
by the Euclidean distance which is linearly proportional to the human
perceptual CD. Quantitative and qualitative experiments on the large-scale SPCD
dataset demonstrate the promise of the learned CD metric.
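The recipe in the abstract, a shared feature transform followed by a plain Euclidean distance, can be pictured with the toy PyTorch sketch below. The small convolutional encoder, the class name ToyCDMetric, and all sizes are illustrative assumptions standing in for the paper's multi-scale autoregressive normalizing flow.

```python
import torch
import torch.nn as nn

class ToyCDMetric(nn.Module):
    """Illustrative stand-in for the paper's recipe: push both images through a
    shared feature transform, then take the Euclidean distance in feature space.
    The real method uses a multi-scale autoregressive normalizing flow; the small
    convolutional encoder here is purely for demonstration."""

    def __init__(self, channels=16):
        super().__init__()
        self.transform = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, img_a, img_b):
        # A shared transform keeps the symmetry required of a proper metric.
        feat_a = self.transform(img_a)
        feat_b = self.transform(img_b)
        # Per-pixel Euclidean distance in feature space, averaged spatially;
        # this is the quantity assumed to track the human perceptual CD.
        return torch.sqrt(((feat_a - feat_b) ** 2).sum(dim=1)).mean(dim=(1, 2))

metric = ToyCDMetric()
a = torch.rand(1, 3, 64, 64)   # two photographs differing mainly in color
b = torch.rand(1, 3, 64, 64)
print(metric(a, b))            # small positive value
print(metric(a, a))            # exactly zero
```

In the paper, the invertibility of the normalizing flow is what turns this construction into a proper metric (distinct images cannot collapse onto the same features); the toy encoder above gives no such guarantee.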
Related papers
- A Nerf-Based Color Consistency Method for Remote Sensing Images [0.5735035463793009]
We propose a NeRF-based method of color consistency for multi-view images, which weaves image features together using implicit expressions and then re-illuminates the feature space to generate a fusion image with a new perspective.
Experimental results show that the synthesized image generated by our method has an excellent visual effect and smooth color transitions at the edges.
arXiv Detail & Related papers (2024-11-08T13:26:07Z)
- Multiscale Sliced Wasserstein Distances as Perceptual Color Difference Measures [34.8728594246521]
We describe a perceptual CD measure based on the multiscale sliced Wasserstein distance.
Experimental results indicate that our CD measure performs favorably in assessing CDs in photographic images.
Our measure functions as a metric in the mathematical sense, and we show its promise as a loss function for image and video color transfer tasks.
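For intuition, here is a minimal numpy sketch of a single-scale sliced Wasserstein distance between the color distributions of two equally sized images. The multiscale measure in the paper aggregates this idea across scales; the function name, projection count, and plain averaging below are assumptions for illustration only.

```python
import numpy as np

def sliced_wasserstein_color(img_a, img_b, n_projections=64, seed=0):
    """Single-scale sketch: treat each image's pixels as a point cloud in RGB
    space, project onto random directions, and average the 1-D Wasserstein
    distances between the sorted projections. Both images must have the same
    number of pixels."""
    rng = np.random.default_rng(seed)
    pts_a = img_a.reshape(-1, 3).astype(np.float64)
    pts_b = img_b.reshape(-1, 3).astype(np.float64)
    directions = rng.normal(size=(n_projections, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    total = 0.0
    for d in directions:
        proj_a = np.sort(pts_a @ d)
        proj_b = np.sort(pts_b @ d)
        total += np.mean(np.abs(proj_a - proj_b))  # 1-D Wasserstein-1 distance
    return total / n_projections

img_a = np.random.rand(32, 32, 3)
img_b = np.clip(img_a + 0.1, 0.0, 1.0)  # mild global color shift
print(sliced_wasserstein_color(img_a, img_b))
```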
arXiv Detail & Related papers (2024-07-14T12:48:16Z)
- Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z)
- 4D LUT: Learnable Context-Aware 4D Lookup Table for Image Enhancement [50.49396123016185]
We propose a novel learnable context-aware 4-dimensional lookup table (4D LUT).
It achieves content-dependent enhancement of different contents in each image by adaptively learning the photo context.
Compared with a traditional 3D LUT, i.e., an RGB-to-RGB mapping, the 4D LUT enables finer control of color transformations for pixels with different content in each image.
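As a rough picture of what a lookup indexed by (R, G, B, context) means, the nearest-neighbour numpy sketch below maps each pixel through a 4D table. The real 4D LUT is learned end to end, interpolated smoothly, and driven by a learned context map, so apply_4d_lut and the identity table here are hypothetical stand-ins.

```python
import numpy as np

def apply_4d_lut(image, context, lut):
    """Nearest-neighbour stand-in for a context-aware 4D LUT lookup.

    image:   (H, W, 3) floats in [0, 1]
    context: (H, W) per-pixel scalar in [0, 1], assumed to come from a
             small network in the real method
    lut:     (S, S, S, S, 3) table indexed by (R, G, B, context)
    """
    size = lut.shape[0]

    def idx(x):
        return np.clip(np.round(x * (size - 1)).astype(int), 0, size - 1)

    r, g, b = idx(image[..., 0]), idx(image[..., 1]), idx(image[..., 2])
    return lut[r, g, b, idx(context)]

# Identity-like table: output equals input RGB regardless of context.
size = 9
grid = np.linspace(0.0, 1.0, size)
r, g, b, _ = np.meshgrid(grid, grid, grid, grid, indexing="ij")
lut = np.stack([r, g, b], axis=-1)

image = np.random.rand(16, 16, 3)
context = np.random.rand(16, 16)
out = apply_4d_lut(image, context, lut)
print(np.abs(out - image).max())  # small: bounded by the grid quantization step
```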
arXiv Detail & Related papers (2022-09-05T04:00:57Z)
- Deep Metric Color Embeddings for Splicing Localization in Severely Degraded Images [10.091921099426294]
We explore an alternative approach to splicing detection, which is potentially better suited for images in the wild.
We learn a deep metric space that is on one hand sensitive to illumination color and camera white-point estimation, but on the other hand insensitive to variations in object color.
In our evaluation, we show that the proposed embedding space outperforms the state of the art on images that have been subject to strong compression and downsampling.
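The "sensitive to illuminant, insensitive to object color" objective can be sketched with a generic triplet-style loss in which positives share an illuminant and negatives do not. The encoder, margin, and exact loss form below are assumptions for illustration, not the paper's training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Generic triplet-style sketch: pull together patches taken under the same
# illuminant (even if object colors differ) and push apart patches taken
# under different illuminants.
embed = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 8),
)

def illuminant_triplet_loss(anchor, same_illuminant, other_illuminant, margin=0.2):
    fa = F.normalize(embed(anchor), dim=1)
    fp = F.normalize(embed(same_illuminant), dim=1)   # positive: same illuminant
    fn = F.normalize(embed(other_illuminant), dim=1)  # negative: different illuminant
    d_pos = (fa - fp).pow(2).sum(dim=1)
    d_neg = (fa - fn).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()

loss = illuminant_triplet_loss(
    torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
)
print(loss)
```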
arXiv Detail & Related papers (2022-06-21T21:28:40Z)
- Measuring Perceptual Color Differences of Smartphone Photographs [55.9434603885868]
We put together the largest image dataset for perceptual CD assessment.
We make one of the first attempts to construct an end-to-end learnable CD formula based on a lightweight neural network.
arXiv Detail & Related papers (2022-05-26T16:57:04Z)
- Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose cross-sensor self-supervised training for the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
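A minimal sketch of sensor-independent illuminant sampling for self-supervision might look like the following; the gain range, green-channel normalization, and function names are assumptions for illustration. The tinted image and the sampled illuminant form one training pair.

```python
import numpy as np

def sample_artificial_illuminant(rng):
    """Sensor-agnostic illuminant: random RGB gains normalized so that the
    green channel is 1. The sampling range is an assumption for illustration."""
    gains = rng.uniform(0.4, 2.5, size=3)
    return gains / gains[1]

def make_training_pair(canonical_image, rng):
    # Tint a (notionally) white-balanced image with the sampled illuminant;
    # a network would then be trained to recover the illuminant from the tint.
    illuminant = sample_artificial_illuminant(rng)
    tinted = np.clip(canonical_image * illuminant, 0.0, 1.0)
    return tinted, illuminant

rng = np.random.default_rng(0)
canonical = np.random.rand(64, 64, 3)
tinted, target = make_training_pair(canonical, rng)
print(target, tinted.shape)
```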
arXiv Detail & Related papers (2022-03-21T15:45:35Z)
- Colour alignment for relative colour constancy via non-standard references [11.92389176996629]
Relative colour constancy is an essential requirement for many scientific imaging applications.
We propose a colour alignment model that considers the camera image formation as a black-box.
It formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching.
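A toy numpy sketch of that three-step idea, assuming a per-channel power-law camera response and least-squares colour matching (both simplifying assumptions, not the paper's black-box calibration procedure):

```python
import numpy as np

def linearise(rgb, gamma):
    """Steps 1-2: invert an assumed per-channel power-law camera response."""
    return np.power(rgb, 1.0 / np.asarray(gamma))

def colour_matching_matrix(source_lin, target_lin):
    """Step 3: least-squares 3x3 matrix mapping source colours onto the target."""
    matrix, *_ = np.linalg.lstsq(source_lin, target_lin, rcond=None)
    return matrix

# Corresponding colour patches seen by two cameras (N x 3 each).
rng = np.random.default_rng(1)
truth = rng.uniform(0.05, 1.0, size=(24, 3))
camera_a = truth ** 0.45                              # camera A response
camera_b = (truth @ np.diag([1.1, 1.0, 0.9])) ** 0.5  # camera B: gains + response

lin_a = linearise(camera_a, gamma=[0.45, 0.45, 0.45])
lin_b = linearise(camera_b, gamma=[0.5, 0.5, 0.5])
M = colour_matching_matrix(lin_a, lin_b)
print(np.abs(lin_a @ M - lin_b).max())                # near zero after alignment
```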
arXiv Detail & Related papers (2021-12-30T15:58:55Z)
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.