Colour alignment for relative colour constancy via non-standard
references
- URL: http://arxiv.org/abs/2112.15106v1
- Date: Thu, 30 Dec 2021 15:58:55 GMT
- Title: Colour alignment for relative colour constancy via non-standard
references
- Authors: Yunfeng Zhao, Stuart Ferguson, Huiyu Zhou, Chris Elliott and Karen
Rafferty
- Abstract summary: Relative colour constancy is an essential requirement for many scientific imaging applications.
We propose a colour alignment model that considers the camera image formation as a black-box.
It formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching.
- Score: 11.92389176996629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relative colour constancy is an essential requirement for many scientific
imaging applications. However, most digital cameras differ in their image
formations and native sensor output is usually inaccessible, e.g., in
smartphone camera applications. This makes it hard to achieve consistent colour
assessment across a range of devices, which undermines the performance of
computer vision algorithms. To resolve this issue, we propose a colour
alignment model that considers the camera image formation as a black-box and
formulates colour alignment as a three-step process: camera response
calibration, response linearisation, and colour matching. The proposed model
works with non-standard colour references, i.e., colour patches without knowing
the true colour values, by utilising a novel balance-of-linear-distances
feature. It is equivalent to determining the camera parameters through an
unsupervised process. It also works with a minimum number of corresponding
colour patches across the images to be colour aligned, making the processing widely
applicable. Two challenging image datasets collected by multiple cameras under
various illumination and exposure conditions were used to evaluate the model.
Performance benchmarks demonstrated that our model achieved superior
performance compared to other popular and state-of-the-art methods.
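To make the three-step pipeline concrete, below is a minimal sketch in Python/NumPy. It assumes a simple power-law (gamma) camera response and replaces the paper's balance-of-linear-distances feature with a generic stand-in criterion: the source camera's response parameter is chosen so that a single 3x3 linear transform best maps its linearised patches onto those of a reference camera. The function names, the scalar-gamma response model, and the fixed reference gamma are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def linearise(rgb, gamma):
    """Invert an assumed power-law camera response (illustrative model only)."""
    return np.clip(rgb, 1e-6, 1.0) ** gamma

def fit_colour_matching(src_lin, ref_lin):
    """Least-squares 3x3 matrix mapping linearised source patches to reference ones."""
    matrix, _, _, _ = np.linalg.lstsq(src_lin, ref_lin, rcond=None)
    return matrix

def alignment_residual(gamma, src, ref_lin):
    """Residual of the best linear match for a candidate source gamma."""
    src_lin = linearise(src, gamma)
    matrix = fit_colour_matching(src_lin, ref_lin)
    return np.mean((src_lin @ matrix - ref_lin) ** 2)

def align_colours(src_patches, ref_patches, ref_gamma=2.2):
    """Align source-camera patch colours to a reference camera.

    src_patches, ref_patches: (N, 3) corresponding patch colours in [0, 1].
    The true (standard) colour values of the patches are never needed.
    """
    ref_lin = linearise(ref_patches, ref_gamma)
    # Step 1: camera response calibration -- pick the gamma under which a single
    # linear transform explains the correspondences best (an unsupervised,
    # illustrative stand-in for the balance-of-linear-distances feature).
    result = minimize_scalar(alignment_residual, bounds=(1.0, 4.0),
                             method="bounded", args=(src_patches, ref_lin))
    gamma = result.x
    # Step 2: response linearisation with the calibrated gamma.
    src_lin = linearise(src_patches, gamma)
    # Step 3: colour matching via a least-squares 3x3 transform.
    matrix = fit_colour_matching(src_lin, ref_lin)
    return gamma, matrix
```

In this sketch a handful of corresponding patches (at least three, and preferably more, for a stable 3x3 fit) is enough to run all three steps, which mirrors the paper's claim of working with a minimum number of corresponding colour patches.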
Related papers
- Multiscale Sliced Wasserstein Distances as Perceptual Color Difference Measures [34.8728594246521]
We describe a perceptual CD measure based on the multiscale sliced Wasserstein distance.
Experimental results indicate that our CD measure performs favorably in assessing CDs in photographic images.
Our measure functions as a metric in the mathematical sense, and we show its promise as a loss function for image and video color transfer tasks (a minimal sliced Wasserstein sketch is given after this list).
arXiv Detail & Related papers (2024-07-14T12:48:16Z)
- Control Color: Multimodal Diffusion-based Interactive Image Colorization [81.68817300796644]
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our approach also offers an efficient quantisation scheme that effectively compresses image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- Cross-Camera Deep Colorization [10.254243409261898]
We propose an end-to-end convolutional neural network to align and fuse images from a color-plus-mono dual-camera system.
Our method consistently achieves substantial improvements, i.e., around a 10 dB PSNR gain (see the PSNR note after this list).
arXiv Detail & Related papers (2022-08-26T11:02:14Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- Deep Metric Color Embeddings for Splicing Localization in Severely Degraded Images [10.091921099426294]
We explore an alternative approach to splicing detection, which is potentially better suited for images in the wild.
We learn a deep metric space that is on one hand sensitive to illumination color and camera white-point estimation, but on the other hand insensitive to variations in object color.
In our evaluation, we show that the proposed embedding space outperforms the state of the art on images that have been subject to strong compression and downsampling.
arXiv Detail & Related papers (2022-06-21T21:28:40Z)
- Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose a cross-sensor self-supervised scheme to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z)
- Transform your Smartphone into a DSLR Camera: Learning the ISP in the Wild [159.71025525493354]
We propose a trainable Image Signal Processing framework that produces DSLR quality images given RAW images captured by a smartphone.
To address the color misalignments between training image pairs, we employ a color-conditional ISP network and optimize a novel parametric color mapping between each input RAW and reference DSLR image.
arXiv Detail & Related papers (2022-03-20T20:13:59Z)
- Image color correction, enhancement, and editing [14.453616946103132]
We study the color correction problem from the standpoint of the camera's image signal processor (ISP).
In particular, we propose auto image recapture methods to generate different realistic versions of the same camera-rendered image with new colors.
arXiv Detail & Related papers (2021-07-28T01:14:12Z)
- Semi-Supervised Raw-to-Raw Mapping [19.783856963405754]
The raw-RGB colors of a camera sensor vary due to the spectral sensitivity differences across different sensor makes and models.
We present a semi-supervised raw-to-raw mapping method trained on a small set of paired images alongside an unpaired set of images captured by each camera device.
arXiv Detail & Related papers (2021-06-25T21:01:45Z)
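As noted in the first related-papers entry above, a minimal sketch of the basic (single-scale) sliced Wasserstein distance between two sets of colour samples is given below; the multiscale pooling and any perceptual colour-space conversion used by that paper are omitted, and equal sample counts are assumed for simplicity.

```python
import numpy as np

def sliced_wasserstein(colours_a, colours_b, n_projections=64, seed=None):
    """Single-scale sliced Wasserstein-1 distance between two (N, 3) colour sample sets."""
    rng = np.random.default_rng(seed)
    # Random unit directions in colour space.
    directions = rng.normal(size=(n_projections, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Project both sample sets onto each direction and sort; for equal-sized
    # samples, the 1-D Wasserstein-1 distance is the mean absolute difference
    # of the sorted projections.
    proj_a = np.sort(colours_a @ directions.T, axis=0)
    proj_b = np.sort(colours_b @ directions.T, axis=0)
    return float(np.mean(np.abs(proj_a - proj_b)))
```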
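For context on the roughly 10 dB PSNR gain quoted for the cross-camera colorization entry above, the standard PSNR definition is sketched below (textbook formula, not code from that paper); a 10 dB gain corresponds to roughly a tenfold reduction in mean squared error.

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```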