4D LUT: Learnable Context-Aware 4D Lookup Table for Image Enhancement
- URL: http://arxiv.org/abs/2209.01749v1
- Date: Mon, 5 Sep 2022 04:00:57 GMT
- Title: 4D LUT: Learnable Context-Aware 4D Lookup Table for Image Enhancement
- Authors: Chengxu Liu, Huan Yang, Jianlong Fu, Xueming Qian
- Abstract summary: We propose a novel learnable context-aware 4-dimensional lookup table (4D LUT)
It achieves content-dependent enhancement of different contents in each image via adaptively learning of photo context.
Compared with traditional 3D LUT, i.e., RGB mapping to RGB, 4D LUT enables finer control of color transformations for pixels with different content in each image.
- Score: 50.49396123016185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image enhancement aims at improving the aesthetic visual quality of photos by
retouching the color and tone, and is an essential technology for professional
digital photography. In recent years, deep learning-based image enhancement
algorithms have achieved promising performance and attracted increasing
popularity. However, typical approaches construct a single uniform enhancer for
the color transformation of all pixels, ignoring the differences between pixels
of different content (e.g., sky, ocean) that matter in photographs, which leads
to unsatisfactory results. In this paper, we propose a novel
learnable context-aware 4-dimensional lookup table (4D LUT), which achieves
content-dependent enhancement of different contents in each image by
adaptively learning the photo context. In particular, we first introduce a
lightweight context encoder and a parameter encoder to learn a context map for
the pixel-level category and a group of image-adaptive coefficients,
respectively. Then, the context-aware 4D LUT is generated by integrating
multiple basis 4D LUTs via the coefficients. Finally, the enhanced image can be
obtained by feeding the source image and context map into the fused context-aware
4D LUT via quadrilinear interpolation. Compared with the traditional 3D LUT, i.e.,
an RGB-to-RGB mapping commonly used in camera imaging pipelines and editing
tools, the 4D LUT, i.e., an RGBC (RGB + Context)-to-RGB mapping, enables finer control of
color transformations for pixels with different content in each image, even
though they have the same RGB values. Experimental results demonstrate that our
method outperforms other state-of-the-art methods in widely-used benchmarks.
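To make the pipeline above concrete, below is a minimal NumPy sketch of its two final steps: blending the basis 4D LUTs with the image-adaptive coefficients, and enhancing each pixel by quadrilinear interpolation at its RGBC coordinate. The bin size S, array shapes, and function names are illustrative assumptions rather than the authors' implementation, and the context and parameter encoders are treated as given.

```python
# Hypothetical sketch of 4D LUT fusion and quadrilinear lookup; shapes and
# names are assumptions, not the authors' code.
import numpy as np

def fuse_basis_luts(basis_luts, coeffs):
    """Blend multiple basis 4D LUTs into one context-aware 4D LUT.

    basis_luts: (N, S, S, S, S, 3) array of N learned basis LUTs over (R, G, B, C).
    coeffs:     (N,) image-adaptive coefficients from the parameter encoder.
    Returns a single (S, S, S, S, 3) LUT.
    """
    return np.tensordot(coeffs, basis_luts, axes=1)

def quadrilinear_lookup(lut, rgb, context):
    """Enhance an image by quadrilinear interpolation into a 4D LUT.

    lut:     (S, S, S, S, 3) fused LUT.
    rgb:     (H, W, 3) source image with values in [0, 1].
    context: (H, W) pixel-level context map with values in [0, 1].
    Returns the (H, W, 3) enhanced image.
    """
    S = lut.shape[0]
    # Continuous coordinate of each pixel in (R, G, B, C) LUT space.
    coords = np.concatenate([rgb, context[..., None]], axis=-1) * (S - 1)
    lo = np.clip(np.floor(coords).astype(int), 0, S - 2)  # lower bin indices
    frac = coords - lo                                     # fractional offsets

    out = np.zeros_like(rgb)
    # Weighted sum over the 16 corners of the enclosing 4D hypercube.
    for corner in range(16):
        offs = np.array([(corner >> k) & 1 for k in range(4)])       # 0/1 offsets per axis
        idx = lo + offs                                               # (H, W, 4) corner indices
        w = np.prod(np.where(offs == 1, frac, 1.0 - frac), axis=-1)  # (H, W) corner weights
        out += w[..., None] * lut[idx[..., 0], idx[..., 1], idx[..., 2], idx[..., 3]]
    return out
```

In practice, basis_luts would be learned parameters, while coeffs and context would be predicted per image by the parameter and context encoders described above.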
Related papers
- Simple Image Signal Processing using Global Context Guidance [56.41827271721955]
Deep learning-based ISPs aim to transform RAW images into DSLR-like RGB images using deep neural networks.
We propose a novel module that can be integrated into any neural ISP to capture the global context information from the full RAW images.
Our model achieves state-of-the-art results on different benchmarks using diverse and real smartphone images.
arXiv Detail & Related papers (2024-04-17T17:11:47Z)
- Enhancing RAW-to-sRGB with Decoupled Style Structure in Fourier Domain [27.1716081216131]
Current methods ignore the difference between cell phone RAW images and DSLR camera RGB images.
We present a novel Neural ISP framework, named FourierISP.
This approach breaks the image down into style and structure within the frequency domain, allowing for independent optimization.
arXiv Detail & Related papers (2024-01-04T09:18:31Z)
- Transform your Smartphone into a DSLR Camera: Learning the ISP in the Wild [159.71025525493354]
We propose a trainable Image Signal Processing framework that produces DSLR quality images given RAW images captured by a smartphone.
To address the color misalignments between training image pairs, we employ a color-conditional ISP network and optimize a novel parametric color mapping between each input RAW and reference DSLR image.
arXiv Detail & Related papers (2022-03-20T20:13:59Z)
- Saliency Enhancement using Superpixel Similarity [77.34726150561087]
Salient Object Detection (SOD) has several applications in image analysis.
Deep-learning-based SOD methods are among the most effective, but they may miss foreground parts with similar colors.
We introduce a post-processing method named Saliency Enhancement over Superpixel Similarity (SESS).
We demonstrate that SESS can consistently and considerably improve the results of three deep-learning-based SOD methods on five image datasets.
arXiv Detail & Related papers (2021-12-01T17:22:54Z)
- RGB-D Image Inpainting Using Generative Adversarial Network with a Late Fusion Approach [14.06830052027649]
Diminished reality is a technology that aims to remove objects from video images and fill in the missing regions with plausible pixels.
We propose an RGB-D image inpainting method using a generative adversarial network.
arXiv Detail & Related papers (2021-10-14T14:44:01Z)
- Colored Point Cloud to Image Alignment [15.828285556159026]
We introduce a differential optimization method that aligns a colored point cloud to a given color image via iterative geometric and color matching.
We find the transformation between the camera image and the point cloud colors by iterating between matching the relative location of the point cloud and matching colors.
arXiv Detail & Related papers (2021-10-07T08:12:56Z)
- Learning Image-adaptive 3D Lookup Tables for High Performance Photo Enhancement in Real-time [33.93249921871407]
In this paper, we learn image-adaptive 3-dimensional lookup tables (3D LUTs) to achieve fast and robust photo enhancement.
We learn 3D LUTs from annotated data using pairwise or unpaired learning.
We learn multiple basis 3D LUTs and a small convolutional neural network (CNN) simultaneously in an end-to-end manner.
arXiv Detail & Related papers (2020-09-30T06:34:57Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)