SepLUT: Separable Image-adaptive Lookup Tables for Real-time Image
Enhancement
- URL: http://arxiv.org/abs/2207.08351v1
- Date: Mon, 18 Jul 2022 02:27:19 GMT
- Title: SepLUT: Separable Image-adaptive Lookup Tables for Real-time Image
Enhancement
- Authors: Canqian Yang, Meiguang Jin, Yi Xu, Rui Zhang, Ying Chen and Huaida Liu
- Abstract summary: We present SepLUT (separable image-adaptive lookup table) to tackle the limitations of coupling the complete color transform into a single type of LUT.
Specifically, we separate a single color transform into a cascade of component-independent and component-correlated sub-transforms instantiated as 1D and 3D LUTs.
In this way, the capabilities of two sub-transforms can facilitate each other, where the 3D LUT complements the ability to mix up color components.
- Score: 21.963622337032344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image-adaptive lookup tables (LUTs) have achieved great success in real-time
image enhancement tasks due to their high efficiency for modeling color
transforms. However, they embed the complete transform, including the color
component-independent and the component-correlated parts, into only a single
type of LUTs, either 1D or 3D, in a coupled manner. This scheme raises a
dilemma of improving model expressiveness or efficiency due to two factors. On
the one hand, the 1D LUTs provide high computational efficiency but lack the
critical capability of color components interaction. On the other, the 3D LUTs
present enhanced component-correlated transform capability but suffer from
heavy memory footprint, high training difficulty, and limited cell utilization.
Inspired by the conventional divide-and-conquer practice in the image signal
processor, we present SepLUT (separable image-adaptive lookup table) to tackle
the above limitations. Specifically, we separate a single color transform into
a cascade of component-independent and component-correlated sub-transforms
instantiated as 1D and 3D LUTs, respectively. In this way, the capabilities of
two sub-transforms can facilitate each other, where the 3D LUT complements the
ability to mix up color components, and the 1D LUT redistributes the input
colors to increase the cell utilization of the 3D LUT and thus enable the use
of a more lightweight 3D LUT. Experiments demonstrate that the proposed method
outperforms the current state-of-the-art on photo retouching benchmark datasets
and achieves real-time processing on both GPUs and CPUs.
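The cascade described in the abstract can be sketched in a few lines: per-channel 1D LUTs first redistribute the input colors (component-independent), then a small 3D LUT mixes the channels (component-correlated). This is a minimal numpy illustration of the lookup mechanics only, using nearest-neighbor sampling for the 3D LUT instead of the trilinear interpolation a real implementation would use; all function and parameter names are illustrative, not the paper's API.

```python
import numpy as np

def apply_1d_luts(img, luts_1d):
    """Component-independent sub-transform: one 1D curve per channel.
    img: float array in [0, 1], shape (H, W, 3).
    luts_1d: shape (3, S1) -- S1 samples per channel."""
    out = np.empty_like(img)
    xs = np.linspace(0.0, 1.0, luts_1d.shape[1])
    for c in range(3):
        # Piecewise-linear interpolation of the learned curve.
        out[..., c] = np.interp(img[..., c], xs, luts_1d[c])
    return out

def apply_3d_lut(img, lut_3d):
    """Component-correlated sub-transform: joint RGB lookup.
    lut_3d: shape (S3, S3, S3, 3). Nearest-neighbor sampling is
    used here for brevity; real methods interpolate trilinearly."""
    s = lut_3d.shape[0]
    idx = np.clip(np.round(img * (s - 1)).astype(int), 0, s - 1)
    return lut_3d[idx[..., 0], idx[..., 1], idx[..., 2]]

def seplut_transform(img, luts_1d, lut_3d):
    """Cascade: 1D LUTs redistribute colors so the (lighter-weight)
    3D LUT sees better-utilized cells, then the 3D LUT mixes channels."""
    return apply_3d_lut(apply_1d_luts(img, luts_1d), lut_3d)
```

With identity curves and an identity 3D grid, the cascade leaves on-grid colors unchanged, which makes the interaction between the two stages easy to verify before plugging in learned tables.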
Related papers
- SpecGaussian with Latent Features: A High-quality Modeling of the View-dependent Appearance for 3D Gaussian Splatting [11.978842116007563]
Lantent-SpecGS is an approach that utilizes a universal latent neural descriptor within each 3D Gaussian.
Two parallel CNNs are designed to decode the splatting feature maps into diffuse color and specular color separately.
A mask that depends on the viewpoint is learned to merge these two colors, resulting in the final rendered image.
arXiv Detail & Related papers (2024-08-23T15:25:08Z) - WE-GS: An In-the-wild Efficient 3D Gaussian Representation for Unconstrained Photo Collections [8.261637198675151]
Novel View Synthesis (NVS) from unconstrained photo collections is challenging in computer graphics.
We propose an efficient point-based differentiable rendering framework for scene reconstruction from photo collections.
Our approach outperforms existing approaches on the rendering quality of novel view and appearance synthesis, with fast convergence and high rendering speed.
arXiv Detail & Related papers (2024-06-04T15:17:37Z) - SERF: Fine-Grained Interactive 3D Segmentation and Editing with Radiance Fields [92.14328581392633]
We introduce a novel fine-grained interactive 3D segmentation and editing algorithm with radiance fields, which we refer to as SERF.
Our method entails creating a neural mesh representation by integrating multi-view algorithms with pre-trained 2D models.
Building upon this representation, we introduce a novel surface rendering technique that preserves local information and is robust to deformation.
arXiv Detail & Related papers (2023-12-26T02:50:42Z) - Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, well supports various ways of 3D editing (e.g., stylization, interactive drawing, and content extraction) with no need of re-optimization.
arXiv Detail & Related papers (2023-12-11T18:59:31Z) - Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985]
Methods that use Neural Radiance Fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to efficiently avert this problem.
arXiv Detail & Related papers (2023-12-06T00:46:30Z) - Guide3D: Create 3D Avatars from Text and Image Guidance [55.71306021041785]
Guide3D is a text-and-image-guided generative model for 3D avatar generation based on diffusion models.
Our framework produces topologically and structurally correct geometry and high-resolution textures.
arXiv Detail & Related papers (2023-08-18T17:55:47Z) - Cross-Modal 3D Shape Generation and Manipulation [62.50628361920725]
We propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces.
We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images.
arXiv Detail & Related papers (2022-07-24T19:22:57Z) - AdaInt: Learning Adaptive Intervals for 3D Lookup Tables on Real-time
Image Enhancement [28.977992864519948]
We present AdaInt, a novel mechanism to achieve a more flexible sampling point allocation by adaptively learning the non-uniform sampling intervals in the 3D color space.
AdaInt could be implemented as a compact and efficient plug-and-play module for a 3D LUT-based method.
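The non-uniform sampling idea in the AdaInt summary amounts to a piecewise-linear lookup whose sampling coordinates are derived from learned interval predictions rather than a fixed uniform grid. Below is a hedged 1D sketch of that mechanism; the names (`deltas`, `values`) and the normalization scheme are assumptions for illustration, not the paper's actual module.

```python
import numpy as np

def adaptive_lookup_1d(x, deltas, values):
    """Piecewise-linear lookup on non-uniformly spaced sampling points.
    deltas: positive interval predictions, length S-1; normalized so the
            cumulative coordinates span [0, 1] (mimicking learned intervals).
    values: LUT outputs at the S resulting sampling points."""
    # Turn relative intervals into absolute sampling coordinates in [0, 1].
    coords = np.concatenate([[0.0], np.cumsum(deltas / deltas.sum())])
    return np.interp(x, coords, values)
```

Placing more (smaller) intervals in color ranges that need finer control is what lets a compact LUT spend its resolution where the transform actually varies.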
arXiv Detail & Related papers (2022-04-29T10:16:57Z) - Learning Image-adaptive 3D Lookup Tables for High Performance Photo
Enhancement in Real-time [33.93249921871407]
In this paper, we learn image-adaptive 3-dimensional lookup tables (3D LUTs) to achieve fast and robust photo enhancement.
We learn 3D LUTs from annotated data using pairwise or unpaired learning.
We learn multiple basis 3D LUTs and a small convolutional neural network (CNN) simultaneously in an end-to-end manner.
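The basis-LUT scheme summarized above can be reduced to a single weighted sum: a small CNN (not shown here) predicts per-image weights, and the image-adaptive 3D LUT is their linear combination over the learned basis tables. A minimal numpy sketch of the fusion step, with illustrative shapes and names:

```python
import numpy as np

def fuse_basis_luts(weights, basis_luts):
    """Image-adaptive 3D LUT as a weighted sum of learned basis LUTs.
    weights: shape (N,), predicted per-image by a lightweight CNN.
    basis_luts: shape (N, S, S, S, 3), shared across all images."""
    # Contract the basis axis: sum_n weights[n] * basis_luts[n].
    return np.tensordot(weights, basis_luts, axes=1)
```

Because fusion happens once per image and the result is a plain lookup table, the per-pixel cost at inference stays that of a single 3D LUT.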
arXiv Detail & Related papers (2020-09-30T06:34:57Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences of its use.