VIDIT: Virtual Image Dataset for Illumination Transfer
- URL: http://arxiv.org/abs/2005.05460v2
- Date: Wed, 13 May 2020 10:17:15 GMT
- Title: VIDIT: Virtual Image Dataset for Illumination Transfer
- Authors: Majed El Helou, Ruofan Zhou, Johan Barthas, Sabine Süsstrunk
- Abstract summary: We present a novel dataset, the Virtual Image Dataset for Illumination Transfer (VIDIT).
VIDIT contains 300 virtual scenes used for training, where every scene is captured 40 times in total: from 8 equally-spaced azimuthal angles, each lit with 5 different illuminants.
- Score: 18.001635516017902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep image relighting is gaining more interest lately, as it allows photo
enhancement through illumination-specific retouching without human effort.
Aside from aesthetic enhancement and photo montage, image relighting is
valuable for domain adaptation, whether to augment datasets for training or to
normalize input test data. Accurate relighting is, however, very challenging
for various reasons, such as the difficulty in removing and recasting shadows
and the modeling of different surfaces. We present a novel dataset, the Virtual
Image Dataset for Illumination Transfer (VIDIT), in an effort to create a
reference evaluation benchmark and to push forward the development of
illumination manipulation methods. Virtual datasets are not only an important
step towards achieving real-image performance but have also proven capable of
improving training even when real datasets can be acquired and are available.
VIDIT contains 300 virtual scenes used for training, where every
scene is captured 40 times in total: from 8 equally-spaced azimuthal angles,
each lit with 5 different illuminants.
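To make the capture grid concrete, the sketch below enumerates the 40 captures of one scene (8 azimuthal angles, 45° apart, times 5 illuminants); across the 300 training scenes this amounts to 12,000 images. The file-naming scheme is hypothetical, invented here for illustration; only the counts come from the abstract.

```python
from itertools import product

N_SCENES = 300     # training scenes, per the abstract
N_ANGLES = 8       # equally-spaced azimuthal angles -> 360 / 8 = 45 degrees apart
N_ILLUMINANTS = 5  # distinct illuminants per angle

def capture_names(scene_id: int):
    """Yield one name per capture of a scene: 8 angles x 5 illuminants = 40.

    The naming scheme is hypothetical; VIDIT's real file layout may differ.
    """
    for angle_idx, illum_idx in product(range(N_ANGLES), range(N_ILLUMINANTS)):
        azimuth_deg = angle_idx * (360 // N_ANGLES)  # 0, 45, ..., 315
        yield f"scene{scene_id:03d}_az{azimuth_deg:03d}_illum{illum_idx}.png"

captures = list(capture_names(scene_id=0))
assert len(captures) == 40  # matches the "40 times in total" in the abstract
print(captures[:3])
```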
Related papers
- Lite2Relight: 3D-aware Single Image Portrait Relighting [87.62069509622226]
Lite2Relight is a novel technique that can predict 3D-consistent head poses of portraits.
By utilizing a pre-trained geometry-aware encoder and a feature alignment module, we map input images into a relightable 3D space.
This includes producing 3D-consistent results of the full head, including hair, eyes, and expressions.
arXiv Detail & Related papers (2024-07-15T07:16:11Z)
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- Holo-Relighting: Controllable Volumetric Portrait Relighting from a Single Image [41.6305755298805]
Holo-Relighting is a volumetric relighting method capable of synthesizing novel viewpoints and novel lighting from a single image.
We design a relighting module conditioned on a given lighting to process these features, and predict a relit 3D representation in the form of a tri-plane.
Besides viewpoint and lighting control, Holo-Relighting also takes the head pose as a condition to enable head-pose-dependent lighting effects.
arXiv Detail & Related papers (2024-03-14T17:58:56Z)
- LightSim: Neural Lighting Simulation for Urban Scenes [42.84064522536041]
Different outdoor illumination conditions drastically alter the appearance of urban scenes, and they can harm the performance of image-based robot perception systems.
Camera simulation provides a cost-effective solution to create a large dataset of images captured under different lighting conditions.
We propose LightSim, a neural lighting camera simulation system that enables diverse, realistic, and controllable data generation.
arXiv Detail & Related papers (2023-12-11T18:59:13Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
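As background on the (virtual) light stage referenced above: under the standard linear light-transport assumption, an image under any environment lighting is a weighted sum of one-light-at-a-time (OLAT) basis images. The sketch below shows only this classic identity, not the paper's method; all arrays and weights are illustrative.

```python
import numpy as np

# Linear light-transport identity behind (virtual) light-stage relighting:
# an image under a new environment is a weighted sum of OLAT basis images.
n_lights, height, width = 16, 4, 4
rng = np.random.default_rng(0)

olat = rng.random((n_lights, height, width, 3))  # one basis image per light
weights = rng.random(n_lights)                   # target-environment intensities

# Relit image: sum_i w_i * OLAT_i, computed as a tensor contraction.
relit = np.tensordot(weights, olat, axes=1)      # shape (height, width, 3)
assert relit.shape == (height, width, 3)
```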
- Neural Light Field Estimation for Street Scenes with Differentiable Virtual Object Insertion [129.52943959497665]
Existing works on outdoor lighting estimation typically simplify the scene lighting into an environment map.
We propose a neural approach that estimates the 5D HDR light field from a single image.
We show the benefits of our AR object insertion in an autonomous driving application.
arXiv Detail & Related papers (2022-08-19T17:59:16Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
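For context on the classical precomputed radiance transfer (PRT) that the above paper's neural transfer function generalizes: with distant lighting and per-point transfer both projected onto a spherical-harmonics basis, relit radiance reduces to a dot product of coefficient vectors. A minimal sketch of that classical identity, with made-up coefficients, follows; it is not the paper's learned function.

```python
import numpy as np

# Classical PRT identity: with distant lighting projected onto a basis
# (e.g. 3rd-order spherical harmonics, 9 coefficients per color channel),
# outgoing radiance at a surface point is a dot product between the
# point's precomputed transfer vector and the lighting coefficients.
n_coeffs = 9                          # 3rd-order SH
rng = np.random.default_rng(1)

lighting = rng.random((n_coeffs, 3))  # environment light, RGB SH coefficients
transfer = rng.random(n_coeffs)       # per-point transfer (visibility + BRDF)

radiance = transfer @ lighting        # shape (3,): relit RGB at the point
print(radiance)
```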
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- PX-NET: Simple and Efficient Pixel-Wise Training of Photometric Stereo Networks [26.958763133729846]
Retrieving accurate 3D reconstructions of objects from the way they reflect light is a very challenging task in computer vision.
We propose a novel pixel-wise training procedure for normal prediction by replacing the training data (observation maps) of globally rendered images with independent per-pixel generated data.
Our network, PX-NET, achieves state-of-the-art performance among pixel-wise methods on synthetic datasets.
arXiv Detail & Related papers (2020-08-11T18:03:13Z)
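To illustrate the flavor of PX-NET's per-pixel training data (as opposed to globally rendered images), here is a toy generator pairing a random surface normal with the intensities it produces under a set of light directions. It assumes simple Lambertian shading, I = max(0, n·l), with no cast shadows or specularities, so it is a simplification for illustration, not a reproduction of the paper's observation-map generation.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unit_vectors(n: int) -> np.ndarray:
    """Sample n random unit directions on the upper hemisphere (z >= 0)."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[:, 2] = np.abs(v[:, 2])  # reflect into the upper hemisphere (norm unchanged)
    return v

def make_pixel_sample(n_lights: int = 32):
    """One per-pixel training pair: (intensities under each light, true normal).

    Simplified Lambertian shading, I = max(0, n . l); real observation maps
    also capture cast shadows, global illumination, and specular effects.
    """
    normal = random_unit_vectors(1)[0]
    lights = random_unit_vectors(n_lights)
    intensities = np.maximum(0.0, lights @ normal)
    return intensities, normal

obs, gt_normal = make_pixel_sample()
print(obs.shape, gt_normal)  # (32,) intensities and a unit normal as the target
```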
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of this information and is not responsible for any consequences of its use.