Joint Learning of Portrait Intrinsic Decomposition and Relighting
- URL: http://arxiv.org/abs/2106.15305v1
- Date: Tue, 22 Jun 2021 19:40:22 GMT
- Title: Joint Learning of Portrait Intrinsic Decomposition and Relighting
- Authors: Mona Zehni, Shaona Ghosh, Krishna Sridhar, Sethu Raman
- Abstract summary: Inverse rendering is the problem of decomposing an image into its intrinsic components, i.e. albedo, normal and lighting.
Here, we propose a new self-supervised training paradigm that reduces the need for full supervision on the decomposition task.
We showcase the effectiveness of our training paradigm on both intrinsic decomposition and relighting and demonstrate how the model struggles in both tasks without the self-supervised loss terms.
- Score: 5.601217969637838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse rendering is the problem of decomposing an image into its intrinsic
components, i.e. albedo, normal and lighting. To solve this ill-posed problem
from a single image, state-of-the-art methods in shape from shading mostly resort
to supervised training on all the components on either synthetic or real
datasets. Here, we propose a new self-supervised training paradigm that 1)
reduces the need for full supervision on the decomposition task and 2) takes
into account the relighting task. We introduce new self-supervised loss terms
that leverage the consistencies between multi-lit images (images of the same
scene under different illuminations). Our approach is applicable to multi-lit
datasets. We apply our training approach in two settings: 1) training on a
mixture of synthetic and real data, and 2) training on real datasets with
limited supervision.
We showcase the effectiveness of our training paradigm on both intrinsic
decomposition and relighting and demonstrate how the model struggles in both
tasks without the self-supervised loss terms in limited supervision settings.
We provide results of comprehensive experiments on SfSNet, CelebA and Photoface
datasets and verify the performance of our approach on images in the wild.
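To make the multi-lit consistency idea concrete, below is a minimal sketch (not the authors' released code) of how such self-supervised loss terms could look, assuming an SfSNet-style Lambertian image formation model with second-order spherical-harmonics lighting. All function names, tensor shapes, and the equal loss weighting are illustrative assumptions.

```python
# Sketch of multi-lit self-supervised losses for intrinsic decomposition.
# Assumption: images of the same scene under different lighting share albedo
# and normals, and swapping their lightings should reconstruct each other.
import torch
import torch.nn.functional as F

def sh_shading(normals, sh_coeffs):
    """Second-order spherical-harmonics shading from unit normals.

    normals:   (B, 3, H, W) unit-length surface normals
    sh_coeffs: (B, 9) per-image lighting coefficients
    returns:   (B, 1, H, W) grayscale shading
    """
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    ones = torch.ones_like(nx)
    # Unnormalized 9-term SH basis (constants folded into sh_coeffs).
    basis = torch.stack([
        ones, nx, ny, nz,
        nx * ny, nx * nz, ny * nz,
        nx**2 - ny**2, 3 * nz**2 - 1,
    ], dim=1)                                   # (B, 9, H, W)
    shading = (basis * sh_coeffs[..., None, None]).sum(dim=1, keepdim=True)
    return shading.clamp(min=0)

def multi_lit_losses(albedo_a, normals_a, light_a, img_a,
                     albedo_b, normals_b, light_b, img_b):
    """Self-supervised losses between two images of the same scene."""
    # Intrinsics should agree across illuminations of the same scene.
    consistency = (F.l1_loss(albedo_a, albedo_b)
                   + F.l1_loss(normals_a, normals_b))
    # Re-rendering with the *other* image's lighting should reproduce it.
    relit_ab = albedo_a * sh_shading(normals_a, light_b)  # relight a -> b
    relit_ba = albedo_b * sh_shading(normals_b, light_a)  # relight b -> a
    cross_render = F.l1_loss(relit_ab, img_b) + F.l1_loss(relit_ba, img_a)
    # Each decomposition should also reconstruct its own input.
    recon = (F.l1_loss(albedo_a * sh_shading(normals_a, light_a), img_a)
             + F.l1_loss(albedo_b * sh_shading(normals_b, light_b), img_b))
    return consistency + cross_render + recon
```

In practice the three terms would likely carry tuned weights rather than the equal weighting used here.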
Related papers
- Relighting from a Single Image: Datasets and Deep Intrinsic-based Architecture [0.7499722271664147]
Single image scene relighting aims to generate a realistic new version of an input image so that it appears to be illuminated by a new target light condition.
We propose two new datasets: a synthetic dataset with the ground truth of intrinsic components and a real dataset collected under laboratory conditions.
Our method outperforms state-of-the-art methods on both existing datasets and our newly developed datasets.
arXiv Detail & Related papers (2024-09-27T14:15:02Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers [9.364509804053275]
Real-world imaging challenges often lack ground truth data, rendering traditional supervised approaches ineffective.
Our method trains a meta-model on a diverse set of imaging tasks that allows the model to be efficiently fine-tuned for specific tasks.
In simple settings, this approach recovers the Bayes optimal estimator, illustrating the soundness of our approach.
arXiv Detail & Related papers (2023-11-30T17:02:27Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in an image as if they appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performance on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)