PhotoApp: Photorealistic Appearance Editing of Head Portraits
- URL: http://arxiv.org/abs/2103.07658v1
- Date: Sat, 13 Mar 2021 08:59:49 GMT
- Title: PhotoApp: Photorealistic Appearance Editing of Head Portraits
- Authors: Mallikarjun B R, Ayush Tewari, Abdallah Dib, Tim Weyrich, Bernd
Bickel, Hans-Peter Seidel, Hanspeter Pfister, Wojciech Matusik, Louis
Chevallier, Mohamed Elgharib, Christian Theobalt
- Abstract summary: We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination in a portrait image.
Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages.
We instead design a supervised problem that learns transformations in the latent space of StyleGAN.
This combines the best of supervised learning and generative adversarial modeling.
- Score: 97.23638022484153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealistic editing of portraits is a challenging task as humans are very
sensitive to inconsistencies in faces. We present an approach for high-quality
intuitive editing of the camera viewpoint and scene illumination in a portrait
image. This requires our method to capture and control the full reflectance
field of the person in the image. Most editing approaches rely on supervised
learning using training data captured with setups such as light and camera
stages. Such datasets are expensive to acquire, not readily available and do
not capture all the rich variations of in-the-wild portrait images. In
addition, most supervised approaches only focus on relighting, and do not allow
camera viewpoint editing. Thus, they only capture and control a subset of the
reflectance field. Recently, portrait editing has been demonstrated by
operating in the generative model space of StyleGAN. While such approaches do
not require direct supervision, there is a significant loss of quality when
compared to the supervised approaches. In this paper, we present a method which
learns from limited supervised training data. The training images only include
people in a fixed neutral expression with eyes closed, without much hair or
background variations. Each person is captured under 150 one-light-at-a-time
conditions and under 8 camera poses. Instead of training directly in the image
space, we design a supervised problem which learns transformations in the
latent space of StyleGAN. This combines the best of supervised learning and
generative adversarial modeling. We show that the StyleGAN prior allows for
generalisation to different expressions, hairstyles and backgrounds. This
produces high-quality photorealistic results for in-the-wild images and
significantly outperforms existing methods. Our approach can edit the
illumination and pose simultaneously, and runs at interactive rates.
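The abstract's central technical point is that supervision happens in StyleGAN's latent space rather than in image space: light-stage pairs are used to learn a mapping from a portrait's latent code plus a target illumination and camera pose to an edited latent code, which the frozen generator then decodes. Below is a minimal PyTorch sketch of that kind of latent-space formulation; the module and tensor names (LatentEditor, w_plus, target_light, target_pose), the dimensionalities, and the plain MSE objective are illustrative assumptions, not the authors' published architecture.
```python
# Sketch only: learn an edit in StyleGAN's W+ space conditioned on target
# lighting and pose, keeping the pretrained generator frozen. Names and
# dimensions below are assumptions for illustration, not the paper's spec.
import torch
import torch.nn as nn

class LatentEditor(nn.Module):
    """Maps a StyleGAN W+ latent plus target illumination/pose codes to an
    edited latent, to be decoded by a frozen StyleGAN generator."""
    def __init__(self, w_dim=512, n_layers=18, light_dim=27, pose_dim=6, hidden=1024):
        super().__init__()
        in_dim = w_dim * n_layers + light_dim + pose_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, w_dim * n_layers),
        )
        self.n_layers, self.w_dim = n_layers, w_dim

    def forward(self, w_plus, target_light, target_pose):
        # w_plus: (B, n_layers, w_dim) latent of the input portrait,
        # e.g. obtained from a GAN inversion of the photograph.
        x = torch.cat([w_plus.flatten(1), target_light, target_pose], dim=1)
        delta = self.mlp(x).view(-1, self.n_layers, self.w_dim)
        return w_plus + delta  # edited latent; decode with the frozen generator

def training_step(editor, w_src, w_tgt, light, pose, opt):
    # Supervised step on a light-stage pair: w_src/w_tgt are latents of the
    # same subject inverted under source and target capture conditions.
    opt.zero_grad()
    loss = nn.functional.mse_loss(editor(w_src, light, pose), w_tgt)
    loss.backward()
    opt.step()
    return loss.item()
```
Because only the latent mapping is trained while the StyleGAN generator stays fixed, the generative prior can carry the edits over to expressions, hairstyles and backgrounds that never appear in the light-stage data, which is the generalisation behaviour the abstract describes.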
Related papers
- NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior [22.579857008706206]
Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging.
Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes.
We tackle this challenging problem by incorporating undistorted monocular depth priors.
These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames.
arXiv Detail & Related papers (2022-12-14T18:16:41Z)
- SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections [49.3480550339732]
Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics.
We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination.
Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR.
arXiv Detail & Related papers (2022-05-31T13:16:48Z)
- SunStage: Portrait Reconstruction and Relighting using the Sun as a Light Stage [75.0473791925894]
A light stage uses a series of calibrated cameras and lights to capture a subject's facial appearance under varying illumination and viewpoint.
Unfortunately, light stages are often inaccessible: they are expensive and require significant technical expertise for construction and operation.
We present SunStage: a lightweight alternative to a light stage that captures comparable data using only a smartphone camera and the sun.
arXiv Detail & Related papers (2022-04-07T17:59:51Z)
- Neural Radiance Fields for Outdoor Scene Relighting [70.97747511934705]
We present NeRF-OSR, the first approach for outdoor scene relighting based on neural radiance fields.
In contrast to the prior art, our technique allows simultaneous editing of both scene illumination and camera viewpoint.
It also includes a dedicated network for shadow reproduction, which is crucial for high-quality outdoor scene relighting.
arXiv Detail & Related papers (2021-12-09T18:59:56Z)
- Self-supervised Outdoor Scene Relighting [92.20785788740407]
We propose a self-supervised approach for relighting.
Our approach is trained only on corpora of images collected from the internet, without any user supervision.
Results show that our technique produces photo-realistic and physically plausible relighting that generalizes to unseen scenes.
arXiv Detail & Related papers (2021-07-07T09:46:19Z)
- PIE: Portrait Image Embedding for Semantic Control [82.69061225574774]
We present the first approach for embedding real portrait images in the latent space of StyleGAN.
We use StyleRig, a pretrained neural network that maps the control space of a 3D morphable face model to the latent space of the GAN.
An identity preservation energy term allows spatially coherent edits while maintaining facial integrity.
arXiv Detail & Related papers (2020-09-20T17:53:51Z)