Deep Graphics Encoder for Real-Time Video Makeup Synthesis from Example
- URL: http://arxiv.org/abs/2105.06407v1
- Date: Wed, 12 May 2021 08:28:32 GMT
- Title: Deep Graphics Encoder for Real-Time Video Makeup Synthesis from Example
- Authors: Robin Kips, Ruowei Jiang, Sileye Ba, Edmund Phung, Parham Aarabi,
Pietro Gori, Matthieu Perrot, Isabelle Bloch
- Abstract summary: We introduce an inverse computer graphics method for automatic makeup synthesis from a reference image.
This method can be used by artists to automatically create realistic virtual cosmetics image samples, or by consumers to virtually try on a makeup extracted from their favorite reference image.
- Score: 11.377850833795494
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While makeup virtual-try-on is now widespread, parametrizing a computer
graphics rendering engine for synthesizing images of a given cosmetics product
remains a challenging task. In this paper, we introduce an inverse computer
graphics method for automatic makeup synthesis from a reference image, by
learning a model that maps an example portrait image with makeup to the space
of rendering parameters. This method can be used by artists to automatically
create realistic virtual cosmetics image samples, or by consumers to virtually
try on a makeup extracted from their favorite reference image.
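To make the mapping concrete, below is a minimal PyTorch sketch of such an inverse graphics encoder: a small CNN regresses a portrait image to a vector of renderer parameters and is trained on synthetic pairs for which the ground-truth parameters are known. The architecture, the seven-parameter layout, and the training signal are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of an inverse graphics encoder: a CNN that maps a portrait
# image with makeup to a vector of renderer parameters. The class name and
# parameter layout below are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class InverseGraphicsEncoder(nn.Module):
    def __init__(self, n_params: int = 7):  # e.g. RGB color, opacity, gloss...
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_params)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # Parameters squashed to [0, 1], assuming a normalized renderer space
        return torch.sigmoid(self.head(self.features(img).flatten(1)))

# Supervised training on synthetic pairs (image rendered with known params):
encoder = InverseGraphicsEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
img, true_params = torch.rand(8, 3, 128, 128), torch.rand(8, 7)
loss = nn.functional.mse_loss(encoder(img), true_params)
loss.backward()
opt.step()
```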
Related papers
- Computer vision training dataset generation for robotic environments using Gaussian splatting [0.0]
This paper introduces a novel pipeline for generating large-scale, highly realistic, and automatically labeled datasets for computer vision tasks in robotic environments.
We leverage 3D Gaussian Splatting (3DGS) to create photorealistic representations of the operational environment and objects.
A novel, two-pass rendering technique combines the realism of splats with a shadow map generated from proxy meshes.
Pixel-perfect segmentation masks are generated automatically and formatted for direct use with object detection models like YOLO.
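As an illustration of the automatic labeling step, here is a hedged NumPy sketch that turns a pixel-perfect binary mask into one line of a YOLO-format label file. The helper name `mask_to_yolo` and the synthetic mask are assumptions for illustration, not code from the paper.

```python
# Deriving a YOLO-format bounding-box label from a binary segmentation mask.
import numpy as np

def mask_to_yolo(mask: np.ndarray, class_id: int) -> str:
    """Return 'class x_center y_center width height', normalized to [0, 1]."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    xc, yc = (x0 + x1 + 1) / 2 / w, (y0 + y1 + 1) / 2 / h
    bw, bh = (x1 - x0 + 1) / w, (y1 - y0 + 1) / h
    return f"{class_id} {xc:.6f} {yc:.6f} {bw:.6f} {bh:.6f}"

mask = np.zeros((480, 640), dtype=bool)
mask[100:200, 300:400] = True          # a rendered object's silhouette
print(mask_to_yolo(mask, class_id=0))  # one line of a YOLO .txt label file
```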
arXiv Detail & Related papers (2025-12-15T15:00:17Z)
- DiffPhysCam: Differentiable Physics-Based Camera Simulation for Inverse Rendering and Embodied AI [0.49157446832511503]
DiffPhysCam is a differentiable camera simulator designed to support robotics and embodied AI applications.
Differentiable rendering allows inverse reconstruction of real-world scenes as digital twins.
We show that DiffPhysCam enhances robotic perception performance in synthetic image tasks.
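The sketch below illustrates why differentiability matters for inverse reconstruction: gradients flow from a 2D reprojection error back to 3D geometry. It uses a bare pinhole model in PyTorch as a stand-in, not DiffPhysCam's actual physics-based simulator.

```python
# Gradient of a pixel-space error with respect to a 3D point position.
import torch

def project(points: torch.Tensor, f: float = 500.0, cx: float = 320.0,
            cy: float = 240.0) -> torch.Tensor:
    """Pinhole projection of (N, 3) camera-space points to (N, 2) pixels."""
    z = points[:, 2:3]
    return f * points[:, :2] / z + torch.tensor([cx, cy])

points = torch.tensor([[0.1, 0.2, 2.0]], requires_grad=True)
target_px = torch.tensor([[400.0, 300.0]])
loss = ((project(points) - target_px) ** 2).sum()
loss.backward()                                  # d(pixel error)/d(3D position)
points = (points - 1e-5 * points.grad).detach()  # one inverse-rendering step
```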
arXiv Detail & Related papers (2025-08-12T10:38:20Z)
- AvatarMakeup: Realistic Makeup Transfer for 3D Animatable Head Avatars [89.31582684550723]
Coherent Duplication optimizes a global UV map by recording the averaged facial attributes among the generated makeup images.
Experiments demonstrate that AvatarMakeup achieves state-of-the-art makeup transfer quality and consistency throughout animation.
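A minimal NumPy sketch of the "average into a global UV map" idea: per-image attribute maps, already warped into UV space with visibility masks, are averaged so every view shares one coherent texture. The helper `fuse_uv_maps` and all shapes are assumptions, and the UV warping itself is omitted.

```python
# Fusing per-image makeup attribute maps into one view-consistent UV map.
import numpy as np

def fuse_uv_maps(uv_maps: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """uv_maps: (N, H, W, 3) per-image attributes; masks: (N, H, W) visibility."""
    weights = masks[..., None].astype(np.float32)
    total = (uv_maps * weights).sum(axis=0)
    count = np.clip(weights.sum(axis=0), 1e-6, None)  # avoid divide-by-zero
    return total / count  # (H, W, 3) global UV map

uv_maps = np.random.rand(4, 256, 256, 3)   # e.g. diffuse makeup color
masks = np.random.rand(4, 256, 256) > 0.3  # texels visible in each image
global_uv = fuse_uv_maps(uv_maps, masks)
```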
arXiv Detail & Related papers (2025-07-03T08:26:57Z)
- Alfie: Democratising RGBA Image Generation With No $$$ [33.334956022229846]
We propose a fully-automated approach for obtaining RGBA illustrations by modifying the inference-time behavior of a pre-trained Diffusion Transformer model.
We force the generation of entire subjects without sharp croppings, whose background is easily removed for seamless integration into design projects or artistic scenes.
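The point of RGBA output is downstream compositing; below is a sketch of the standard Porter-Duff "over" operator that places such a generated subject onto an arbitrary background. The formula is classic compositing, not something specific to this paper, and the helper name is made up.

```python
# Standard "over" compositing of an RGBA foreground onto an RGB background.
import numpy as np

def composite_over(fg_rgba: np.ndarray, bg_rgb: np.ndarray) -> np.ndarray:
    """fg_rgba: (H, W, 4) in [0, 1]; bg_rgb: (H, W, 3) in [0, 1]."""
    alpha = fg_rgba[..., 3:4]
    return fg_rgba[..., :3] * alpha + bg_rgb * (1.0 - alpha)

fg = np.random.rand(64, 64, 4)   # generated illustration with alpha channel
bg = np.random.rand(64, 64, 3)   # design project / artistic scene
out = composite_over(fg, bg)     # seamless integration, no sharp cropping
```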
arXiv Detail & Related papers (2024-08-27T07:13:44Z)
- Unsupervised Traffic Scene Generation with Synthetic 3D Scene Graphs [83.9783063609389]
We propose a method based on domain-invariant scene representation to directly synthesize traffic scene imagery without rendering.
Specifically, we rely on synthetic scene graphs as our internal representation and introduce an unsupervised neural network architecture for realistic traffic scene synthesis.
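For readers unfamiliar with the representation, here is a toy Python sketch of a synthetic scene graph: typed nodes with poses and pairwise relations that a synthesis network could consume. The fields and relation names are assumptions for illustration, not the paper's schema.

```python
# A toy scene-graph data structure: nodes (objects) plus relation edges.
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    category: str       # e.g. "car", "pedestrian", "traffic_light"
    position: tuple     # (x, y) in road-plane coordinates
    heading: float = 0.0  # orientation in radians

@dataclass
class SceneGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)  # (i, j, relation) triples

g = SceneGraph()
g.nodes += [SceneNode("car", (0.0, 0.0)), SceneNode("pedestrian", (3.0, 1.5))]
g.edges.append((0, 1, "in_front_of"))  # relation fed to the synthesis network
```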
arXiv Detail & Related papers (2023-03-15T09:26:29Z)
- BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis [42.93055827628597]
We present a method for reconstructing high-quality meshes of large real-world scenes suitable for photorealistic novel view synthesis.
We first optimize a hybrid neural volume-surface scene representation designed to have well-behaved level sets that correspond to surfaces in the scene.
We then bake this representation into a high-quality triangle mesh, which we equip with a simple and fast view-dependent appearance model based on spherical Gaussians.
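As a sketch of the view-dependent appearance model, the spherical-Gaussian lobe form exp(lambda * (mu . d - 1)) peaks along its axis mu and is cheap to evaluate per view direction d. The function name and parameter shapes below are assumptions; the lobe formula itself is the standard spherical Gaussian.

```python
# Evaluating a diffuse color plus spherical-Gaussian lobes for one view.
import numpy as np

def sg_color(view_dir, diffuse, lobe_axes, lobe_sharpness, lobe_colors):
    """view_dir: (3,); lobe_axes: (K, 3); lobe_sharpness: (K,); lobe_colors: (K, 3)."""
    d = view_dir / np.linalg.norm(view_dir)
    # Lobe response peaks when the view direction aligns with the lobe axis
    response = np.exp(lobe_sharpness * (lobe_axes @ d - 1.0))  # (K,)
    return diffuse + response @ lobe_colors                    # (3,) RGB

color = sg_color(np.array([0.0, 0.0, 1.0]), diffuse=np.full(3, 0.4),
                 lobe_axes=np.array([[0.0, 0.0, 1.0]]),
                 lobe_sharpness=np.array([20.0]),
                 lobe_colors=np.array([[0.5, 0.5, 0.5]]))
```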
arXiv Detail & Related papers (2023-02-28T18:58:03Z)
- Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers [13.894134334543363]
We propose a novel framework based on deep learning to build a real-time inverse graphics encoder.
Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer.
Our framework enables novel applications where consumers can virtually try on a novel unknown product from an inspirational reference image.
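The imitator idea lends itself to a short sketch: fit a neural network to mimic a black-box renderer so that gradients can flow through the imitation. Below, `black_box_render` is a fake tint operation standing in for the real graphics engine, and the architecture is a deliberately tiny assumption.

```python
# Training an "imitator" network to mimic a non-differentiable renderer.
import torch
import torch.nn as nn

def black_box_render(img: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
    """Placeholder for the non-differentiable engine (here: a fake tint)."""
    with torch.no_grad():
        return img * 0.7 + params.view(-1, 3, 1, 1) * 0.3

imitator = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
opt = torch.optim.Adam(imitator.parameters(), lr=1e-4)

img, params = torch.rand(4, 3, 64, 64), torch.rand(4, 3)  # random render params
target = black_box_render(img, params)                    # ground-truth render
cond = params.view(-1, 3, 1, 1).expand(-1, -1, 64, 64)    # broadcast params
pred = imitator(torch.cat([img, cond], dim=1))
loss = nn.functional.l1_loss(pred, target)                # imitation loss
loss.backward()
opt.step()
```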
arXiv Detail & Related papers (2022-05-12T18:44:00Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model the complicated real-world image and is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
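"Inverting" a renderer for reconstruction can be sketched in a few lines: freeze a trained renderer and optimize its input parameters by gradient descent until the output matches a target photo. The renderer below is a trivial stand-in, not GAR, and the parameter layout is an assumption.

```python
# Inverting a fixed renderer: optimize inputs to match a target image.
import torch
import torch.nn as nn

renderer = nn.Sequential(nn.Linear(10, 3 * 32 * 32), nn.Sigmoid())  # stand-in
for p in renderer.parameters():
    p.requires_grad_(False)  # renderer stays fixed; only inputs are optimized

target = torch.rand(1, 3 * 32 * 32)              # the photo to reconstruct
params = torch.zeros(1, 10, requires_grad=True)  # shape/pose/lighting codes
opt = torch.optim.Adam([params], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(renderer(params), target)  # photometric loss
    loss.backward()
    opt.step()
```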
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
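A minimal sketch of such a joint objective: decompose an image into intrinsic layers, re-render by recombining them, and supervise both the reconstruction on real images and the decomposition on synthetic ones. The single-layer network, Lambertian recomposition (albedo x shading), and loss weights are illustrative assumptions.

```python
# Joint decomposition + re-rendering loss with optional synthetic supervision.
import torch
import torch.nn as nn

decomposer = nn.Conv2d(3, 4, 3, padding=1)  # -> 3 albedo + 1 shading channels

def intrinsic_losses(img, gt_albedo=None, gt_shading=None):
    out = torch.sigmoid(decomposer(img))
    albedo, shading = out[:, :3], out[:, 3:4]
    recon = albedo * shading                  # Lambertian recomposition
    loss = nn.functional.l1_loss(recon, img)  # self-supervised on real images
    if gt_albedo is not None:                 # supervised on synthetic renders
        loss = loss + nn.functional.l1_loss(albedo, gt_albedo)
        loss = loss + nn.functional.l1_loss(shading, gt_shading)
    return loss

real = torch.rand(2, 3, 64, 64)
loss = intrinsic_losses(real)  # real branch: reconstruction only
loss.backward()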
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- State of the Art on Neural Rendering [141.22760314536438]
We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs.
This report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence.
arXiv Detail & Related papers (2020-04-08T04:36:31Z)
- Learning Inverse Rendering of Faces from Real-world Videos [52.313931830408386]
Existing methods decompose a face image into three components (albedo, normal, and illumination) by supervised training on synthetic data.
We propose a weakly supervised training approach to train our model on real face videos, based on the assumption of consistency of albedo and normal.
Our network is trained on both real and synthetic data, benefiting from both.
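The consistency assumption can be written as a loss in a few lines: albedo and normal maps predicted from different frames of the same video should agree. The sketch below uses random tensors as stand-ins for network predictions and assumes frames are already aligned to a common pose; it is not the paper's exact formulation.

```python
# Penalizing per-frame disagreement in predicted albedo and normal maps.
import torch

def consistency_loss(albedos: torch.Tensor, normals: torch.Tensor) -> torch.Tensor:
    """albedos/normals: (T, C, H, W) predictions for T frames of one video."""
    a_mean = albedos.mean(dim=0, keepdim=True)
    n_mean = normals.mean(dim=0, keepdim=True)
    return ((albedos - a_mean) ** 2).mean() + ((normals - n_mean) ** 2).mean()

albedos = torch.rand(5, 3, 64, 64)  # per-frame albedo predictions (stand-ins)
normals = torch.rand(5, 3, 64, 64)  # per-frame normal predictions (stand-ins)
loss = consistency_loss(albedos, normals)
```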
arXiv Detail & Related papers (2020-03-26T17:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.