Adversarial Texture Optimization from RGB-D Scans
- URL: http://arxiv.org/abs/2003.08400v1
- Date: Wed, 18 Mar 2020 18:00:05 GMT
- Title: Adversarial Texture Optimization from RGB-D Scans
- Authors: Jingwei Huang, Justus Thies, Angela Dai, Abhijit Kundu, Chiyu Max
Jiang, Leonidas Guibas, Matthias Nießner, Thomas Funkhouser
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realistic color texture generation is an important step in RGB-D surface
reconstruction, but remains challenging in practice due to inaccuracies in
reconstructed geometry, misaligned camera poses, and view-dependent imaging
artifacts.
In this work, we present a novel approach for color texture generation using
a conditional adversarial loss obtained from weakly-supervised views.
Specifically, we propose an approach to produce photorealistic textures for
approximate surfaces, even from misaligned images, by learning an objective
function that is robust to these errors.
The key idea of our approach is to learn a patch-based conditional
discriminator which guides the texture optimization to be tolerant to
misalignments.
Our discriminator takes a synthesized view and a real image, and evaluates
whether the synthesized one is realistic, under a broadened definition of
realism.
We train the discriminator by providing as 'real' examples pairs of input
views and their misaligned versions, so that the learned adversarial loss
tolerates errors from the scans.
Experiments on synthetic and real data, evaluated both quantitatively and
qualitatively, demonstrate the advantage of our approach over the state of
the art. Our code is publicly available, together with a video demonstration.
Related papers
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic texture of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z)
- Person Image Synthesis via Denoising Diffusion Model
We show how denoising diffusion models can be applied for high-fidelity person image synthesis.
Our results on two large-scale benchmarks and a user study demonstrate the photorealism of our proposed approach under challenging scenarios.
arXiv Detail & Related papers (2022-11-22T18:59:50Z)
- Intrinsic Decomposition of Document Images In-the-Wild
We present a learning-based method that directly estimates document reflectance based on intrinsic image formation.
The proposed architecture works in a self-supervised manner where only the synthetic texture is used as a weak training signal.
Our reflectance estimation scheme, when used as a pre-processing step of an OCR pipeline, shows a 26% improvement in character error rate.
arXiv Detail & Related papers (2020-11-29T21:39:58Z)
- Dynamic Object Removal and Spatio-Temporal RGB-D Inpainting via Geometry-Aware Adversarial Learning
Dynamic objects have a significant impact on the robot's perception of the environment.
In this work, we address this problem by synthesizing plausible color, texture and geometry in regions occluded by dynamic objects.
We optimize our architecture with adversarial training to synthesize fine realistic textures, enabling it to hallucinate color and depth structure in occluded regions online.
arXiv Detail & Related papers (2020-08-12T01:23:21Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.