Inpainting Normal Maps for Lightstage data
- URL: http://arxiv.org/abs/2401.08099v1
- Date: Tue, 16 Jan 2024 03:59:07 GMT
- Title: Inpainting Normal Maps for Lightstage data
- Authors: Hancheng Zuo and Bernard Tiddeman
- Abstract summary: This study introduces a novel method for inpainting normal maps using a generative adversarial network (GAN).
Our approach extends previous general image inpainting techniques, employing a bow tie-like generator network and a discriminator network, with alternating training phases.
Our findings suggest that the proposed model effectively generates high-quality, realistic inpainted normal maps, suitable for performance capture applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces a novel method for inpainting normal maps using a
generative adversarial network (GAN). Normal maps, often derived from a
lightstage, are crucial in performance capture but can have obscured areas due
to movement (e.g., by arms, hair, or props). Inpainting fills these missing
areas with plausible data. Our approach extends previous general image
inpainting techniques, employing a bow tie-like generator network and a
discriminator network, with alternating training phases. The generator aims to
synthesize images aligning with the ground truth and deceive the discriminator,
which differentiates between real and processed images. Periodically, the
discriminator undergoes retraining to enhance its ability to identify processed
images. Importantly, our method adapts to the unique characteristics of normal
map data, necessitating modifications to the loss function. We utilize a cosine
loss instead of mean squared error loss for generator training. Limited
training data availability, even with synthetic datasets, demands significant
augmentation, considering the specific nature of the input data. This includes
appropriate image flipping and in-plane rotations to accurately alter normal
vectors. Throughout training, we monitored key metrics such as average loss,
Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio
(PSNR) for the generator, along with average loss and accuracy for the
discriminator. Our findings suggest that the proposed model effectively
generates high-quality, realistic inpainted normal maps, suitable for
performance capture applications. These results establish a foundation for
future research, potentially involving more advanced networks and comparisons
with inpainting of source images used to create the normal maps.
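The cosine loss mentioned above can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the function name, the (H, W, 3) channel layout, and the "1 minus cosine" formulation are assumptions made for illustration, but the idea matches the abstract: penalize the angle between predicted and ground-truth normal vectors rather than their squared component-wise difference.

```python
import numpy as np

def cosine_normal_loss(pred, target, eps=1e-8):
    """Mean (1 - cos(angle)) between predicted and ground-truth normals.

    pred, target: (H, W, 3) arrays of normal vectors. Vectors are
    re-normalized first, so only direction is penalized, not length.
    Loss is 0 for identical directions and 2 for opposite directions.
    """
    p = pred / (np.linalg.norm(pred, axis=-1, keepdims=True) + eps)
    t = target / (np.linalg.norm(target, axis=-1, keepdims=True) + eps)
    cos = np.sum(p * t, axis=-1)          # per-pixel dot product
    return float(np.mean(1.0 - cos))
```

Unlike MSE on raw channels, this loss respects the unit-sphere geometry of normal data, which is presumably why the authors adopt it for generator training.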
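The augmentation constraint described in the abstract (flips and in-plane rotations must also alter the normal vectors themselves) can be sketched as follows. This is a hedged illustration, not the paper's code: the function names and the assumption that channels store (x, y, z) in view space are mine, but the vector corrections shown are the standard ones for normal-map data.

```python
import numpy as np

def flip_normal_map(nmap):
    """Horizontally flip an (H, W, 3) normal map.

    Mirroring the pixel grid alone would leave the normals pointing the
    wrong way; the x-component must also be negated.
    """
    flipped = nmap[:, ::-1, :].copy()
    flipped[..., 0] *= -1.0
    return flipped

def rotate_normals_inplane(nmap, angle_rad):
    """Rotate the normal *vectors* about the view axis by angle_rad.

    The pixel grid must be rotated separately (e.g. with an image
    library); this shows only the per-vector correction, a 2D rotation
    of the (x, y) components with z unchanged.
    """
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    x, y, z = nmap[..., 0], nmap[..., 1], nmap[..., 2]
    out = np.empty_like(nmap)
    out[..., 0] = c * x - s * y
    out[..., 1] = s * x + c * y
    out[..., 2] = z
    return out
```

Applying the image transform without the matching vector transform would feed the network geometrically inconsistent training data, which is the pitfall the abstract's "accurately alter normal vectors" remark refers to.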
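Of the monitoring metrics listed (average loss, SSIM, PSNR), PSNR is simple enough to sketch directly; SSIM is usually taken from a library such as scikit-image. This is a generic reference formula, not the authors' code, and the assumed dynamic range `max_val=1.0` is an illustration choice.

```python
import numpy as np

def psnr(img, ref, max_val=1.0):
    """Peak Signal-to-Noise Ratio, in dB, for images in [0, max_val]."""
    mse = np.mean((np.asarray(img, dtype=np.float64)
                   - np.asarray(ref, dtype=np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")                 # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

Higher PSNR indicates the inpainted region is numerically closer to the ground truth; SSIM complements it by measuring perceptual/structural similarity.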
Related papers
- GLIP: Electromagnetic Field Exposure Map Completion by Deep Generative Networks [0.6144680854063939]
We present a method to reconstruct EMF exposure maps using only the generator network in GANs.
This approach uses a prior from sensor data as Local Image Prior (LIP) captured by deep convolutional generative networks.
Experimental results show that, even when only sparse sensor data are available, our method can produce accurate estimates.
arXiv Detail & Related papers (2024-05-06T11:43:01Z) - Reconstructed Student-Teacher and Discriminative Networks for Anomaly Detection [8.35780131268962]
A powerful anomaly detection method is proposed based on student-teacher feature pyramid matching (STPM), which consists of a student and teacher network.
To improve the accuracy of STPM, this work uses a student network, as in generative models, to reconstruct normal features.
To further improve accuracy, a discriminative network trained with pseudo-anomalies from anomaly maps is used in our method.
arXiv Detail & Related papers (2022-10-14T05:57:50Z) - Unpaired Image Super-Resolution with Optimal Transport Maps [128.1189695209663]
Real-world image super-resolution (SR) tasks often lack paired datasets, limiting the application of supervised techniques.
We propose an algorithm for unpaired SR which learns an unbiased OT map for the perceptual transport cost.
Our algorithm provides nearly state-of-the-art performance on the large-scale unpaired AIM-19 dataset.
arXiv Detail & Related papers (2022-02-02T16:21:20Z) - Low-Light Image Enhancement with Normalizing Flow [92.52290821418778]
In this paper, we investigate to model this one-to-many relationship via a proposed normalizing flow model.
An invertible network takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.
Experimental results on existing benchmark datasets show our method achieves better quantitative and qualitative results, obtaining better-exposed illumination, less noise, fewer artifacts, and richer colors.
arXiv Detail & Related papers (2021-09-13T12:45:08Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm generalizes to detecting various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z) - Pixel-wise Dense Detector for Image Inpainting [34.721991959357425]
Recent GAN-based image inpainting approaches adopt an average strategy to discriminate the generated image and output a scalar.
We propose a novel detection-based generative framework for image inpainting, which adopts the min-max strategy in an adversarial process.
Experiments on multiple public datasets show the superior performance of the proposed framework.
arXiv Detail & Related papers (2020-11-04T13:45:27Z) - Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z) - Iterative energy-based projection on a normal data manifold for anomaly localization [3.785123406103385]
We propose a new approach for projecting anomalous data onto an autoencoder-learned normal data manifold.
By iteratively updating the input of the autoencoder, we bypass the loss of high-frequency information caused by the autoencoder bottleneck.
arXiv Detail & Related papers (2020-02-10T13:35:41Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.