FaceCook: Face Generation Based on Linear Scaling Factors
- URL: http://arxiv.org/abs/2109.03492v1
- Date: Wed, 8 Sep 2021 08:31:40 GMT
- Title: FaceCook: Face Generation Based on Linear Scaling Factors
- Authors: Tianren Wang, Can Peng, Teng Zhang, Brian Lovell
- Abstract summary: We propose a new approach to mapping the latent vectors of the generative model to the scaling factors.
The proposed method outperforms the baseline in terms of image diversity.
- Score: 11.682904465909003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the excellent disentanglement properties of state-of-the-art generative
models, image editing has been the dominant approach to control the attributes
of synthesised face images. However, these edited results often suffer from
artifacts or incorrect feature rendering, especially when there is a large
discrepancy between the image to be edited and the desired feature set.
Therefore, we propose a new approach to mapping the latent vectors of the
generative model to the scaling factors through solving a set of multivariate
linear equations. The coefficients of the equations are the eigenvectors of the
weight parameters of the pre-trained model, which form the basis of a hyper
coordinate system. The qualitative and quantitative results both show that the
proposed method outperforms the baseline in terms of image diversity. In
addition, the method is much more time-efficient: synthesised images with
desirable features are obtained directly from the latent vectors, rather than
by editing randomly generated images through many post-processing steps.
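The abstract describes mapping a latent vector to scaling factors by solving a multivariate linear system whose coefficient matrix is built from eigenvectors of the pre-trained model's weight parameters. A minimal sketch of that computation, using a random matrix as a stand-in for the real generator weights (the actual layer choice and eigen-decomposition details are the paper's, not shown here):

```python
import numpy as np

# Stand-in for the weight matrix of a pre-trained generator layer
# (in the paper this would come from the trained model itself).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

# Eigenvectors of the weight parameters form the basis of the
# "hyper coordinate system" described in the abstract. Using W^T W
# keeps the matrix symmetric, so the eigenvectors are real.
_, eigvecs = np.linalg.eigh(W.T @ W)
basis = eigvecs  # columns are the basis vectors

# A latent vector w maps to scaling factors s via the linear system
# basis @ s = w.
w = rng.standard_normal(8)
s = np.linalg.solve(basis, w)

# Reconstructing w from the scaling factors recovers the original vector,
# confirming s is the coordinate of w in the eigenvector basis.
w_rec = basis @ s
```

With an orthogonal eigenbasis (as `eigh` returns), the solve reduces to `basis.T @ w`, which is why this mapping is cheap compared with iterative latent-space editing.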
Related papers
- Fast constrained sampling in pre-trained diffusion models [77.21486516041391]
Diffusion models have dominated the field of large, generative image models.
We propose an algorithm for fast constrained sampling in large pre-trained diffusion models.
arXiv Detail & Related papers (2024-10-24T14:52:38Z) - A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z) - Optimize and Reduce: A Top-Down Approach for Image Vectorization [12.998637003026273]
We propose Optimize & Reduce (O&R), a top-down approach to vectorization that is both fast and domain-agnostic.
O&R aims to attain a compact representation of input images by iteratively optimizing Bézier curve parameters.
We demonstrate that our method is domain agnostic and outperforms existing works in both reconstruction and perceptual quality for a fixed number of shapes.
arXiv Detail & Related papers (2023-12-18T16:41:03Z) - Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z) - NeurInt : Learning to Interpolate through Neural ODEs [18.104328632453676]
We propose a novel generative model that learns a distribution of trajectories between two images.
We demonstrate our approach's effectiveness in generating images of improved quality, as well as its ability to learn a diverse distribution over smooth trajectories for any pair of real source and target images.
arXiv Detail & Related papers (2021-11-07T16:31:18Z) - IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z) - High Resolution Face Editing with Masked GAN Latent Code Optimization [0.0]
Face editing is a popular research topic in the computer vision community.
Recent proposed methods are based on either training a conditional encoder-decoder Generative Adversarial Network (GAN) in an end-to-end fashion or on defining an operation in the latent space of a pre-trained vanilla GAN generator model.
We propose a GAN embedding optimization procedure with spatial and semantic constraints.
arXiv Detail & Related papers (2021-03-20T08:39:41Z) - Regularization via deep generative models: an analysis point of view [8.818465117061205]
This paper proposes a new way of regularizing an inverse problem in imaging (e.g., deblurring or inpainting) by means of a deep generative neural network.
In many cases our technique achieves a clear improvement of the performance and seems to be more robust.
arXiv Detail & Related papers (2021-01-21T15:04:57Z) - Joint Estimation of Image Representations and their Lie Invariants [57.3768308075675]
Images encode both the state of the world and its content.
The automatic extraction of this information is challenging because of the high-dimensionality and entangled encoding inherent to the image representation.
This article introduces two theoretical approaches aimed at the resolution of these challenges.
arXiv Detail & Related papers (2020-12-05T00:07:41Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-arts.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Image Restoration from Parametric Transformations using Generative Models [4.467248776406006]
We develop optimum techniques for various image restoration problems using generative models.
Our approach is capable of restoring images that are distorted by transformations even when the latter contain unknown parameters.
We extend our method to accommodate mixtures of multiple images where each image is described by its own generative model.
arXiv Detail & Related papers (2020-05-27T01:14:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.