Happy People -- Image Synthesis as Black-Box Optimization Problem in the
Discrete Latent Space of Deep Generative Models
- URL: http://arxiv.org/abs/2306.06684v1
- Date: Sun, 11 Jun 2023 13:58:36 GMT
- Title: Happy People -- Image Synthesis as Black-Box Optimization Problem in the
Discrete Latent Space of Deep Generative Models
- Authors: Steffen Jung, Jan Christian Schwedhelm, Claudia Schillings, Margret
Keuper
- Abstract summary: We propose a novel image generative approach that optimizes the generated sample with respect to a continuously quantifiable property.
Specifically, we propose to use tree-based ensemble models as mathematical programs over the discrete latent space of vector quantized VAEs.
- Score: 10.533348468499826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, optimization in the learned latent space of deep generative
models has been successfully applied to black-box optimization problems such as
drug design, image generation or neural architecture search. Existing models
thereby leverage the ability of neural models to learn the data distribution
from a limited amount of samples such that new samples from the distribution
can be drawn. In this work, we propose a novel image generative approach that
optimizes the generated sample with respect to a continuously quantifiable
property. While we anticipate absolutely no practically meaningful application
for the proposed framework, it is theoretically principled and allows us to
quickly propose samples at the mere boundary of the training data distribution.
Specifically, we propose to use tree-based ensemble models as mathematical
programs over the discrete latent space of vector quantized VAEs, which can be
globally solved. Subsequent weighted retraining on these queries induces a
distribution shift. In the absence of a practically relevant problem, we
consider a visually appealing application: the generation of happily smiling
faces (where the training distribution only contains less happy people) - and
show the principled behavior of our approach in terms of improved FID and
higher smile degree over baseline approaches.
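The core idea of the abstract, globally optimizing a tree-ensemble surrogate over a finite discrete latent space and then reweighting the training data, can be sketched as follows. This is an illustrative toy: the latent-space sizes, the hand-built trees, and the rank-based weighting scheme are assumptions for the sketch, not the paper's exact formulation, and the paper solves the ensemble as a mathematical program rather than by enumeration.

```python
import itertools

# Illustrative toy: a discrete latent space with L positions, each holding one
# of K codebook indices (as in a VQ-VAE). Sizes and trees are assumptions.
L, K = 4, 3

def tree_1(z):
    # Toy "decision tree" over discrete codes: splits on position 0.
    return 1.0 if z[0] == 2 else 0.0

def tree_2(z):
    # Toy tree splitting on whether positions 1 and 2 share a codebook index.
    return 0.5 if z[1] == z[2] else -0.5

def ensemble_score(z):
    # Tree-ensemble surrogate for the target property (e.g. smile degree).
    return (tree_1(z) + tree_2(z)) / 2.0

def global_argmax():
    # The latent space is discrete and finite, so the surrogate admits a global
    # optimum. The paper encodes the ensemble as a mathematical program; here
    # we simply enumerate all K**L codes.
    return max(itertools.product(range(K), repeat=L), key=ensemble_score)

def retraining_weights(scores, k=0.1):
    # Rank-based weighting for the subsequent weighted-retraining step:
    # higher-scoring samples receive larger weight, shifting the distribution.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    rank = {i: r for r, i in enumerate(order)}
    return [1.0 / (k * len(scores) + rank[i]) for i in range(len(scores))]
```

Querying `global_argmax()` yields the latent code with the highest surrogate score; in the paper's setting, decoding such a code through the VQ-VAE decoder produces the proposed sample.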
Related papers
- Bayesian Inverse Graphics for Few-Shot Concept Learning [3.475273727432576]
We present a Bayesian model of perception that learns using only minimal data.
We show how this representation can be used for downstream tasks such as few-shot classification and estimation.
arXiv Detail & Related papers (2024-09-12T18:30:41Z)
- A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization [7.378582040635655]
Current deep learning approaches rely on generative models that yield exact sample likelihoods.
This work introduces a method that lifts this restriction and makes it possible to employ highly expressive latent variable models.
We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
arXiv Detail & Related papers (2024-06-03T17:55:02Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Space-Variant Total Variation boosted by learning techniques in few-view tomographic imaging [0.0]
This paper focuses on the development of a space-variant regularization model for solving an under-determined linear inverse problem.
The primary objective of the proposed model is to achieve a good balance between denoising and the preservation of fine details and edges.
A convolutional neural network is designed to approximate both the ground-truth image and its gradient, using an elastic loss function during training.
arXiv Detail & Related papers (2024-04-25T08:58:41Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used in practice.
In this paper we examine the use of convex neural recovery models.
We show that the stationary points of the non-convex objective can be characterized as the global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Robust Model-Based Optimization for Challenging Fitness Landscapes [96.63655543085258]
Protein design involves optimization on a fitness landscape.
Leading methods are challenged by sparsity of high-fitness samples in the training set.
We show that this problem of "separation" in the design space is a significant bottleneck in existing model-based optimization tools.
We propose a new approach that uses a novel VAE as its search model to overcome the problem.
arXiv Detail & Related papers (2023-05-23T03:47:32Z)
- Score-Based Generative Modeling through Stochastic Differential Equations [114.39209003111723]
We present a stochastic differential equation (SDE) that transforms a complex data distribution into a known prior distribution by gradually injecting noise.
A corresponding reverse-time SDE transforms the prior distribution back into the data distribution by slowly removing the noise.
By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks.
We demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
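The forward and reverse-time processes in this summary take the standard form from the score-SDE framework: noise is injected by a forward SDE, and the learned score reverses it.

```latex
% Forward SDE: perturb data toward a known prior by injecting noise
\mathrm{d}\mathbf{x} = \mathbf{f}(\mathbf{x}, t)\,\mathrm{d}t + g(t)\,\mathrm{d}\mathbf{w}
% Reverse-time SDE: remove noise using the score \nabla_{\mathbf{x}} \log p_t(\mathbf{x}),
% which is estimated with a neural network
\mathrm{d}\mathbf{x} = \bigl[\mathbf{f}(\mathbf{x}, t) - g(t)^2\,\nabla_{\mathbf{x}}\log p_t(\mathbf{x})\bigr]\,\mathrm{d}t + g(t)\,\mathrm{d}\bar{\mathbf{w}}
```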
arXiv Detail & Related papers (2020-11-26T19:39:10Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
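The expected-reward objective mentioned above is typically optimized with a score-function (REINFORCE-style) gradient estimator. A minimal sketch with a toy categorical policy; the policy, reward, and learning rate here are illustrative assumptions, not the paper's setup:

```python
import math
import random

random.seed(0)

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, reward_fn, lr=0.5):
    # Score-function (REINFORCE) update: sample an action, then move its
    # log-probability in proportion to the observed reward, using
    # d log p(a) / d logit_i = 1[i == a] - p_i.
    probs = softmax(logits)
    a = random.choices(range(len(probs)), weights=probs)[0]
    r = reward_fn(a)
    return [x + lr * r * ((1.0 if i == a else 0.0) - p)
            for i, (x, p) in enumerate(zip(logits, probs))]

# Toy reward: only action 2 is "good"; the policy concentrates on it.
logits = [0.0, 0.0, 0.0]
for _ in range(200):
    logits = reinforce_step(logits, lambda a: 1.0 if a == 2 else 0.0)
```

Because the reward is zero for the other actions, every update pushes probability mass toward action 2, so the policy's expected reward increases monotonically in expectation.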
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Instance Selection for GANs [25.196177369030146]
Advances in Generative Adversarial Networks (GANs) have led to their widespread adoption for generating high-quality synthetic imagery.
GANs often produce unrealistic samples which fall outside of the data manifold.
We propose a novel approach to improve sample quality: altering the training dataset via instance selection before model training has taken place.
arXiv Detail & Related papers (2020-07-30T06:33:51Z)
- Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation [51.091890311312085]
We propose a new training scheme for auto-regressive sequence generative models, which is effective and stable when operating at large sample space encountered in text generation.
Our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.
arXiv Detail & Related papers (2020-07-12T15:31:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.