Prompt Evolution for Generative AI: A Classifier-Guided Approach
- URL: http://arxiv.org/abs/2305.16347v1
- Date: Wed, 24 May 2023 14:48:18 GMT
- Title: Prompt Evolution for Generative AI: A Classifier-Guided Approach
- Authors: Melvin Wong, Yew-Soon Ong, Abhishek Gupta, Kavitesh K. Bali, Caishun
Chen
- Abstract summary: This paper conceptualizes prompt evolution, imparting evolutionary selection pressure and variation during the generative process to produce better images.
A novelty of our evolutionary algorithm is that the pre-trained generative model gives us implicit mutation operations.
- Score: 18.500689885854694
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthesis of digital artifacts conditioned on user prompts has become an
important paradigm facilitating an explosion of use cases with generative AI.
However, such models often fail to connect the generated outputs and desired
target concepts/preferences implied by the prompts. Current research addressing
this limitation has largely focused on enhancing the prompts before output
generation or improving the model's performance up front. In contrast, this
paper conceptualizes prompt evolution, imparting evolutionary selection
pressure and variation during the generative process to produce multiple
outputs that satisfy the target concepts/preferences better. We propose a
multi-objective instantiation of this broader idea that uses a multi-label
image classifier-guided approach. The predicted labels from the classifiers
serve as multiple objectives to optimize, with the aim of producing diversified
images that meet user preferences. A novelty of our evolutionary algorithm is
that the pre-trained generative model gives us implicit mutation operations,
leveraging the model's stochastic generative capability to automate the
creation of Pareto-optimized images more faithful to user preferences.
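The core loop the abstract describes can be sketched compactly. Below is a minimal, hypothetical Python sketch, not the authors' implementation: `generate` and `classify` stand in for a stochastic text-to-image model and a pre-trained multi-label classifier, and the selection step is a naive non-dominated filter.

```python
def dominates(a, b):
    """Pareto dominance: a is no worse than b on every objective
    and strictly better on at least one (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population, scores):
    """Return the non-dominated members of the population."""
    return [ind for ind, s in zip(population, scores)
            if not any(dominates(t, s) for t in scores if t is not s)]

def prompt_evolution(prompt, generate, classify, target_labels,
                     pop_size=16, generations=10):
    """Classifier-guided prompt evolution (illustrative only).

    generate(prompt) -> image; stochastic, so each call varies.
    classify(image)  -> dict mapping label -> predicted probability.
    target_labels    -> user-preferred concepts, one objective each.
    """
    population = [generate(prompt) for _ in range(pop_size)]
    for _ in range(generations):
        # Each target label's predicted probability is one objective.
        scores = [[classify(img)[lbl] for lbl in target_labels]
                  for img in population]
        survivors = pareto_front(population, scores)
        # "Implicit mutation": re-sampling the stochastic generator
        # produces varied offspring without a hand-crafted operator.
        offspring = [generate(prompt)
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return population
```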
Related papers
- Optimizing Multi-Round Enhanced Training in Diffusion Models for Improved Preference Understanding [29.191627597682597]
We present a framework incorporating human-in-the-loop feedback, leveraging a well-trained reward model aligned with user preferences.
Our approach consistently surpasses competing models in user satisfaction, especially in multi-turn dialogue scenarios.
arXiv Detail & Related papers (2025-04-25T09:35:02Z)
- Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization.
Our guidance is a preference model trained to predict the probability that one design dominates another.
Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
arXiv Detail & Related papers (2025-03-21T16:49:38Z)
- Personalized Preference Fine-tuning of Diffusion Models [75.22218338096316]
We introduce PPD, a multi-reward optimization objective that aligns diffusion models with personalized preferences.
With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way.
Our approach achieves an average win rate of 76% over Stable Cascade, generating images that more accurately reflect specific user preferences.
arXiv Detail & Related papers (2025-01-11T22:38:41Z)
- Generative Diffusion Models for Sequential Recommendations [7.948486055890262]
Generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have shown promise in sequential recommendation tasks.
This research introduces enhancements to the DiffuRec architecture to improve robustness and incorporates a cross-attention mechanism in the Approximator to better capture relevant user-item interactions.
arXiv Detail & Related papers (2024-10-25T09:39:05Z)
- Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation [87.50120181861362]
VisionPrefer is a high-quality and fine-grained preference dataset that captures multiple preference aspects.
We train a reward model, VP-Score, over VisionPrefer to guide the training of text-to-image generative models; its preference prediction accuracy is comparable to that of human annotators.
arXiv Detail & Related papers (2024-04-23T14:53:15Z)
- Diverse and Tailored Image Generation for Zero-shot Multi-label Classification [3.354528906571718]
Zero-shot multi-label classification has garnered considerable attention for its capacity to make predictions on unseen labels without human annotations.
Prevailing approaches often use seen classes as imperfect proxies for unseen ones, resulting in suboptimal performance.
We propose an innovative solution: generating synthetic data to construct a training set explicitly tailored for proxyless training on unseen labels.
arXiv Detail & Related papers (2024-04-04T01:34:36Z)
- Refine, Discriminate and Align: Stealing Encoders via Sample-Wise Prototypes and Multi-Relational Extraction [57.16121098944589]
RDA is a pioneering approach designed to address two primary deficiencies prevalent in previous endeavors aiming at stealing pre-trained encoders.
It is accomplished via a sample-wise prototype, which consolidates the target encoder's representations for a given sample's various perspectives.
For greater efficacy, we develop a multi-relational extraction loss that trains the surrogate encoder to discriminate mismatched embedding-prototype pairs.
arXiv Detail & Related papers (2023-12-01T15:03:29Z)
- Fast Adaptation with Bradley-Terry Preference Models in Text-To-Image Classification and Generation [0.0]
We leverage the Bradley-Terry preference model to develop a fast adaptation method that efficiently fine-tunes the original model (a minimal sketch of the Bradley-Terry comparison appears after this list).
Extensive evidence of the capabilities of this framework is provided through experiments in different domains related to multimodal text and image understanding.
arXiv Detail & Related papers (2023-07-15T07:53:12Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows [54.050498411883495]
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior diverse image generation performance as compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- A Generic Approach for Enhancing GANs by Regularized Latent Optimization [79.00740660219256]
We introduce a generic framework called generative-model inference that is capable of enhancing pre-trained GANs effectively and seamlessly.
Our basic idea is to efficiently infer the optimal latent distribution for the given requirements using Wasserstein gradient flow techniques.
arXiv Detail & Related papers (2021-12-07T05:22:50Z)
- BIGRoC: Boosting Image Generation via a Robust Classifier [27.66648389933265]
We propose a general model-agnostic technique for improving the image quality and the distribution fidelity of generated images.
Our method, termed BIGRoC, is based on a post-processing procedure via the guidance of a given robust classifier.
arXiv Detail & Related papers (2021-08-08T18:05:44Z)
- Adversarial and Contrastive Variational Autoencoder for Sequential Recommendation [25.37244686572865]
We propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation.
We first introduce the adversarial training for sequence generation under the Adversarial Variational Bayes framework, which enables our model to generate high-quality latent variables.
Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence.
arXiv Detail & Related papers (2021-03-19T09:01:14Z)
- Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
When paired with a strong auto-regressive decoder, VAEs tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
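For reference, the Bradley-Terry preference model named in the fast-adaptation entry above reduces to a logistic comparison of scalar quality scores. A minimal sketch follows; in practice the scores would come from a learned reward model, which is assumed here.

```python
import math

def bradley_terry_prob(score_i: float, score_j: float) -> float:
    """Bradley-Terry model: probability that item i is preferred over
    item j, given scalar quality scores (higher is better)."""
    return 1.0 / (1.0 + math.exp(score_j - score_i))

# An item scored 2.0 beats one scored 1.0 about 73% of the time.
print(bradley_terry_prob(2.0, 1.0))  # ~0.731
```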
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.