Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney
- URL: http://arxiv.org/abs/2311.12131v1
- Date: Mon, 20 Nov 2023 19:28:52 GMT
- Title: Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney
- Authors: Shachar Don-Yehiya and Leshem Choshen and Omri Abend
- Abstract summary: This paper analyzes how user prompts evolve across iterative interactions with a text-to-image model.
We show that prompts predictably converge toward specific traits along these iterations.
The possibility that users adapt to the model's preference raises concerns about reusing user data for further training.
- Score: 28.39697076030535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating images with a Text-to-Image model often requires multiple trials,
where human users iteratively update their prompt based on feedback, namely the
output image. Taking inspiration from cognitive work on reference games and
dialogue alignment, this paper analyzes the dynamics of the user prompts along
such iterations. We compile a dataset of iterative interactions of human users
with Midjourney. Our analysis then reveals that prompts predictably converge
toward specific traits along these iterations. We further study whether this
convergence is due to human users, realizing they missed important details, or
due to adaptation to the model's "preferences", producing better images for a
specific language style. We show initial evidence that both possibilities are
at play. The possibility that users adapt to the model's preference raises
concerns about reusing user data for further training. The prompts may be
biased towards the preferences of a specific model, rather than aligning with
human intentions and natural modes of expression.
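As a concrete illustration of what converging toward a trait means, here is a minimal sketch, not the authors' analysis code, of tracking one candidate trait, prompt length, across iteration indices; the session data and the trait choice are illustrative assumptions.

```python
from statistics import mean

# Toy sessions: each is the ordered list of prompts a user issued while
# iterating on one image (illustrative data, not the Midjourney dataset).
sessions = [
    ["a cat",
     "a fluffy cat, studio light",
     "a fluffy cat, studio light, 4k, detailed"],
    ["castle",
     "castle at dusk",
     "castle at dusk, cinematic, highly detailed"],
]

def trait(prompt: str) -> int:
    """One example trait: prompt length in whitespace tokens."""
    return len(prompt.split())

# Average the trait at each iteration index across sessions; a steady
# trend over indices is the kind of predictable convergence the paper reports.
for i in range(max(len(s) for s in sessions)):
    vals = [trait(s[i]) for s in sessions if len(s) > i]
    print(f"iteration {i}: mean prompt length = {mean(vals):.1f}")
```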
Related papers
- Learning from Naturally Occurring Feedback [25.266461597402056]
We propose a scalable method for extracting feedback that users naturally include when interacting with chat models.
We manually annotated conversation data to confirm the presence of naturally occurring feedback.
We apply our method to over 1M conversations to obtain hundreds of thousands of feedback samples.
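As a toy illustration of what extracting such feedback could look like, the cue patterns below are invented for the example; the paper's actual extraction method is more involved.

```python
import re
from typing import Optional

# Invented cue patterns for implicit feedback in user turns; the paper's
# extraction approach is more sophisticated than keyword matching.
POSITIVE = re.compile(r"\b(thanks|thank you|perfect|exactly what i)\b", re.I)
NEGATIVE = re.compile(r"\b(that's wrong|not what i asked|incorrect)\b", re.I)

def label_user_turn(turn: str) -> Optional[str]:
    """Tag a user turn as implicit positive/negative feedback, if any."""
    if POSITIVE.search(turn):
        return "positive"
    if NEGATIVE.search(turn):
        return "negative"
    return None

print(label_user_turn("Thanks, perfect!"))                 # positive
print(label_user_turn("That's wrong, I asked for JSON."))  # negative
print(label_user_turn("Can you also add tests?"))          # None
```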
arXiv Detail & Related papers (2024-07-15T17:41:34Z)
- Personalized Language Modeling from Personalized Human Feedback [49.344833339240566]
Reinforcement Learning from Human Feedback (RLHF) is commonly used to fine-tune large language models to better align with human preferences, but it implicitly assumes that all users share the same preferences.
This work addresses that limitation by developing methods for building personalized language models.
arXiv Detail & Related papers (2024-02-06T04:18:58Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Aligning Text-to-Image Models using Human Feedback [104.76638092169604]
Current text-to-image models often generate images that are inadequately aligned with text prompts.
We propose a fine-tuning method for aligning such models using human feedback.
Our results demonstrate the potential for learning from human feedback to significantly improve text-to-image models.
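One common recipe in this line of work is reward-weighted likelihood: train a reward model on human labels, then weight the generative loss by the predicted reward. The sketch below shows only that weighting step and is a simplification, not necessarily the paper's exact objective.

```python
import torch

def reward_weighted_loss(logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """Reward-weighted likelihood: up-weight well-rated (prompt, image)
    pairs and down-weight poorly rated ones (simplified sketch)."""
    # logprobs: per-example log-likelihood under the text-to-image model
    # rewards:  per-example scores from a reward model trained on human labels
    return -(rewards * logprobs).mean()

logprobs = torch.tensor([-3.2, -1.1, -2.5])  # placeholder model outputs
rewards = torch.tensor([0.1, 0.9, 0.5])      # placeholder reward scores
print(reward_weighted_loss(logprobs, rewards))  # scalar training loss
```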
arXiv Detail & Related papers (2023-02-23T17:34:53Z)
- Chain of Hindsight Aligns Language Models with Feedback [62.68665658130472]
We propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity.
We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model.
By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors.
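A minimal sketch of that conversion step; the control phrases below are illustrative stand-ins, not the paper's exact templates.

```python
# Turn rated outputs into a hindsight-conditioned training string; an LM
# fine-tuned on such sequences learns to condition on the feedback phrase,
# so prompting with "A good answer is:" at inference elicits the better style.
def to_hindsight_sequence(prompt: str, good: str, bad: str) -> str:
    return f"{prompt}\nA bad answer is: {bad}\nA good answer is: {good}"

print(to_hindsight_sequence(
    "Summarize the paper in one sentence.",
    "User prompts converge toward model-preferred traits over iterations.",
    "It is about prompts and stuff.",
))
```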
arXiv Detail & Related papers (2023-02-06T10:28:16Z)
- DASH: Visual Analytics for Debiasing Image Classification via User-Driven Synthetic Data Augmentation [27.780618650580923]
Image classification models often learn to predict a class based on irrelevant co-occurrences between input features and an output class in training data.
We call the unwanted correlations "data biases," and the visual features causing data biases "bias factors".
It is challenging to identify and mitigate biases automatically without human intervention.
arXiv Detail & Related papers (2022-09-14T00:44:41Z)
- Towards Building a Personalized Dialogue Generator via Implicit User Persona Detection [0.0]
We consider that high-quality dialogue generation is essentially built on apprehending the persona of the other party.
Motivated by this, we propose a novel personalized dialogue generator by detecting implicit user persona.
arXiv Detail & Related papers (2022-04-15T08:12:10Z)
- Dialogue Response Ranking Training with Large-Scale Human Feedback Data [52.12342165926226]
We leverage social media feedback data to build a large-scale training dataset for feedback prediction.
We train DialogRPT, a set of GPT-2-based models, on 133M pairs of human feedback data.
Our ranker outperforms the conventional dialog perplexity baseline by a large margin on predicting Reddit feedback.
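The released checkpoints can be loaded through Hugging Face Transformers; the sketch below follows the public DialogRPT model card (the checkpoint name and the context<|endoftext|>response input convention are taken from it).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The "updown" head predicts how likely a response is to get upvoted.
name = "microsoft/DialogRPT-updown"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

def score(context: str, response: str) -> float:
    """Probability that `response` is a well-received reply to `context`."""
    ids = tokenizer.encode(context + "<|endoftext|>" + response,
                           return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits
    return torch.sigmoid(logits).item()

print(score("Can you recommend a book?", "Sure, try 'The Hobbit'."))
```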
arXiv Detail & Related papers (2020-09-15T10:50:05Z)
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and the prototype encoding structure mutually benefit the few-shot model.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)