Prompt Smells: An Omen for Undesirable Generative AI Outputs
- URL: http://arxiv.org/abs/2401.12611v1
- Date: Tue, 23 Jan 2024 10:10:01 GMT
- Title: Prompt Smells: An Omen for Undesirable Generative AI Outputs
- Authors: Krishna Ronanki, Beatriz Cabrero-Daniel, Christian Berger
- Abstract summary: We propose two new concepts that will aid the research community in addressing limitations associated with the application of GenAI models.
First, we propose a definition for the "desirability" of GenAI outputs and three factors which are observed to influence it.
Second, drawing inspiration from Martin Fowler's code smells, we propose the concept of "prompt smells" and the adverse effects they are observed to have on the desirability of GenAI outputs.
- Score: 4.105236597768038
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent Generative Artificial Intelligence (GenAI) trends focus on various
applications, including creating stories, illustrations, poems, articles,
computer code, music compositions, and videos. Extrinsic hallucinations are a
critical limitation of such GenAI, which can lead to significant challenges in
achieving and maintaining the trustworthiness of GenAI. In this paper, we
propose two new concepts that we believe will aid the research community in
addressing limitations associated with the application of GenAI models. First,
we propose a definition for the "desirability" of GenAI outputs and three
factors which are observed to influence it. Second, drawing inspiration from
Martin Fowler's code smells, we propose the concept of "prompt smells" and the
adverse effects they are observed to have on the desirability of GenAI outputs.
We expect our work will contribute to the ongoing conversation about the
desirability of GenAI outputs and help advance the field in a meaningful way.
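The paper introduces "prompt smells" as a concept rather than a tool, so it does not include an implementation. As a purely illustrative sketch of how such smells might be surfaced in practice, analogous to code-smell linting, the following hypothetical Python checker applies a few simple heuristics (vague wording, overloaded requests, missing context). The cue lists, names, and thresholds below are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a hypothetical "prompt smell" checker by analogy with
# code-smell linting. The heuristics, cue phrases, and thresholds are assumptions,
# not part of the paper.

from dataclasses import dataclass


@dataclass
class SmellReport:
    smell: str      # short name of the suspected prompt smell
    evidence: str   # fragment or statistic that triggered the report


# Hypothetical cue lists (assumptions).
VAGUE_CUES = ("something", "stuff", "etc", "and so on")
OVERLOAD_CUES = ("also", "additionally", "furthermore")


def check_prompt(prompt: str) -> list[SmellReport]:
    """Return a list of suspected prompt smells found in `prompt`."""
    reports: list[SmellReport] = []
    lowered = prompt.lower()

    # Vague wording often leaves the model to guess the intent.
    for cue in VAGUE_CUES:
        if cue in lowered:
            reports.append(SmellReport("vague instruction", cue))

    # Many stacked requests in a single prompt may reduce output desirability.
    if sum(lowered.count(cue) for cue in OVERLOAD_CUES) >= 3:
        reports.append(SmellReport("overloaded request", "3+ stacked asks"))

    # Very short prompts often lack the context the model needs.
    word_count = len(prompt.split())
    if word_count < 5:
        reports.append(SmellReport("missing context", f"{word_count} words"))

    return reports


if __name__ == "__main__":
    example = "Write something about code, also tests, also docs, also CI etc."
    for report in check_prompt(example):
        print(f"{report.smell}: {report.evidence}")
```

Running the sketch on the example prompt flags it for vague wording and an overloaded request; a real detector would need the factors and smell catalogue the paper proposes rather than these placeholder heuristics.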
Related papers
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z)
- Generative artificial intelligence in dentistry: Current approaches and future challenges [0.0]
Generative AI (GenAI) models bridge the usability gap of AI by providing a natural language interface to interact with complex models.
In dental education, the student now has the opportunity to solve a plethora of questions by only prompting a GenAI model.
GenAI can also be used in dental research, where the applications range from new drug discovery to assistance in academic writing.
arXiv Detail & Related papers (2024-07-24T03:33:47Z)
- Model-based Maintenance and Evolution with GenAI: A Look into the Future [47.93555901495955]
We argue that Generative Artificial Intelligence (GenAI) can be used as a means to address the limitations of Model-Based Maintenance and Evolution (MBM&E).
We propose that GenAI can be used in MBM&E for: reducing engineers' learning curve, maximizing efficiency with recommendations, or serving as a reasoning tool to understand domain problems.
arXiv Detail & Related papers (2024-07-09T23:13:26Z)
- The Influencer Next Door: How Misinformation Creators Use GenAI [1.1650821883155187]
We find that non-experts increasingly use GenAI to remix, repackage, and (re)produce content to meet their personal needs and desires.
We analyze how these understudied emergent uses of GenAI produce new or accelerated misinformation harms.
arXiv Detail & Related papers (2024-05-22T11:40:22Z)
- Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda [1.8592384822257952]
We elaborate on why XAI has gained importance with the rise of GenAI and its challenges for explainability research.
We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
arXiv Detail & Related papers (2024-04-15T08:18:16Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z)
- Innovating Computer Programming Pedagogy: The AI-Lab Framework for Generative AI Adoption [0.0]
We introduce "AI-Lab," a framework for guiding students in effectively leveraging GenAI within core programming courses.
By identifying and rectifying GenAI's errors, students enrich their learning process.
For educators, AI-Lab provides mechanisms to explore students' perceptions of GenAI's role in their learning experience.
arXiv Detail & Related papers (2023-08-23T17:20:37Z)
- A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z)
- Investigating Explainability of Generative AI for Code through Scenario-based Design [44.44517254181818]
Generative AI (GenAI) technologies are maturing and being applied to application domains such as software engineering.
We conduct 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs.
Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.
arXiv Detail & Related papers (2022-02-10T08:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.