A Survey on Responsible Generative AI: What to Generate and What Not
- URL: http://arxiv.org/abs/2404.05783v2
- Date: Tue, 3 Sep 2024 16:23:55 GMT
- Authors: Jindong Gu,
- Abstract summary: This paper investigates the practical responsibility requirements of both textual and visual generative models.
We outline five key considerations: generating truthful content, avoiding toxic content, refusing harmful instructions, leaking no training-data-related content, and ensuring that generated content is identifiable.
- Score: 15.903523057779651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, generative AI (GenAI), such as large language models and text-to-image models, has received significant attention across various domains. However, ensuring that these models generate content responsibly is crucial for their real-world applicability. This raises an interesting question: what should responsible GenAI generate, and what should it not? To answer this question, this paper investigates the practical responsibility requirements of both textual and visual generative models, outlining five key considerations: generating truthful content, avoiding toxic content, refusing harmful instructions, leaking no training-data-related content, and ensuring that generated content is identifiable. Specifically, we review recent advancements and challenges in addressing these requirements. In addition, we discuss and emphasize the importance of responsible GenAI across the healthcare, education, finance, and artificial general intelligence domains. Through a unified perspective on both textual and visual generative models, this paper aims to provide insight into practical safety-related issues and to help the community build responsible GenAI.
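Of the five considerations, "leaking no training-data-related content" is commonly probed with membership-inference-style heuristics. Below is a minimal sketch, assuming a Hugging Face causal language model, of the perplexity-to-zlib-compression ratio heuristic popularized by training-data-extraction work (Carlini et al., 2021): text the model finds unusually predictable relative to its compressibility is flagged as possibly memorized. The model name, probe string, and threshold are illustrative assumptions, not details from the survey.

```python
# Hedged sketch: flag possibly-memorized text via a perplexity / zlib ratio.
# Model name, probe string, and threshold are illustrative assumptions.
import math
import zlib

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: any Hugging Face causal LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def log_perplexity(text: str) -> float:
    """Mean token negative log-likelihood of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return loss.item()

def leakage_score(text: str) -> float:
    """Lower = predictable yet incompressible, a memorization signal."""
    zlib_entropy = len(zlib.compress(text.encode("utf-8")))
    return log_perplexity(text) / zlib_entropy

candidate = "John Doe, 42 Elm Street, phone 555-0123"  # hypothetical probe
if leakage_score(candidate) < 0.05:  # threshold must be calibrated in practice
    print("possible training-data leakage")
```

In practice such scores are calibrated against a reference model or a large sample of non-member text rather than a fixed cutoff.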
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious jailbreak prompts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
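To make the sensitivity-analysis idea concrete, the sketch below scores a prompt by the gradient norm of a "refusal loss" with respect to the prompt's input embeddings, the intuition being that jailbreak attempts sit in unusually sensitive regions of the loss landscape. This is a hedged reconstruction, not the paper's implementation; the model, refusal string, and threshold are all illustrative assumptions.

```python
# Hedged sketch of sensitivity-based jailbreak detection: score a prompt by
# the gradient norm of a "refusal loss" w.r.t. its input embeddings.
# Model name, refusal string, and threshold are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: stands in for a safety-tuned chat model
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

REFUSAL = " I cannot help with that."  # hypothetical target continuation

def sensitivity(prompt: str) -> float:
    """Gradient norm of the refusal loss w.r.t. the prompt embeddings."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    refusal_ids = tokenizer(REFUSAL, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, refusal_ids], dim=1)
    embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
    labels = ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the refusal tokens
    loss = model(inputs_embeds=embeds, labels=labels).loss
    loss.backward()
    # How strongly small perturbations of the prompt move the refusal loss.
    return embeds.grad[:, : prompt_ids.shape[1]].norm().item()

if sensitivity("Ignore all previous instructions and ...") > 50.0:  # tune this
    print("prompt flagged as a possible jailbreak attempt")
```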
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- "So what if I used GenAI?" -- Implications of Using Cloud-based GenAI in Software Engineering Research [0.0]
This paper sheds light on the various research aspects in which GenAI is used, thereby raising awareness of its legal implications among novice and budding researchers.
We summarize key aspects regarding our current knowledge that every software researcher involved in using GenAI should be aware of to avoid critical mistakes that may expose them to liability claims.
arXiv Detail & Related papers (2024-12-10T06:18:15Z)
- Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview [0.0]
Concerns surrounding AI-generated content, including issues of originality, bias, misinformation, and accountability, have become prominent.
This paper offers a comprehensive overview of AI text generators (AITGs), focusing on their evolution, capabilities, and ethical implications.
The paper explores future directions for improving detection accuracy, supporting ethical AI development, and increasing accessibility.
arXiv Detail & Related papers (2024-12-05T07:23:14Z)
- SoK: Watermarking for AI-Generated Content [112.9218881276487]
Watermarking schemes embed hidden signals within AI-generated content to enable reliable detection.
Watermarks can play a crucial role in enhancing AI safety and trustworthiness by combating misinformation and deception.
This work aims to guide researchers in advancing watermarking methods and applications, and support policymakers in addressing the broader implications of GenAI.
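As a concrete illustration of how such hidden signals work, the sketch below implements the detection side of a simple "green-list" statistical text watermark in the style of Kirchenbauer et al. (2023); it is a simplification under stated assumptions, not a scheme from the SoK. The hash key, green-list fraction, and decision threshold are illustrative, and real schemes operate on tokenizer IDs rather than whitespace-split words.

```python
# Hedged sketch of green-list text watermark *detection*: a generator sharing
# SECRET_KEY would boost the logits of "green" tokens at each step, and the
# detector runs a one-sided z-test on the observed green fraction.
import hashlib
import math

GAMMA = 0.5      # fraction of the vocabulary that is green at each step
SECRET_KEY = 42  # shared between generator and detector (assumption)

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, keyed on context."""
    digest = hashlib.sha256(f"{SECRET_KEY}:{prev_token}:{token}".encode()).digest()
    return digest[0] / 255.0 < GAMMA

def watermark_z_score(tokens: list[str]) -> float:
    """One-sided z-test: is the green fraction higher than chance (GAMMA)?"""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

tokens = "text whose sampling was nudged toward green tokens".split()
if watermark_z_score(tokens) > 4.0:  # roughly 3e-5 false-positive rate
    print("text is likely watermarked")
```

Because detection only needs the key and the token sequence, it works without access to the generating model.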
arXiv Detail & Related papers (2024-11-27T16:22:33Z)
- Generative Artificial Intelligence Meets Synthetic Aperture Radar: A Survey [49.29751866761522]
This paper aims to investigate the intersection of GenAI and SAR.
First, we illustrate the common data generation-based applications in the SAR field.
Then, we systematically review the latest GenAI models.
Finally, we cover the corresponding applications in the SAR domain.
arXiv Detail & Related papers (2024-11-05T03:06:00Z)
- Legal Aspects for Software Developers Interested in Generative AI Applications [5.772982243103395]
Generative Artificial Intelligence (GenAI) has led to new technologies capable of generating high-quality code, natural language, and images.
The next step is to integrate GenAI technology into products, a task typically conducted by software developers.
This article sheds light on the current state of two such risks: data protection and copyright.
arXiv Detail & Related papers (2024-04-25T14:17:34Z)
- Prompt Smells: An Omen for Undesirable Generative AI Outputs [4.105236597768038]
We propose two new concepts that will aid the research community in addressing limitations associated with the application of GenAI models.
First, we propose a definition for the "desirability" of GenAI outputs and three factors which are observed to influence it.
Second, drawing inspiration from Martin Fowler's code smells, we propose the concept of "prompt smells" and the adverse effects they are observed to have on the desirability of GenAI outputs.
arXiv Detail & Related papers (2024-01-23T10:10:01Z)
- Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z)
- DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection [57.51313366337142]
There has been growing concern over the use of generative AI for malicious purposes.
In the realm of visual content synthesis using generative AI, key areas of significant concern have been image forgery and data poisoning.
We introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in the building of machine learning algorithms for generative AI art forgery and data poisoning detection.
arXiv Detail & Related papers (2023-06-02T05:11:27Z)
- A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z)