A Survey on Responsible Generative AI: What to Generate and What Not
- URL: http://arxiv.org/abs/2404.05783v2
- Date: Tue, 3 Sep 2024 16:23:55 GMT
- Title: A Survey on Responsible Generative AI: What to Generate and What Not
- Authors: Jindong Gu,
- Abstract summary: This paper investigates the practical responsible requirements of both textual and visual generative models.
We outline five key considerations: generating truthful content, avoiding toxic content, refusing harmful instructions, leaking no training-data-related content, and ensuring that generated content is identifiable.
- Score: 15.903523057779651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, generative AI (GenAI), like large language models and text-to-image models, has received significant attention across various domains. However, ensuring the responsible generation of content by these models is crucial for their real-world applicability. This raises an interesting question: What should responsible GenAI generate, and what should it not? To answer this question, this paper investigates the practical responsibility requirements of both textual and visual generative models, outlining five key considerations: generating truthful content, avoiding toxic content, refusing harmful instructions, leaking no training-data-related content, and ensuring that generated content is identifiable. Specifically, we review recent advancements and challenges in addressing these requirements. In addition, we discuss and emphasize the importance of responsible GenAI across the healthcare, education, finance, and artificial general intelligence domains. Through a unified perspective on both textual and visual generative models, this paper aims to provide insights into practical safety-related issues and further benefit the community in building responsible GenAI.
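The five requirements above can be read as a pre-release checklist. As a minimal sketch, the field names and the all-or-nothing pass rule below are our own illustrative assumptions, not a method proposed by the survey:

```python
from dataclasses import dataclass

@dataclass
class GenerationReport:
    truthful: bool          # content is factually grounded
    non_toxic: bool         # no toxic or offensive language
    refused_harmful: bool   # harmful instructions were refused
    no_data_leak: bool      # no training-data memorization surfaced
    identifiable: bool      # output is detectable as AI-generated

def responsible(report: GenerationReport) -> bool:
    """A generation passes only if all five requirements hold."""
    return all([report.truthful, report.non_toxic, report.refused_harmful,
                report.no_data_leak, report.identifiable])
```

In this toy framing, a single failed requirement (e.g. a leaked training sample) is enough to mark an output as not responsibly generated.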
Related papers
- Information Retrieval in the Age of Generative AI: The RGB Model [77.96475639967431]
This paper presents a novel quantitative approach to shed light on the complex information dynamics arising from the growing use of generative AI tools.
We propose a model to characterize the generation, indexing, and dissemination of information in response to new topics.
Our findings suggest that the rapid pace of generative AI adoption, combined with increasing user reliance, can outpace human verification, escalating the risk of inaccurate information proliferation.
arXiv Detail & Related papers (2025-04-29T10:21:40Z) - Experiences with Content Development and Assessment Design in the Era of GenAI [0.032771631221674334]
Advances in GenAI have revolutionised several aspects of education, especially subject and assessment design.
The paper intends to determine how effectively GenAI can design a subject, including lectures, labs and assessments, using prompts and custom-based training.
arXiv Detail & Related papers (2025-02-28T05:05:15Z) - Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
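The sensitivity-analysis idea in the entry above can be illustrated with a toy finite-difference check: measure how much a score shifts when each token is dropped. The keyword-based "safety score" and the threshold here are our own placeholder assumptions standing in for a real model's loss or guard score:

```python
# Toy stand-in for a model-based safety/guard score.
SUSPICIOUS = {"ignore", "previous", "instructions", "jailbreak", "system"}

def safety_score(tokens):
    """Fraction of tokens that look suspicious (illustrative scoring)."""
    hits = sum(1.0 for t in tokens if t.lower().strip(".,!?") in SUSPICIOUS)
    return hits / max(len(tokens), 1)

def token_sensitivities(tokens):
    """Finite-difference sensitivity: score change when each token is dropped."""
    base = safety_score(tokens)
    return [abs(base - safety_score(tokens[:i] + tokens[i + 1:]))
            for i in range(len(tokens))]

def looks_like_jailbreak(prompt, threshold=0.05):
    """Flag prompts where removing any single token shifts the score sharply."""
    tokens = prompt.split()
    return max(token_sensitivities(tokens), default=0.0) > threshold
```

Real sensitivity analysis would differentiate a model's loss with respect to the input, but the drop-one-token probe above conveys the same intuition at toy scale.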
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - "So what if I used GenAI?" -- Implications of Using Cloud-based GenAI in Software Engineering Research [0.0]
This paper sheds light on the various research aspects in which GenAI is used, thus raising awareness of its legal implications to novice and budding researchers.
We summarize key aspects regarding our current knowledge that every software researcher involved in using GenAI should be aware of to avoid critical mistakes that may expose them to liability claims.
arXiv Detail & Related papers (2024-12-10T06:18:15Z) - Exploring AI Text Generation, Retrieval-Augmented Generation, and Detection Technologies: a Comprehensive Overview [0.0]
Concerns surrounding AI-generated content, including issues of originality, bias, misinformation, and accountability, have become prominent.
This paper offers a comprehensive overview of AI text generators (AITGs), focusing on their evolution, capabilities, and ethical implications.
The paper explores future directions for improving detection accuracy, supporting ethical AI development, and increasing accessibility.
arXiv Detail & Related papers (2024-12-05T07:23:14Z) - SoK: Watermarking for AI-Generated Content [112.9218881276487]
Watermarking schemes embed hidden signals within AI-generated content to enable reliable detection.
Watermarks can play a crucial role in enhancing AI safety and trustworthiness by combating misinformation and deception.
This work aims to guide researchers in advancing watermarking methods and applications, and support policymakers in addressing the broader implications of GenAI.
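The embed-and-detect round trip that watermarking schemes rely on can be sketched with a deliberately fragile toy: hiding a bit string as zero-width Unicode characters. The key and encoding below are our own illustrative choices; the statistical schemes surveyed in the paper are far more robust than this:

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text, key_bits="1011"):
    """Append one invisible marker per key bit to the leading words."""
    words = text.split(" ")
    marked = [w + (ZW0 if key_bits[i] == "0" else ZW1) if i < len(key_bits) else w
              for i, w in enumerate(words)]
    return " ".join(marked)

def detect(text, key_bits="1011"):
    """Recover any hidden bits and compare against the expected key."""
    bits = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    return bits.startswith(key_bits)
```

The visible text is unchanged by embedding, which is the core property a watermark needs; unlike real schemes, this toy is trivially destroyed by stripping non-printing characters.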
arXiv Detail & Related papers (2024-11-27T16:22:33Z) - Generative Artificial Intelligence Meets Synthetic Aperture Radar: A Survey [49.29751866761522]
This paper aims to investigate the intersection of GenAI and SAR.
First, we illustrate the common data generation-based applications in the SAR field.
Then, we systematically review the latest GenAI models.
Finally, the corresponding applications in the SAR domain are also covered.
arXiv Detail & Related papers (2024-11-05T03:06:00Z) - Legal Aspects for Software Developers Interested in Generative AI Applications [5.772982243103395]
Generative Artificial Intelligence (GenAI) has led to new technologies capable of generating high-quality code, natural language, and images.
The next step is to integrate GenAI technology into products, a task typically conducted by software developers.
This article sheds light on the current state of two such risks: data protection and copyright.
arXiv Detail & Related papers (2024-04-25T14:17:34Z) - Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda [1.8592384822257952]
We elaborate on why XAI has gained importance with the rise of GenAI and its challenges for explainability research.
We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
arXiv Detail & Related papers (2024-04-15T08:18:16Z) - Prompt Smells: An Omen for Undesirable Generative AI Outputs [4.105236597768038]
We propose two new concepts that will aid the research community in addressing limitations associated with the application of GenAI models.
First, we propose a definition for the "desirability" of GenAI outputs and three factors which are observed to influence it.
Second, drawing inspiration from Martin Fowler's code smells, we propose the concept of "prompt smells" and the adverse effects they are observed to have on the desirability of GenAI outputs.
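By analogy with code smells, a prompt smell would be a surface pattern that hints at degraded output quality. The smell names and thresholds below are our own illustrative guesses, not the taxonomy proposed by the paper:

```python
def prompt_smells(prompt: str) -> list[str]:
    """Return a list of hypothetical smell labels found in a prompt."""
    smells = []
    words = prompt.split()
    if len(words) > 150:
        smells.append("overlong-prompt")        # too much to follow reliably
    if prompt.count("?") > 3:
        smells.append("multi-question")         # several asks crammed together
    if any(w.lower().strip(".") == "etc" for w in words):
        smells.append("vague-enumeration")      # open-ended "etc." lists
    if not any(ch.isalpha() for ch in prompt):
        smells.append("no-natural-language")    # nothing for the model to parse
    return smells
```

As with code smells, a flagged pattern is a prompt worth rewriting, not a guaranteed failure.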
arXiv Detail & Related papers (2024-01-23T10:10:01Z) - Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - VerifAI: Verified Generative AI [22.14231506649365]
Generative AI has made significant strides, yet concerns about its accuracy and reliability continue to grow.
We propose that verifying the outputs of generative AI from a data management perspective is an emerging problem.
Our vision is to promote the development of verifiable generative AI and contribute to a more trustworthy and responsible use of AI.
arXiv Detail & Related papers (2023-07-06T06:11:51Z) - DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection [57.51313366337142]
There has been growing concern over the use of generative AI for malicious purposes.
In the realm of visual content synthesis using generative AI, key areas of significant concern have been image forgery and data poisoning.
We introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in the building of machine learning algorithms for generative AI art forgery and data poisoning detection.
arXiv Detail & Related papers (2023-06-02T05:11:27Z) - A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z) - A Pathway Towards Responsible AI Generated Content [68.13835802977125]
We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
arXiv Detail & Related papers (2023-03-02T14:58:40Z) - A Survey of Knowledge-Enhanced Text Generation [81.24633231919137]
The goal of text generation is to make machines express themselves in human language.
Various neural encoder-decoder models have been proposed to achieve the goal by learning to map input text to output text.
To move beyond the input text alone, researchers have considered incorporating various forms of external knowledge into the generation models.
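One simple instance of this knowledge-enhancement idea is retrieving matching facts from an external store and prepending them to the model input. The tiny in-memory dictionary below is a placeholder for the knowledge sources (graphs, grounded text, and so on) that the survey covers:

```python
# Hypothetical mini knowledge base keyed by entity mention.
KNOWLEDGE = {
    "eiffel tower": "The Eiffel Tower is in Paris.",
    "great wall": "The Great Wall is in China.",
}

def augment_input(user_text: str) -> str:
    """Prepend any retrieved facts to the raw input before generation."""
    facts = [fact for key, fact in KNOWLEDGE.items() if key in user_text.lower()]
    return " ".join(facts + [user_text])
```

An encoder-decoder model would then condition on the augmented string rather than on the raw input alone.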
arXiv Detail & Related papers (2020-10-09T06:46:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.