A Pathway Towards Responsible AI Generated Content
- URL: http://arxiv.org/abs/2303.01325v3
- Date: Wed, 27 Dec 2023 08:21:12 GMT
- Authors: Chen Chen, Jie Fu, Lingjuan Lyu
- Abstract summary: We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI Generated Content (AIGC) has received tremendous attention within the past
few years, with content generated in the format of image, text, audio, video,
etc. Meanwhile, AIGC has become a double-edged sword and recently received much
criticism regarding its responsible usage. In this article, we focus on 8 main
concerns that may hinder the healthy development and deployment of AIGC in
practice, including risks from (1) privacy; (2) bias, toxicity, misinformation;
(3) intellectual property (IP); (4) robustness; (5) open source and
explanation; (6) technology abuse; (7) consent, credit, and compensation; (8)
environment. Additionally, we provide insights into the promising directions
for tackling these risks while constructing generative models, enabling AIGC to
be used more responsibly to truly benefit society.
Related papers
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
Particip-AI is a framework for gathering current and future AI use cases, and their harms and benefits, from the non-expert public.
We gather responses from 295 demographically diverse participants.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Ten Hard Problems in Artificial Intelligence We Must Get Right [72.99597122935903]
We explore the AI2050 "hard problems" that block the promise of AI and cause AI risks.
For each problem, we outline the area, identify significant recent work, and suggest ways forward.
arXiv Detail & Related papers (2024-02-06T23:16:41Z)
- What's my role? Modelling responsibility for AI-based safety-critical systems [1.0549609328807565]
It is difficult to hold developers and manufacturers responsible for harmful behaviour of an AI-based safety-critical system (AI-SCS).
A human operator can become a "liability sink" absorbing blame for the consequences of AI-SCS outputs they weren't responsible for creating.
This paper considers different senses of responsibility (role, moral, legal and causal), and how they apply in the context of AI-SCS safety.
arXiv Detail & Related papers (2023-12-30T13:45:36Z)
- "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit [5.202496456440801]
We focus on online community moderators' experiences with AI-generated content (AIGC).
Our study finds communities are choosing to enact rules restricting use of AIGC for both ideological and practical reasons.
Our results highlight the importance of supporting community autonomy and self-determination in the face of this sudden technological change.
arXiv Detail & Related papers (2023-11-21T16:15:21Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond [17.904983070032884]
We review the concept, classification and underlying technologies of AIGC.
We discuss the privacy and security challenges faced by AIGC from multiple perspectives.
arXiv Detail & Related papers (2023-06-01T07:49:22Z)
- A Survey on ChatGPT: AI-Generated Contents, Challenges, and Solutions [19.50785795365068]
AIGC uses generative large AI algorithms to assist humans in creating massive, high-quality, and human-like content at a faster pace and lower cost.
This paper presents an in-depth survey of working principles, security and privacy threats, state-of-the-art solutions, and future challenges of the AIGC paradigm.
arXiv Detail & Related papers (2023-05-25T15:09:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.