A Pathway Towards Responsible AI Generated Content
- URL: http://arxiv.org/abs/2303.01325v3
- Date: Wed, 27 Dec 2023 08:21:12 GMT
- Title: A Pathway Towards Responsible AI Generated Content
- Authors: Chen Chen, Jie Fu, Lingjuan Lyu
- Abstract summary: We focus on 8 main concerns that may hinder the healthy development and deployment of AIGC in practice.
These concerns include risks from (1) privacy; (2) bias, toxicity, misinformation; (3) intellectual property (IP); (4) robustness; (5) open source and explanation; (6) technology abuse; (7) consent, credit, and compensation; (8) environment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI Generated Content (AIGC) has received tremendous attention within the past
few years, with content generated in the format of image, text, audio, video,
etc. Meanwhile, AIGC has become a double-edged sword and recently received much
criticism regarding its responsible usage. In this article, we focus on 8 main
concerns that may hinder the healthy development and deployment of AIGC in
practice, including risks from (1) privacy; (2) bias, toxicity, misinformation;
(3) intellectual property (IP); (4) robustness; (5) open source and
explanation; (6) technology abuse; (7) consent, credit, and compensation; (8)
environment. Additionally, we provide insights into the promising directions
for tackling these risks while constructing generative models, enabling AIGC to
be used more responsibly to truly benefit society.
Related papers
- The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence [35.77247656798871]
The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public.
A lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them.
This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference.
arXiv Detail & Related papers (2024-08-14T10:32:06Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- Ten Hard Problems in Artificial Intelligence We Must Get Right [72.99597122935903]
We explore the AI2050 "hard problems" that block the promise of AI and cause AI risks.
For each problem, we outline the area, identify significant recent work, and suggest ways forward.
arXiv Detail & Related papers (2024-02-06T23:16:41Z)
- "There Has To Be a Lot That We're Missing": Moderating AI-Generated Content on Reddit [5.202496456440801]
Generative AI has begun to alter how we work, learn, communicate, and participate in online communities.
We focus on online community moderators' experiences with AI-generated content (AIGC)
Our study finds that rules about AIGC are motivated by concerns about content quality, social dynamics, and governance challenges.
We find that, despite the absence of foolproof tools for detecting AIGC, moderators were able to limit, to some extent, the disruption caused by this new phenomenon by working with their communities to clarify norms.
arXiv Detail & Related papers (2023-11-21T16:15:21Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond [17.904983070032884]
We review the concept, classification and underlying technologies of AIGC.
We discuss the privacy and security challenges faced by AIGC from multiple perspectives.
arXiv Detail & Related papers (2023-06-01T07:49:22Z)
- A Survey on ChatGPT: AI-Generated Contents, Challenges, and Solutions [19.50785795365068]
AIGC uses generative large AI algorithms to assist humans in creating massive, high-quality, and human-like content at a faster pace and lower cost.
This paper presents an in-depth survey of working principles, security and privacy threats, state-of-the-art solutions, and future challenges of the AIGC paradigm.
arXiv Detail & Related papers (2023-05-25T15:09:11Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.