The Generative AI Ethics Playbook
- URL: http://arxiv.org/abs/2501.10383v1
- Date: Tue, 17 Dec 2024 22:47:04 GMT
- Title: The Generative AI Ethics Playbook
- Authors: Jessie J. Smith, Wesley Hanwen Deng, William H. Smith, Maarten Sap, Nicole DeCario, Jesse Dodge
- Abstract summary: The Generative AI Ethics Playbook provides guidance for identifying and mitigating risks of machine learning systems. Drawing on current best practices in both research and ethical considerations, this playbook aims to serve as a comprehensive resource for AI/ML practitioners.
- Score: 25.61087208802131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Generative AI Ethics Playbook provides guidance for identifying and mitigating risks of machine learning systems across various domains, including natural language processing, computer vision, and generative AI. This playbook aims to assist practitioners in diagnosing potential harms that may arise during the design, development, and deployment of datasets and models. It offers concrete strategies and resources for mitigating these risks to help minimize negative impacts on users and society. Drawing on current best practices in both research and ethical considerations, this playbook aims to serve as a comprehensive resource for AI/ML practitioners. The intended audience of this playbook includes machine learning researchers, engineers, and practitioners who are involved in the creation and implementation of generative and multimodal models (e.g., text-to-text, image-to-image, text-to-image, text-to-video). Specifically, we provide transparency/documentation checklists, topics of interest, common questions, examples of harms through case studies, and resources and strategies to mitigate harms throughout the Generative AI lifecycle. This playbook was made collaboratively over the course of 16 months through an extensive literature review of over 100 resources and peer-reviewed articles, as well as through an initial group brainstorming session with 18 interdisciplinary AI ethics experts from industry and academia, with additional feedback from 8 experts (5 of whom were in the initial brainstorming session). We note that while this playbook provides examples, discussion, and harm mitigation strategies, research in this area is ongoing. Our playbook aims to be a practically useful survey, taking a high-level view rather than attempting to cover the entire existing body of research.
Related papers
- A Practical Synthesis of Detecting AI-Generated Textual, Visual, and Audio Content [2.3543188414616534]
Advances in AI-generated content have led to wide adoption of large language models, diffusion-based visual generators, and synthetic audio tools.
These developments raise concerns about misinformation, copyright infringement, security threats, and the erosion of public trust.
This paper explores an extensive range of methods designed to detect and mitigate AI-generated textual, visual, and audio content.
arXiv Detail & Related papers (2025-04-02T23:27:55Z)
- A Comprehensive Review of Adversarial Attacks on Machine Learning [0.5104264623877593]
This research provides a comprehensive overview of adversarial attacks on AI and ML models, exploring various attack types, techniques, and their potential harms. To gain practical insights, we employ the Adversarial Robustness Toolbox (ART) library to simulate these attacks on real-world use cases, such as self-driving cars. (A minimal sketch of this style of attack simulation appears after this list.)
arXiv Detail & Related papers (2024-12-16T02:27:54Z)
- Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice [186.055899073629]
Unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model.
Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs.
Both of these goals--the targeted removal of information from a model and the targeted suppression of information from a model's outputs--present various technical and substantive challenges.
arXiv Detail & Related papers (2024-12-09T20:18:43Z)
- Challenges and Responses in the Practice of Large Language Models [0.9463895540925061]
This paper summarizes wide-ranging and substantive questions from all walks of life about the current high-profile field of AI.
It covers multiple dimensions such as industry trends, academic research, technological innovation, and business applications.
It systematically classifies and organizes these questions along five core dimensions: computing power infrastructure, software architecture, data resources, application scenarios, and brain science.
arXiv Detail & Related papers (2024-08-18T09:15:11Z)
- Mapping the Ethics of Generative AI: A Comprehensive Scoping Review [0.0]
We conduct a scoping review on the ethics of generative artificial intelligence, with a particular focus on large language models and text-to-image models.
Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature.
The study offers a comprehensive overview for scholars, practitioners, or policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others.
arXiv Detail & Related papers (2024-02-13T09:38:17Z)
- Beyond principlism: Practical strategies for ethical AI use in research practices [0.0]
The rapid adoption of generative artificial intelligence in scientific research has outpaced the development of ethical guidelines.
Existing approaches offer little practical guidance for addressing ethical challenges of AI in scientific research practices.
I propose a user-centered, realism-inspired approach to bridge the gap between abstract principles and day-to-day research practices.
arXiv Detail & Related papers (2024-01-27T03:53:25Z)
- Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Nose to Glass: Looking In to Get Beyond [0.0]
An increasing amount of research has been conducted under the banner of enhancing responsible artificial intelligence.
This research aims to address, alleviate, and eventually mitigate the harms brought on by the rollout of algorithmic systems.
However, implementation of such tools remains low.
arXiv Detail & Related papers (2020-11-26T06:51:45Z)
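As a concrete illustration of the ART-based attack simulation mentioned under "A Comprehensive Review of Adversarial Attacks on Machine Learning" above, the following is a minimal sketch of crafting evasion attacks with the Adversarial Robustness Toolbox. The dataset, model, and attack parameters are illustrative assumptions for this sketch, not the experimental setup of that paper.

```python
# Minimal sketch: simulate an evasion attack with IBM's Adversarial
# Robustness Toolbox (ART). The dataset, model, and eps value are
# illustrative choices, not the reviewed paper's actual setup.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary classifier on clean data.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model for ART and craft adversarial examples with the
# Fast Gradient Method, perturbing each pixel by at most eps.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X_test)

# Compare accuracy on clean versus adversarially perturbed inputs.
print("clean accuracy:", model.score(X_test, y_test))
print("adversarial accuracy:", model.score(X_adv, y_test))
```

Running a sketch like this typically shows a sharp drop from clean to adversarial accuracy, which is the basic vulnerability the surveyed attacks exploit and that the defenses covered in such reviews aim to close.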