Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
- URL: http://arxiv.org/abs/2402.08323v1
- Date: Tue, 13 Feb 2024 09:38:17 GMT
- Title: Mapping the Ethics of Generative AI: A Comprehensive Scoping Review
- Authors: Thilo Hagendorff
- Abstract summary: We conduct a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models.
Our analysis provides a taxonomy of 378 normative issues in 19 topic areas and ranks them according to their prevalence in the literature.
The study offers a comprehensive overview for scholars, practitioners, or policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of generative artificial intelligence and the widespread adoption
of it in society engendered intensive debates about its ethical implications
and risks. These risks often differ from those associated with traditional
discriminative machine learning. To synthesize the recent discourse and map its
normative concepts, we conducted a scoping review on the ethics of generative
artificial intelligence, including especially large language models and
text-to-image models. Our analysis provides a taxonomy of 378 normative issues
in 19 topic areas and ranks them according to their prevalence in the
literature. The study offers a comprehensive overview for scholars,
practitioners, or policymakers, condensing the ethical debates surrounding
fairness, safety, harmful content, hallucinations, privacy, interaction risks,
security, alignment, societal impacts, and others. We discuss the results,
evaluate imbalances in the literature, and explore unsubstantiated risk
scenarios.
Related papers
- Risks and NLP Design: A Case Study on Procedural Document QA
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- An ethical study of generative AI from the Actor-Network Theory perspective
We analyze ChatGPT as a case study within the framework of Actor-Network Theory.
We examine the actors and processes of translation involved in the ethical issues related to ChatGPT.
arXiv Detail & Related papers (2024-04-10T02:32:19Z)
- The Social Impact of Generative AI: An Analysis on ChatGPT
The rapid development of Generative AI models has sparked heated discussions regarding their benefits, limitations, and associated risks.
Generative models hold immense promise across multiple domains, such as healthcare, finance, and education, to name a few.
This paper adopts a methodology to examine the societal implications of Generative AI tools, focusing primarily on the case of ChatGPT.
arXiv Detail & Related papers (2024-03-07T17:14:22Z)
- Unpacking the Ethical Value Alignment in Big Models
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithmic, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks, nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?
We investigate the effectiveness of natural language interventions for reading-comprehension systems.
We propose a new language understanding task, Linguistic Ethical Interventions (LEI), where the goal is to amend a question-answering (QA) model's unethical behavior.
arXiv Detail & Related papers (2021-06-02T20:57:58Z)
- Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence
This paper offers a multi-faceted framework that brings more conceptual precision to the present debate.
It identifies the types of explanations that are most pertinent to artificial intelligence predictions.
It also recognizes the relevance and importance of social and ethical values for the evaluation of these explanations.
arXiv Detail & Related papers (2021-03-01T04:50:31Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
- On Consequentialism and Fairness
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.