From Principles to Practices: Lessons Learned from Applying Partnership on AI's (PAI) Synthetic Media Framework to 11 Use Cases
- URL: http://arxiv.org/abs/2407.13025v2
- Date: Fri, 19 Jul 2024 15:57:35 GMT
- Title: From Principles to Practices: Lessons Learned from Applying Partnership on AI's (PAI) Synthetic Media Framework to 11 Use Cases
- Authors: Claire R. Leibowicz, Christian H. Cardona,
- Abstract summary: 2023 was the year the world woke up to generative AI, and 2024 is the year policymakers are responding more firmly.
This paper is the first known collection of diverse examples of the implementation of synthetic media governance.
It highlights areas where synthetic media governance can be applied, augmented, expanded, and refined in practice.
- Score: 1.2277343096128712
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 2023 was the year the world woke up to generative AI, and 2024 is the year policymakers are responding more firmly. Importantly, this policy momentum is taking place alongside real world creation and distribution of synthetic media. Social media platforms, news organizations, dating apps, image generation companies, and more are already navigating a world of AI-generated visuals and sounds, already changing hearts and minds, as policymakers try to catch up. How, then, can AI governance capture the complexity of the synthetic media landscape? How can it attend to synthetic media's myriad uses, ranging from storytelling to privacy preservation, to deception, fraud, and defamation, taking into account the many stakeholders involved in its development, creation, and distribution? And what might it mean to govern synthetic media in a manner that upholds the truth while bolstering freedom of expression? What follows is the first known collection of diverse examples of the implementation of synthetic media governance that responds to these questions, specifically through Partnership on AI's (PAI) Responsible Practices for Synthetic Media - a voluntary, normative Framework for creating, distributing, and building technology for synthetic media responsibly, launched in February 2023. In this paper, we present a case bank of real world examples that help operationalize the Framework - highlighting areas where synthetic media governance can be applied, augmented, expanded, and refined for use in practice. Read together, the cases emphasize distinct elements of AI policymaking and seven emergent best practices supporting transparency, safety, expression, and digital dignity online: consent, disclosure, and differentiation between harmful and creative use cases.
Related papers
- Regulating Reality: Exploring Synthetic Media Through Multistakeholder AI Governance [1.450405446885067]
This paper analyzes 23 in-depth, semi-structured interviews with stakeholders governing synthetic media from across sectors.
It reveals key themes affecting synthetic media governance, including temporal perspectives spanning past, present, and future.
It also reveals the critical role of trust, both among stakeholders and between audiences and interventions.
arXiv Detail & Related papers (2025-02-06T21:56:16Z) - Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize a disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z) - Mapping AI Ethics Narratives: Evidence from Twitter Discourse Between 2015 and 2022 [6.518657832967228]
Twitter is selected in this paper to serve as an online public sphere for exploring discourse on AI ethics.
A research framework is proposed to demonstrate how to transform AI ethics-related discourse on Twitter into coherent and readable narratives.
arXiv Detail & Related papers (2024-06-20T09:08:44Z) - Guiding the Way: A Comprehensive Examination of AI Guidelines in Global Media [0.0]
This study analyzes 37 AI guidelines for media purposes in 17 countries.
Our analysis reveals key thematic areas, such as transparency, accountability, fairness, privacy, and the preservation of journalistic values.
Results highlight shared principles and best practices that emerge from these guidelines.
arXiv Detail & Related papers (2024-05-07T22:47:56Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - The Age of Synthetic Realities: Challenges and Opportunities [85.058932103181]
We highlight the crucial need for the development of forensic techniques capable of identifying harmful synthetic creations and distinguishing them from reality.
Our focus extends to various forms of media, such as images, videos, audio, and text, as we examine how synthetic realities are crafted and explore approaches to detecting these malicious creations.
This study is of paramount importance due to the rapid progress of AI generative techniques and their impact on the fundamental principles of Forensic Science.
arXiv Detail & Related papers (2023-06-09T15:55:10Z) - Aligning Artificial Intelligence with Humans through Public Policy [0.0]
This essay outlines research on AI systems that learn structures in policy data that can be leveraged for downstream tasks.
We believe this represents the "comprehension" phase of AI and policy, but leveraging policy as a key source of human values to align AI requires "understanding" policy.
arXiv Detail & Related papers (2022-06-25T21:31:14Z) - Worldwide AI Ethics: a review of 200 guidelines and recommendations for AI governance [0.0]
This paper conducts a meta-analysis of 200 governance policies and ethical guidelines for AI usage published by public bodies, academic institutions, private companies, and civil society organizations worldwide.
We identify at least 17 resonating principles prevalent in the policies and guidelines of our dataset, released as an open-source database and tool.
We discuss the limitations of performing a global-scale analysis alongside a critical analysis of our findings, and present areas of consensus that should be incorporated into future regulatory efforts.
arXiv Detail & Related papers (2022-06-23T18:03:04Z) - Confucius, Cyberpunk and Mr. Science: Comparing AI ethics between China and the EU [41.19326726478421]
A major concern is the strong homogeneity and presumed consensus around AI ethics principles.
We propose to analyse and compare the ethical principles endorsed by the Chinese National New Generation Artificial Intelligence Governance Professional Committee and those elaborated by the European High-level Expert Group on AI (HLEGAI).
In our analysis, we wish to highlight that principles that seem similar a priori may actually have different meanings, derive from different approaches, and reflect distinct goals.
arXiv Detail & Related papers (2021-11-15T06:38:33Z) - Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems were found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, etc.
In this review, we strive to provide AI practitioners a comprehensive guide towards building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z) - The Short Anthropological Guide to the Study of Ethical AI [91.3755431537592]
This short guide serves as both an introduction to AI ethics and to social science and anthropological perspectives on the development of AI.
It aims to provide those unfamiliar with the field with an insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
arXiv Detail & Related papers (2020-10-07T12:25:03Z)