AI Usage Cards: Responsibly Reporting AI-generated Content
- URL: http://arxiv.org/abs/2303.03886v2
- Date: Tue, 9 May 2023 15:11:17 GMT
- Title: AI Usage Cards: Responsibly Reporting AI-generated Content
- Authors: Jan Philip Wahle and Terry Ruas and Saif M. Mohammad and Norman
Meuschke and Bela Gipp
- Abstract summary: Given that AI systems like ChatGPT can generate content that is indistinguishable from human-made work, the responsible use of this technology is a growing concern.
We propose a three-dimensional model consisting of transparency, integrity, and accountability to define the responsible use of AI.
We also introduce "AI Usage Cards", a standardized way to report the use of AI in scientific research.
- Score: 25.848910414962337
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Given that AI systems like ChatGPT can generate content that is indistinguishable
from human-made work, the responsible use of this technology is a growing
concern. Although understanding the benefits and harms of using AI systems
requires more time, their rapid and indiscriminate adoption in practice is a
reality. Currently, we lack a common framework and language to define and
report the responsible use of AI for content generation. Prior work proposed
guidelines for using AI in specific scenarios (e.g., robotics or medicine)
which are not transferable to conducting and reporting scientific research. Our
work makes two contributions: First, we propose a three-dimensional model
consisting of transparency, integrity, and accountability to define the
responsible use of AI. Second, we introduce "AI Usage Cards", a standardized
way to report the use of AI in scientific research. Our model and cards allow
users to reflect on key principles of responsible AI usage. They also help the
research community trace, compare, and question various forms of AI usage and
support the development of accepted community norms. The proposed framework and
reporting system aim to promote the ethical and responsible use of AI in
scientific research and provide a standardized approach for reporting AI usage
across different research fields. We also provide a free service to easily
generate AI Usage Cards for scientific work via a questionnaire and export them
in various machine-readable formats for inclusion in different work products at
https://ai-cards.org.
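The abstract describes exporting AI Usage Cards in machine-readable formats, structured around the paper's three dimensions of transparency, integrity, and accountability. The following is a minimal sketch of what such a card could look like as JSON; the field names and values are illustrative assumptions, not the actual schema used by https://ai-cards.org.

```python
import json

# Hypothetical AI Usage Card, organized around the three dimensions
# named in the paper (transparency, integrity, accountability).
# All field names below are assumptions for illustration only.
usage_card = {
    "project": "Example Study",
    "ai_system": {"name": "ChatGPT", "version": "2023-05"},
    "transparency": {
        "stages_used": ["literature search", "text polishing"],
        "prompts_archived": True,
    },
    "integrity": {
        "outputs_verified_by_authors": True,
        "original_contribution_preserved": True,
    },
    "accountability": {
        "responsible_authors": ["First Author"],
    },
}

# Serialize to a machine-readable format for inclusion in a manuscript
# or supplementary material.
card_json = json.dumps(usage_card, indent=2)
print(card_json)
```

A card like this could be generated from a questionnaire and embedded in a paper's appendix or repository, so readers can trace and compare how AI was used.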
Related papers
- Participatory Approaches in AI Development and Governance: A Principled Approach [9.271573427680087]
This paper forms the first part of a two-part series on participatory governance in AI.
It advances the premise that a participatory approach is beneficial to building and using more responsible, safe, and human-centric AI systems.
arXiv Detail & Related papers (2024-06-03T09:49:42Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neuronal learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Model Reporting for Certifiable AI: A Proposal from Merging EU Regulation into AI Development [2.9620297386658185]
Despite large progress in Explainable and Safe AI, practitioners suffer from a lack of regulation and standards for AI safety.
We propose the use of standardized cards to document AI applications throughout the development process.
arXiv Detail & Related papers (2023-07-21T12:13:54Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Measuring Ethics in AI with AI: A Methodology and Dataset Construction [1.6861004263551447]
We propose to use such newfound capabilities of AI technologies to augment our AI measuring capabilities.
We do so by training a model to classify publications related to ethical issues and concerns.
We highlight the implications of AI metrics, in particular their contribution towards developing trustful and fair AI-based tools and technologies.
arXiv Detail & Related papers (2021-07-26T00:26:12Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.