The Artificial Intelligence Disclosure (AID) Framework: An Introduction
- URL: http://arxiv.org/abs/2408.01904v1
- Date: Sun, 4 Aug 2024 02:18:42 GMT
- Title: The Artificial Intelligence Disclosure (AID) Framework: An Introduction
- Authors: Kari D. Weaver
- Abstract summary: This article introduces The Artificial Intelligence Disclosure (AID) Framework, a standard, comprehensive, and detailed framework meant to inform the development and writing of GenAI disclosure for education and research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As the use of Generative Artificial Intelligence tools has grown in higher education and research, there have been increasing calls for transparency and granularity around the use and attribution of these tools. Thus far, this need has been met via the recommended inclusion of a note, with little to no guidance on what the note itself should include. This has been identified as a problem for the use of AI in academic and research contexts. This article introduces The Artificial Intelligence Disclosure (AID) Framework, a standard, comprehensive, and detailed framework meant to inform the development and writing of GenAI disclosure for education and research.
Related papers
- Exploring the Impact of Generative Artificial Intelligence in Education: A Thematic Analysis [0.7701938856931689]
Large Language Models (LLMs) such as ChatGPT and Bard can be leveraged to automate boilerplate tasks.
It is important to develop guidelines, policies, and assessment methods in the education sector to ensure the responsible integration of these tools.
arXiv Detail & Related papers (2025-01-17T11:49:49Z)
- Future of Information Retrieval Research in the Age of Generative AI [61.56371468069577]
In the fast-evolving field of information retrieval (IR), the integration of generative AI technologies such as large language models (LLMs) is transforming how users search for and interact with information.
Recognizing this paradigm shift, a visioning workshop was held in July 2024 to discuss the future of IR in the age of generative AI.
This report summarizes the discussions as a set of potentially important research topics and lists recommendations for academics, industry practitioners, institutions, evaluation campaigns, and funding agencies.
arXiv Detail & Related papers (2024-12-03T00:01:48Z)
- A Systematic Review of Generative AI for Teaching and Learning Practice [0.37282630026096586]
There are no agreed guidelines for the use of GenAI systems in higher education (HE).
There is a need for additional interdisciplinary, multidimensional studies in HE through collaboration.
arXiv Detail & Related papers (2024-06-13T18:16:27Z)
- Decoding the Digital Fine Print: Navigating the potholes in Terms of service/use of GenAI tools against the emerging need for Transparent and Trustworthy Tech Futures [0.0]
The research investigates the crucial role of clear and intelligible terms of service in cultivating user trust and facilitating informed decision-making in the context of AI, specifically GenAI.
It highlights the obstacles presented by complex legal terminology and detailed fine print, which impede genuine user consent and recourse.
Findings indicate inconsistencies and variability in document quality, signaling a pressing demand for uniformity in disclosure practices.
arXiv Detail & Related papers (2024-03-26T04:54:53Z)
- Perceptions and Detection of AI Use in Manuscript Preparation for Academic Journals [1.881901067333374]
Large Language Models (LLMs) have produced both excitement and worry about how AI will impact academic writing.
Authors of academic publications may decide to voluntarily disclose any AI tools they use to revise their manuscripts.
Journals and conferences could begin mandating disclosure and/or turn to detection services.
arXiv Detail & Related papers (2023-11-19T06:04:46Z)
- Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z)
- LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey to identify IT company employees' intention to use generative tools.
Our results indicate a rather average acceptability of generative tools, although the more useful a tool is perceived to be, the higher the intention to use it seems to be.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)