Development of the ChatGPT, Generative Artificial Intelligence and
Natural Large Language Models for Accountable Reporting and Use (CANGARU)
Guidelines
- URL: http://arxiv.org/abs/2307.08974v1
- Date: Tue, 18 Jul 2023 05:12:52 GMT
- Title: Development of the ChatGPT, Generative Artificial Intelligence and
Natural Large Language Models for Accountable Reporting and Use (CANGARU)
Guidelines
- Authors: Giovanni E. Cacciamani, Michael B. Eppler, Conner Ganjavi, Asli Pekan,
Brett Biedermann, Gary S. Collins, Inderbir S. Gill
- Abstract summary: CANGARU aims to foster a cross-disciplinary global consensus on the ethical use, disclosure, and proper reporting of GAI/GPT/LLM technologies in academia.
The present protocol consists of an ongoing systematic review of GAI/GPT/LLM applications to understand the linked ideas, findings, and reporting standards in scholarly research, and to formulate guidelines for their use and disclosure.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The swift progress and ubiquitous adoption of Generative AI (GAI), Generative
Pre-trained Transformers (GPTs), and large language models (LLMs) like ChatGPT,
have spurred queries about their ethical application, use, and disclosure in
scholarly research and scientific productions. A few publishers and journals
have recently created their own sets of rules; however, the absence of a
unified approach may lead to a 'Babel Tower Effect,' potentially resulting in
confusion rather than desired standardization. In response to this, we present
the ChatGPT, Generative Artificial Intelligence, and Natural Large Language
Models for Accountable Reporting and Use Guidelines (CANGARU) initiative, with
the aim of fostering a cross-disciplinary global inclusive consensus on the
ethical use, disclosure, and proper reporting of GAI/GPT/LLM technologies in
academia. The present protocol consists of four distinct parts:
a) an ongoing systematic review of GAI/GPT/LLM applications to understand the linked ideas, findings, and reporting standards in scholarly research, and to formulate guidelines for their use and disclosure;
b) a bibliometric analysis of existing author guidelines in journals that mention GAI/GPT/LLM, with the goal of evaluating existing guidelines, analyzing the disparity in their recommendations, and identifying common rules that can be brought into the Delphi consensus process;
c) a Delphi survey to establish agreement on the items for the guidelines, ensuring principled GAI/GPT/LLM use, disclosure, and reporting in academia; and
d) the subsequent development and dissemination of the finalized guidelines and their supplementary explanation and elaboration documents.
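To make part (c) of the protocol concrete, below is a minimal, illustrative sketch of how Delphi survey responses could be tallied into consensus decisions. The guideline items, the 1-7 rating scale, and the 80% agreement threshold are assumptions chosen for illustration only; the actual consensus criteria will be those defined in the CANGARU protocol.

```python
# Illustrative sketch only: tallies hypothetical Delphi ratings into consensus
# decisions. The items, the 1-7 scale, and the 80% threshold are assumptions,
# not CANGARU's actual criteria.

AGREEMENT_THRESHOLD = 0.80  # assumed share of panellists who must agree
AGREE_RATINGS = {6, 7}      # assumed "agree"/"strongly agree" on a 1-7 scale

# Hypothetical panellist ratings per candidate guideline item.
responses = {
    "Disclose any GAI/GPT/LLM use in the manuscript": [7, 6, 7, 5, 6, 7, 6, 7],
    "Name the model and version that were used":      [6, 7, 7, 6, 6, 5, 7, 6],
    "List the LLM itself as a co-author":             [1, 2, 1, 3, 2, 1, 2, 1],
}

def delphi_round(ratings_by_item):
    """Return each item's agreement level and whether it reaches consensus."""
    results = {}
    for item, ratings in ratings_by_item.items():
        agreement = sum(r in AGREE_RATINGS for r in ratings) / len(ratings)
        results[item] = {
            "agreement": agreement,
            "consensus": agreement >= AGREEMENT_THRESHOLD,
        }
    return results

if __name__ == "__main__":
    for item, result in delphi_round(responses).items():
        status = "retain" if result["consensus"] else "revise for next round"
        print(f"{result['agreement']:.0%}  {status:<22} {item}")
```

Items that fall short of the threshold would typically be reworded and re-rated in a subsequent Delphi round, mirroring the iterative consensus process described in step (c).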
Related papers
- Knowledge Graph Completion with Relation-Aware Anchor Enhancement [50.50944396454757]
We propose a relation-aware anchor enhanced knowledge graph completion method (RAA-KGC).
We first generate anchor entities within the relation-aware neighborhood of the head entity.
Then, by pulling the query embedding towards the neighborhoods of the anchors, it is tuned to be more discriminative for target entity matching.
arXiv Detail & Related papers (2025-04-08T15:22:08Z)
- Systematic Task Exploration with LLMs: A Study in Citation Text Generation [63.50597360948099]
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks.
We propose a three-component research framework that consists of systematic input manipulation, reference data, and output measurement.
We use this framework to explore citation text generation -- a popular scholarly NLP task that lacks consensus on the task definition and evaluation metric.
arXiv Detail & Related papers (2024-07-04T16:41:08Z)
- Learnable Item Tokenization for Generative Recommendation [78.30417863309061]
We propose LETTER (a LEarnable Tokenizer for generaTivE Recommendation), which integrates hierarchical semantics, collaborative signals, and code assignment diversity.
LETTER incorporates Residual Quantized VAE for semantic regularization, a contrastive alignment loss for collaborative regularization, and a diversity loss to mitigate code assignment bias.
arXiv Detail & Related papers (2024-05-12T15:49:38Z)
- LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- GenRES: Rethinking Evaluation for Generative Relation Extraction in the Era of Large Language Models [48.56814147033251]
We introduce GenRES for a multi-dimensional assessment in terms of the topic similarity, uniqueness, granularity, factualness, and completeness of the GRE results.
With GenRES, we empirically identified that precision/recall fails to justify the performance of GRE methods.
Next, we conducted a human evaluation of GRE methods that shows GenRES is consistent with human preferences for RE quality.
arXiv Detail & Related papers (2024-02-16T15:01:24Z)
- Knowledge Graphs and Pre-trained Language Models enhanced Representation Learning for Conversational Recommender Systems [58.561904356651276]
We introduce the Knowledge-Enhanced Entity Representation Learning (KERL) framework to improve the semantic understanding of entities for conversational recommender systems.
KERL uses a knowledge graph and a pre-trained language model to improve the semantic understanding of entities.
KERL achieves state-of-the-art results in both recommendation and response generation tasks.
arXiv Detail & Related papers (2023-12-18T06:41:23Z)
- Can GPT models Follow Human Summarization Guidelines? Evaluating ChatGPT and GPT-4 for Dialogue Summarization [2.6321077922557192]
This study explores the capabilities of prompt-driven Large Language Models (LLMs) like ChatGPT and GPT-4 in adhering to human guidelines for dialogue summarization.
Our findings indicate that GPT models often produce lengthy summaries and deviate from human summarization guidelines.
Using human guidelines as an intermediate step shows promise, outperforming direct word-length constraint prompts in some cases.
arXiv Detail & Related papers (2023-10-25T17:39:07Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- Ethical Aspects of ChatGPT in Software Engineering Research [4.0594888788503205]
ChatGPT can improve Software Engineering (SE) research practices by offering efficient, accessible information analysis and synthesis based on natural language interactions.
However, ChatGPT could bring ethical challenges, encompassing plagiarism, privacy, data security, and the risk of generating biased or potentially detrimental data.
This research aims to fill this gap by elaborating on the key elements: motivators, demotivators, and ethical principles of using ChatGPT in SE research.
arXiv Detail & Related papers (2023-06-13T06:13:21Z)
- Large-Scale Text Analysis Using Generative Language Models: A Case Study in Discovering Public Value Expressions in AI Patents [2.246222223318928]
This paper employs a novel approach using a generative language model (GPT-4) to produce labels and rationales for large-scale text analysis.
We collect a database comprising 154,934 patent documents using an advanced Boolean query submitted to InnovationQ+.
We design a framework for identifying and labeling public value expressions in these AI patent sentences.
arXiv Detail & Related papers (2023-05-17T17:18:26Z)
- Sparks of Artificial General Recommender (AGR): Early Experiments with ChatGPT [33.424692414746836]
An AGR comprises both conversationality and universality to engage in natural dialogues and generate recommendations across various domains.
We propose ten fundamental principles that an AGR should adhere to, each with its corresponding testing protocols.
We assess whether ChatGPT, a sophisticated LLM, can comply with the proposed principles by engaging in recommendation-oriented dialogues with the model while observing its behavior.
arXiv Detail & Related papers (2023-05-08T07:28:16Z)
- FAIR for AI: An interdisciplinary and international community building perspective [19.2239109259925]
FAIR principles were proposed in 2016 as prerequisites for proper data management and stewardship.
The FAIR principles have been re-interpreted or extended to include the software, tools, algorithms, and datasets that produce data.
This report builds on the FAIR for AI Workshop held at Argonne National Laboratory on June 7, 2022.
arXiv Detail & Related papers (2022-09-30T22:05:46Z)
- Treebanking User-Generated Content: a UD Based Overview of Guidelines, Corpora and Unified Recommendations [58.50167394354305]
This article presents a discussion on the main linguistic phenomena which cause difficulties in the analysis of user-generated texts found on the web and in social media.
It proposes a set of tentative UD-based annotation guidelines to promote consistent treatment of the particular phenomena found in these types of texts.
arXiv Detail & Related papers (2020-11-03T23:34:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and accepts no responsibility for any consequences arising from its use.