How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?
- URL: http://arxiv.org/abs/2410.04545v1
- Date: Sun, 6 Oct 2024 16:45:33 GMT
- Title: How Does the Disclosure of AI Assistance Affect the Perceptions of Writing?
- Authors: Zhuoyan Li, Chen Liang, Jing Peng, Ming Yin
- Abstract summary: We study whether and how the disclosure of the level and type of AI assistance in the writing process would affect people's perceptions of the writing.
Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings.
- Score: 29.068596156140913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in generative AI technologies like large language models have boosted the incorporation of AI assistance in writing workflows, leading to the rise of a new paradigm of human-AI co-creation in writing. To understand how people perceive writings that are produced under this paradigm, in this paper, we conduct an experimental study to understand whether and how the disclosure of the level and type of AI assistance in the writing process would affect people's perceptions of the writing on various aspects, including their evaluation on the quality of the writing and their ranking of different writings. Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings for both argumentative essays and creative stories. This decrease in the average quality ratings often comes with an increased level of variations in different individuals' quality evaluations of the same writing. Indeed, factors such as an individual's writing confidence and familiarity with AI writing assistants are shown to moderate the impact of AI assistance disclosure on their writing quality evaluations. We also find that disclosing the use of AI assistance may significantly reduce the proportion of writings produced with AI's content generation assistance among the top-ranked writings.
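As an illustration only, the sketch below (hypothetical Python with simulated ratings; not the paper's data or analysis code) shows how the kinds of effects reported in the abstract could be quantified: a shift in mean quality ratings after disclosure, an increase in disagreement among individual raters, and the share of AI-assisted writings among top-ranked pieces.

```python
# Minimal sketch with simulated, hypothetical data (not the paper's dataset
# or analysis code): quantify how disclosing AI assistance could shift mean
# quality ratings, inflate rater disagreement, and change the share of
# AI-assisted writings among top-ranked pieces.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-7 quality ratings under two conditions:
# no disclosure vs. disclosure of AI content-generation assistance.
ratings_no_disclosure = rng.normal(5.2, 0.8, size=200).clip(1, 7)
ratings_disclosure = rng.normal(4.7, 1.2, size=200).clip(1, 7)

# Difference in mean ratings (Welch's t-test, unequal variances).
t_stat, p_mean = stats.ttest_ind(ratings_no_disclosure, ratings_disclosure,
                                 equal_var=False)
print(f"mean w/o disclosure: {ratings_no_disclosure.mean():.2f}, "
      f"with disclosure: {ratings_disclosure.mean():.2f}, p={p_mean:.3f}")

# Difference in rating variance (Levene's test), reflecting increased
# disagreement among individual evaluators after disclosure.
w_stat, p_var = stats.levene(ratings_no_disclosure, ratings_disclosure)
print(f"variance w/o: {ratings_no_disclosure.var():.2f}, "
      f"with: {ratings_disclosure.var():.2f}, p={p_var:.3f}")

# Share of AI-assisted writings among the top-k ranked pieces,
# using hypothetical labels for which writings had AI assistance.
is_ai_assisted = rng.random(200) < 0.5
scores = np.where(is_ai_assisted, ratings_disclosure, ratings_no_disclosure)
top_k = np.argsort(scores)[::-1][:20]
print(f"AI-assisted share in top 20: {is_ai_assisted[top_k].mean():.0%}")
```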
Related papers
- "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models [97.22914355737676]
We examine whether and how writers want to preserve their authentic voice when co-writing with AI tools.
Our findings illuminate conceptions of authenticity in human-AI co-creation.
Readers' responses showed less concern about human-AI co-writing.
arXiv Detail & Related papers (2024-11-20T04:42:32Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI-generated versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Decoding AI and Human Authorship: Nuances Revealed Through NLP and Statistical Analysis [0.0]
This research explores the nuanced differences in texts produced by AI and those written by humans.
The study investigates various linguistic traits, patterns of creativity, and potential biases inherent in human-written and AI-generated texts.
arXiv Detail & Related papers (2024-07-15T18:09:03Z)
- The Great AI Witch Hunt: Reviewers Perception and (Mis)Conception of Generative AI in Research Writing [25.73744132026804]
Generative AI (GenAI) use in research writing is growing fast.
It is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts.
Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors.
arXiv Detail & Related papers (2024-06-27T02:38:25Z)
- The Dearth of the Author in AI-Supported Writing [2.113447936889669]
We argue that the dearth of the author helps to explain a number of recurring difficulties and anxieties around AI-based writing support tools.
It also suggests an ambitious new goal for AI-based writing support tools.
arXiv Detail & Related papers (2024-04-16T05:23:03Z)
- Techniques for supercharging academic writing with generative AI [0.0]
This Perspective maps out principles and methods for using generative artificial intelligence (AI) to elevate the quality and efficiency of academic writing.
We introduce a human-AI collaborative framework that delineates the rationale (why), process (how), and nature (what) of AI engagement in writing.
arXiv Detail & Related papers (2023-10-26T04:35:00Z)
- PaperCard for Reporting Machine Assistance in Academic Writing [48.33722012818687]
ChatGPT, a question-answering system released by OpenAI in November 2022, has demonstrated a range of capabilities that could be utilised in producing academic papers.
This raises critical questions surrounding the concept of authorship in academia.
We propose a framework we name "PaperCard", a documentation format for human authors to transparently declare the use of AI in their writing process.
arXiv Detail & Related papers (2023-10-07T14:28:04Z)
- The Future of AI-Assisted Writing [0.0]
We conduct a comparative user study of AI-assisted writing tools through an information-retrieval lens: pull and push.
Our findings show that users welcome seamless AI assistance in their writing.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership.
arXiv Detail & Related papers (2023-06-29T02:46:45Z)
- AI, write an essay for me: A large-scale comparison of human-written versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- An Exploration of Post-Editing Effectiveness in Text Summarization [58.99765574294715]
"Post-editing" AI-generated text reduces human workload and improves the quality of AI output.
We compare post-editing of AI-provided summaries with manual summarization in terms of summary quality, human efficiency, and user experience.
This study offers valuable insights into when post-editing is useful for text summarization.
arXiv Detail & Related papers (2022-06-13T18:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.