The Great AI Witch Hunt: Reviewers' Perception and (Mis)Conception of Generative AI in Research Writing
- URL: http://arxiv.org/abs/2407.12015v1
- Date: Thu, 27 Jun 2024 02:38:25 GMT
- Title: The Great AI Witch Hunt: Reviewers' Perception and (Mis)Conception of Generative AI in Research Writing
- Authors: Hilda Hadan, Derrick Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke
- Abstract summary: Generative AI (GenAI) use in research writing is growing fast.
It is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts.
Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors.
- Score: 25.73744132026804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing, but their judgements remained consistent. They noted the loss of a "human touch" and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.
Related papers
- How Does the Disclosure of AI Assistance Affect the Perceptions of Writing? [29.068596156140913]
We study whether and how the disclosure of the level and type of AI assistance in the writing process would affect people's perceptions of the writing.
Our results suggest that disclosing the AI assistance in the writing process, especially if AI has provided assistance in generating new content, decreases the average quality ratings.
arXiv Detail & Related papers (2024-10-06T16:45:33Z)
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Disclosure of AI-Generated News Increases Engagement but Does Not Reduce Aversion, Despite Positive Quality Ratings [3.036383058306671]
This study investigates the perceived quality of AI-assisted and AI-generated versus human-generated news articles.
We employ a survey experiment with 599 participants from the German-speaking part of Switzerland.
Our results indicate that all news articles, regardless of whether they were written by journalists or AI, were perceived to be of equal quality.
arXiv Detail & Related papers (2024-09-05T13:12:16Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
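The abstract names the indicator family but not its computation. As a rough illustration only, here is a minimal Python sketch of one common article-level, field-normalized indicator: a paper's citation count divided by the mean citations of same-field, same-year papers. The field labels and counts are invented for the example, and the paper's own LLM-empowered indicators are presumably more elaborate.

```python
# Minimal sketch of a field-normalized citation score, a common
# article-level bibliometric indicator. All data below is illustrative;
# this is not the paper's actual methodology.
from collections import defaultdict
from statistics import mean

def field_normalized_scores(papers):
    """papers: list of dicts with 'id', 'field', 'year', 'citations'.
    Returns each paper's citations divided by the mean citation count
    of papers in the same field and publication year."""
    groups = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"])].append(p["citations"])
    baselines = {key: mean(counts) for key, counts in groups.items()}
    return {
        p["id"]: (p["citations"] / baselines[(p["field"], p["year"])]
                  if baselines[(p["field"], p["year"])] > 0 else 0.0)
        for p in papers
    }

papers = [
    {"id": "rev-A", "field": "PAMI", "year": 2022, "citations": 120},
    {"id": "rev-B", "field": "PAMI", "year": 2022, "citations": 30},
    {"id": "rev-C", "field": "NLP",  "year": 2022, "citations": 45},
]
print(field_normalized_scores(papers))  # rev-A: 1.6, rev-B: 0.4, rev-C: 1.0
```

Normalizing within field and year is what lets such an indicator compare a pattern-recognition review against an NLP review without penalizing fields that simply cite less.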
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- A Critical Examination of the Ethics of AI-Mediated Peer Review [0.0]
Recent advancements in artificial intelligence (AI) systems offer promise and peril for scholarly peer review.
Human peer review systems are also fraught with related problems, such as biases, abuses, and a lack of transparency.
The legitimacy of AI-driven peer review hinges on its alignment with the scientific ethos.
arXiv Detail & Related papers (2023-09-02T18:14:10Z)
- The Future of AI-Assisted Writing [0.0]
We conduct a comparative user study of AI-assisted writing tools through an information retrieval lens: pull and push.
Our findings show that users welcome seamless AI assistance in their writing.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership.
arXiv Detail & Related papers (2023-06-29T02:46:45Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges, and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
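As a loose illustration of the papers-in, reviews-out pipeline this abstract describes, a minimal sketch using an off-the-shelf summarization model might look like the following. The model choice (facebook/bart-large-cnn) and the toy input are assumptions for illustration, not the authors' trained, aspect-targeted system.

```python
# Hedged sketch: drafting a rough first-pass "review summary" by running
# a generic off-the-shelf summarization model over a paper's text.
# The paper trains targeted, aspect-aware summarization models; this
# snippet only shows the overall input/output pattern.
from transformers import pipeline  # pip install transformers

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

paper_text = (
    "We propose a new method for X. Experiments on three benchmarks show "
    "consistent gains over strong baselines, though at higher compute cost."
)

# Summarize the paper text as a stand-in for the "summary of contributions"
# aspect of a review; a real system would condition on annotated aspects
# and full paper sections rather than a short excerpt.
draft = summarizer(paper_text, max_length=60, min_length=15, do_sample=False)
print(draft[0]["summary_text"])
```

A system along the paper's lines would fine-tune on reviews annotated by aspect (e.g., clarity, novelty, soundness) rather than rely on a generic news-summarization checkpoint, which is part of why their targeted models cover more aspects than this sketch could.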
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.