Patterns and Purposes: A Cross-Journal Analysis of AI Tool Usage in Academic Writing
- URL: http://arxiv.org/abs/2502.00632v1
- Date: Sun, 02 Feb 2025 02:44:33 GMT
- Title: Patterns and Purposes: A Cross-Journal Analysis of AI Tool Usage in Academic Writing
- Authors: Ziyang Xu
- Abstract summary: This study analyzed 168 AI declarations from 8,859 articles across 27 categories.
ChatGPT dominates academic writing assistance (77% usage), with significant differences in tool usage between native and non-native English speakers.
The study reveals that improving readability (51%) and grammar checking (22%) are the primary purposes of AI tool usage.
- Abstract: This study investigates the use of AI tools in academic writing through analysis of AI usage declarations in journals. Using a mixed-methods approach combining content analysis, statistical analysis, and text mining, this research analyzed 168 AI declarations from 8,859 articles across 27 categories. Results show that ChatGPT dominates academic writing assistance (77% usage), with significant differences in tool usage between native and non-native English speakers (p = 0.0483) and between international and non-international teams (p = 0.0012). The study reveals that improving readability (51%) and grammar checking (22%) are the primary purposes of AI tool usage. These findings provide insights for journal policy development and understanding the evolving role of AI in academic writing.
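The group comparisons reported above (p = 0.0483 for native vs. non-native English speakers, p = 0.0012 for international vs. non-international teams) are the kind of result a chi-square test of independence on tool-usage counts would produce. As a minimal sketch, assuming hypothetical counts that are not taken from the paper, a 2x2 test can be computed with only the standard library (for df = 1 the survival function reduces to erfc(sqrt(x/2))):

```python
import math

def chi2_independence_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table.
    Returns (chi2 statistic, p-value). With df = 1, the p-value
    is sf(x) = erfc(sqrt(x / 2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical counts (illustration only): rows = ChatGPT vs. other
# tools, columns = native vs. non-native English-speaking teams.
table = [[40, 25], [89, 14]]
chi2, p = chi2_independence_2x2(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")
```

In practice `scipy.stats.chi2_contingency` would be used instead; the hand-rolled version above only illustrates the shape of the comparison behind the reported p-values.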
Related papers
- Automatic answering of scientific questions using the FACTS-V1 framework: New methods in research to increase efficiency through the use of AI [0.0]
This article presents the prototype of the FACTS-V1 (Filtering and Analysis of Content in Textual Sources) framework.
With the help of the application, numerous scientific papers can be automatically extracted, analyzed and interpreted from open access document servers.
The aim of the framework is to provide recommendations for future scientific questions based on existing data.
arXiv Detail & Related papers (2024-12-01T18:55:39Z)
- You Shall Know a Tool by the Traces it Leaves: The Predictability of Sentiment Analysis Tools [74.98850427240464]
We show that sentiment analysis tools disagree on the same dataset.
We show that the sentiment tool used for sentiment annotation can even be predicted from its outcome.
arXiv Detail & Related papers (2024-10-18T17:27:38Z)
- Differentiating between human-written and AI-generated texts using linguistic features automatically extracted from an online computational tool [0.0]
This study aims to investigate how various linguistic components are represented in both types of texts, assessing the ability of AI to emulate human writing.
Despite AI-generated texts appearing to mimic human speech, the results revealed significant differences across multiple linguistic features.
arXiv Detail & Related papers (2024-07-04T05:37:09Z)
- Keystroke Dynamics Against Academic Dishonesty in the Age of LLMs [25.683026758476835]
This study proposes a keystroke dynamics-based method to differentiate between bona fide and assisted writing.
To facilitate this, a dataset was developed to capture the keystroke patterns of individuals engaged in writing tasks.
The detector, trained using a modified TypeNet architecture, achieved accuracies ranging from 74.98% to 85.72% in condition-specific scenarios and from 52.24% to 80.54% in condition-agnostic scenarios.
arXiv Detail & Related papers (2024-06-21T17:51:26Z)
- Arabic Text Sentiment Analysis: Reinforcing Human-Performed Surveys with Wider Topic Analysis [49.1574468325115]
The in-depth study manually analyses 133 ASA papers published in the English language between 2002 and 2020.
The main findings identify the approaches used for ASA: machine learning, lexicon-based, and hybrid approaches.
There is a need to develop ASA tools that can be used in both industry and academia.
arXiv Detail & Related papers (2024-03-04T10:37:48Z)
- Quantitative Analysis of AI-Generated Texts in Academic Research: A Study of AI Presence in Arxiv Submissions using AI Detection Tool [0.0]
This study analyzes a method for detecting purposely manufactured (AI-generated) content in academic submissions posted to Arxiv.
The statistical analysis shows that Originality.ai is highly accurate, with a detection rate of 98%.
arXiv Detail & Related papers (2024-02-09T17:20:48Z)
- ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving [170.7899683843177]
ToRA is a series of Tool-integrated Reasoning Agents designed to solve challenging mathematical problems.
ToRA models significantly outperform open-source models on 10 mathematical reasoning datasets across all scales.
ToRA-Code-34B is the first open-source model that achieves an accuracy exceeding 50% on MATH.
arXiv Detail & Related papers (2023-09-29T17:59:38Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Artificial Intelligence in Concrete Materials: A Scientometric View [77.34726150561087]
This chapter aims to uncover the main research interests and knowledge structure of the existing literature on AI for concrete materials.
To begin with, a total of 389 journal articles published from 1990 to 2020 were retrieved from the Web of Science.
Scientometric tools such as keyword co-occurrence analysis and documentation co-citation analysis were adopted to quantify features and characteristics of the research field.
arXiv Detail & Related papers (2022-09-17T18:24:56Z)
- Automatic Analysis of Linguistic Features in Journal Articles of Different Academic Impacts with Feature Engineering Techniques [0.975434908987426]
This study attempts to extract micro-level linguistic features in high- and moderate-impact journal RAs, using feature engineering methods.
We extracted 25 highly relevant features from the Corpus of English Journal Articles through feature selection methods.
Results showed that 24 linguistic features, such as the overlap of content words between adjacent sentences and the use of third-person pronouns, auxiliary verbs, tense, and emotional words, provide consistent and accurate predictions for journal articles with different academic impacts.
arXiv Detail & Related papers (2021-11-15T03:56:50Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.