Delving into the Utilisation of ChatGPT in Scientific Publications in Astronomy
- URL: http://arxiv.org/abs/2406.17324v2
- Date: Fri, 6 Sep 2024 13:52:57 GMT
- Title: Delving into the Utilisation of ChatGPT in Scientific Publications in Astronomy
- Authors: Simone Astarita, Sandor Kruk, Jan Reerink, Pablo Gómez
- Abstract summary: We extract words that ChatGPT uses more often than humans when generating academic text and search a total of 1 million articles for them.
We identify a list of words favoured by ChatGPT and find a statistically significant increase for these words against a control group in 2024.
These results suggest a widespread adoption of these models in the writing of astronomy papers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rapid progress in the capabilities of machine learning approaches in natural language processing has culminated in the rise of large language models over the last two years. Recent works have shown unprecedented adoption of these for academic writing, especially in some fields, but their pervasiveness in astronomy has not been studied sufficiently. To remedy this, we extract words that ChatGPT uses more often than humans when generating academic text and search a total of 1 million articles for them. This way, we assess the frequency of word occurrence in published works in astronomy tracked by the NASA Astrophysics Data System since 2000. We then perform a statistical analysis of the occurrences. We identify a list of words favoured by ChatGPT and find a statistically significant increase for these words against a control group in 2024, which matches the trend in other disciplines. These results suggest a widespread adoption of these models in the writing of astronomy papers. We encourage organisations, publishers, and researchers to work together to identify ethical and pragmatic guidelines to maximise the benefits of these systems while maintaining scientific rigour.
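A minimal sketch of the kind of analysis the abstract describes: count how often a list of ChatGPT-favoured words appears in abstracts from a given year and compare the rate against earlier years and a control word list. The example word lists, data layout, and two-proportion z-test are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: compare the occurrence rate of ChatGPT-favoured words across
# years, with a control word list as a baseline. Word lists and the choice of
# test are assumptions for illustration only.
import math
import re

FAVOURED = {"delve", "intricate", "pivotal", "showcasing", "meticulously"}  # assumed examples
CONTROL = {"measure", "sample", "result", "model", "observe"}               # assumed examples

def occurrence_rate(abstracts, words):
    """Count hits from the word list and total tokens over a set of abstracts."""
    hits, total = 0, 0
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        total += len(tokens)
        hits += sum(t in words for t in tokens)
    return hits, total

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for the difference between two occurrence rates."""
    p = (k1 + k2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (k1 / n1 - k2 / n2) / se

# Usage (abstracts would come from an ADS export): compare the 2024 rate of
# favoured words with a pre-ChatGPT year, then repeat for the control words.
# k_new, n_new = occurrence_rate(abstracts_2024, FAVOURED)
# k_old, n_old = occurrence_rate(abstracts_2019, FAVOURED)
# z = two_proportion_z(k_new, n_new, k_old, n_old)
```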
Related papers
- pathfinder: A Semantic Framework for Literature Review and Knowledge Discovery in Astronomy [2.6952253149772996]
Pathfinder is a machine learning framework designed to enable literature review and knowledge discovery in astronomy.
Our framework couples advanced retrieval techniques with LLM-based synthesis to search astronomical literature by semantic context.
It addresses complexities of jargon, named entities, and temporal aspects through time-based and citation-based weighting schemes.
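A minimal sketch of semantic retrieval with time-based and citation-based weighting, in the spirit of the framework summarized above; the embedding model, record schema, and weighting formula are assumptions, not Pathfinder's actual implementation.

```python
# Hedged sketch: embed a query and paper abstracts, rank by cosine similarity,
# and re-weight by recency and citation count. Model name, field names, and
# the weighting formula are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def rank(query, papers, current_year=2024, lam=0.1):
    """papers: list of dicts with 'abstract', 'year', 'citations' keys (assumed schema)."""
    q = model.encode([query])[0]
    docs = model.encode([p["abstract"] for p in papers])
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
    scores = []
    for sim, p in zip(sims, papers):
        recency = np.exp(-lam * (current_year - p["year"]))  # time-based weight
        prominence = np.log1p(p["citations"])                # citation-based weight
        scores.append(sim * recency * (1 + 0.1 * prominence))
    order = np.argsort(scores)[::-1]
    return [papers[i] for i in order]
```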
arXiv Detail & Related papers (2024-08-02T20:05:24Z) - Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z) - Delving into ChatGPT usage in academic writing through excess vocabulary [4.58733012283457]
Large language models (LLMs) can generate and revise text with human-level performance, and many scientists have been using them to assist their scholarly writing.
We study vocabulary changes in 14 million PubMed abstracts from 2010 to 2024 and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words.
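A minimal sketch of an excess-frequency calculation of the kind summarized above: compare a word's observed frequency in a target year against a baseline extrapolated from pre-LLM years. The linear baseline fit over years before 2023 is an assumption, not the paper's exact method.

```python
# Hedged sketch: estimate how much a word's frequency in a target year exceeds
# the trend extrapolated from pre-LLM years. The linear baseline is an assumption.
import numpy as np

def excess_frequency(word_counts, total_counts, target_year=2024):
    """word_counts / total_counts: dicts {year: count} for one word and for all tokens."""
    baseline_years = np.array(sorted(y for y in word_counts if y < 2023))
    freqs = np.array([word_counts[y] / total_counts[y] for y in baseline_years])
    slope, intercept = np.polyfit(baseline_years, freqs, 1)  # pre-LLM linear trend
    expected = slope * target_year + intercept
    observed = word_counts[target_year] / total_counts[target_year]
    return observed - expected, observed / max(expected, 1e-12)
```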
arXiv Detail & Related papers (2024-06-11T07:16:34Z) - DocReLM: Mastering Document Retrieval with Language Model [49.847369507694154]
We demonstrate that by utilizing large language models, a document retrieval system can achieve advanced semantic understanding capabilities.
Our approach involves training the retriever and reranker using domain-specific data generated by large language models.
We use a test set annotated by academic researchers in the fields of quantum physics and computer vision to evaluate our system's performance.
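A minimal sketch of generating domain-specific training data for a retriever with a large language model, as the summary above describes; the prompt text, model name, and OpenAI client usage are assumptions rather than the system's actual pipeline.

```python
# Hedged sketch: use an LLM to write a synthetic query for each domain passage,
# producing (query, positive passage) pairs for retriever training.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def synthetic_pairs(passages, model="gpt-4o-mini"):
    pairs = []
    for passage in passages:
        prompt = ("Write one question a researcher might ask that this "
                  f"passage answers:\n\n{passage}")
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        query = reply.choices[0].message.content.strip()
        pairs.append({"query": query, "positive": passage})
    return pairs

# The resulting pairs could then be used to train a dense retriever and a
# reranker with a contrastive objective.
```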
arXiv Detail & Related papers (2024-05-19T06:30:22Z) - Mapping the Increasing Use of LLMs in Scientific Papers [99.67983375899719]
We conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals.
Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers.
arXiv Detail & Related papers (2024-04-01T17:45:15Z) - Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment its "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z) - Harnessing the Power of Adversarial Prompting and Large Language Models for Robust Hypothesis Generation in Astronomy [0.0]
We employ in-context prompting, supplying the model with up to 1000 papers from the NASA Astrophysics Data System.
Our findings point towards a substantial boost in hypothesis generation when using in-context prompting.
We illustrate how adversarial prompting empowers GPT-4 to extract essential details from a vast knowledge base to produce meaningful hypotheses.
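A minimal sketch of assembling an in-context prompt from a set of ADS abstracts for hypothesis generation; the instruction text and the crude character-based token budget are illustrative assumptions, not the authors' prompting setup.

```python
# Hedged sketch: pack as many abstracts as the context budget allows into a
# single prompt asking for hypotheses grounded in the supplied papers.
def build_prompt(abstracts, question, max_tokens=100_000):
    budget = max_tokens * 4            # rough 4-characters-per-token heuristic
    context, used = [], 0
    for i, abstract in enumerate(abstracts, 1):
        chunk = f"[{i}] {abstract}\n"
        if used + len(chunk) > budget:
            break
        context.append(chunk)
        used += len(chunk)
    return (
        "You are an astronomy research assistant. Using only the abstracts "
        "below, propose testable hypotheses and cite the abstracts by number.\n\n"
        + "".join(context)
        + f"\nTask: {question}\n"
    )
```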
arXiv Detail & Related papers (2023-06-20T16:16:56Z) - Change Summarization of Diachronic Scholarly Paper Collections by Semantic Evolution Analysis [10.554831859741851]
We demonstrate a novel approach to analyzing collections of research papers published over long time periods.
Our approach is based on comparing word semantic representations over time and aims to help users better understand large domain-focused archives of scholarly publications.
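A minimal sketch of measuring word-level semantic change between two time slices of a corpus by training separate embeddings and aligning them; gensim Word2Vec and orthogonal Procrustes alignment are common choices assumed here, not necessarily the paper's method.

```python
# Hedged sketch: train word vectors on an early and a late time slice, align
# the spaces with orthogonal Procrustes, and score each word by how far it moved.
import numpy as np
from gensim.models import Word2Vec

def semantic_shift(sentences_early, sentences_late, vocab):
    """sentences_*: lists of token lists; vocab: words to score."""
    m1 = Word2Vec(sentences_early, vector_size=100, min_count=5).wv
    m2 = Word2Vec(sentences_late, vector_size=100, min_count=5).wv
    shared = [w for w in vocab if w in m1 and w in m2]
    A = np.stack([m1[w] for w in shared])   # early-space vectors
    B = np.stack([m2[w] for w in shared])   # late-space vectors
    u, _, vt = np.linalg.svd(A.T @ B)       # orthogonal Procrustes: A @ R ~ B
    R = u @ vt
    shifts = {}
    for w in shared:
        a, b = m1[w] @ R, m2[w]
        shifts[w] = 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return shifts  # larger value = stronger semantic change
```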
arXiv Detail & Related papers (2021-12-07T11:15:19Z) - Automatic Analysis of Linguistic Features in Journal Articles of Different Academic Impacts with Feature Engineering Techniques [0.975434908987426]
This study attempts to extract micro-level linguistic features from high- and moderate-impact journal research articles (RAs), using feature engineering methods.
We extracted 25 highly relevant features from the Corpus of English Journal Articles through feature selection methods.
Results showed that 24 linguistic features, such as the overlap of content words between adjacent sentences, the use of third-person pronouns, auxiliary verbs, tense, and emotional words, provide consistent and accurate predictions for journal articles with different academic impacts.
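A minimal sketch of the feature-engineering workflow described above: extract simple linguistic features, keep the most relevant ones via feature selection, and fit a classifier. The specific features, word lists, and estimators are assumptions.

```python
# Hedged sketch: rate-based linguistic features, univariate feature selection,
# and a logistic regression classifier for impact prediction.
import re
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

THIRD_PERSON = {"he", "she", "it", "they", "him", "her", "them"}
AUXILIARIES = {"is", "are", "was", "were", "has", "have", "had", "will", "would"}

def features(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return [
        sum(t in THIRD_PERSON for t in tokens) / n,  # third-person pronoun rate
        sum(t in AUXILIARIES for t in tokens) / n,   # auxiliary verb rate
        sum(len(t) for t in tokens) / n,             # mean word length
        len(set(tokens)) / n,                        # type-token ratio
    ]

def build_model(texts, labels, k=3):
    X = [features(t) for t in texts]
    model = make_pipeline(SelectKBest(f_classif, k=k),
                          LogisticRegression(max_iter=1000))
    return model.fit(X, labels)
```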
arXiv Detail & Related papers (2021-11-15T03:56:50Z) - Semantic Analysis for Automated Evaluation of the Potential Impact of Research Articles [62.997667081978825]
This paper presents a novel method for vector representation of text meaning based on information theory.
We show how this informational semantics is used for text classification on the basis of the Leicester Scientific Corpus.
We show that an informational approach to representing the meaning of a text offers a way to effectively predict the scientific impact of research papers.
arXiv Detail & Related papers (2021-04-26T20:37:13Z) - Informational Space of Meaning for Scientific Texts [68.8204255655161]
We introduce the Meaning Space, in which the meaning of a word is represented by a vector of Relative Information Gain (RIG) about the subject categories that the text belongs to.
This new approach is applied to construct the Meaning Space based on the Leicester Scientific Corpus (LSC) and the Leicester Scientific Dictionary-Core (LScDC).
The most informative words are presented for 252 categories. The proposed model based on RIG is shown to be able to highlight topic-specific words within categories.
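A minimal sketch of a Relative Information Gain style score: the reduction in uncertainty about category membership from observing a word, normalized by the category's entropy. This is one plausible reading of the idea summarized above; the precise definition is given in the paper itself.

```python
# Hedged sketch: RIG-like score for how informative a word's presence is about
# membership in a given subject category.
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def rig(texts, labels, word, category):
    """texts: list of token sets; labels: list of category labels."""
    n = len(texts)
    in_cat = [lab == category for lab in labels]
    has_w = [word in toks for toks in texts]
    h_cat = entropy([sum(in_cat) / n, 1 - sum(in_cat) / n])
    h_cond = 0.0
    for present in (True, False):
        group = [c for c, w in zip(in_cat, has_w) if w == present]
        if group:
            p = sum(group) / len(group)
            h_cond += (len(group) / n) * entropy([p, 1 - p])
    return (h_cat - h_cond) / h_cat if h_cat > 0 else 0.0
```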
arXiv Detail & Related papers (2020-04-28T14:26:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.