Deep Learning Detection Method for Large Language Models-Generated
Scientific Content
- URL: http://arxiv.org/abs/2403.00828v1
- Date: Tue, 27 Feb 2024 19:16:39 GMT
- Title: Deep Learning Detection Method for Large Language Models-Generated
Scientific Content
- Authors: Bushra Alhijawi, Rawan Jarrar, Aseel AbuAlRub, and Arwa Bader
- Abstract summary: Large Language Models generate scientific content that is indistinguishable from that written by humans.
This research paper presents a novel ChatGPT-generated scientific text detection method, AI-Catcher.
On average, AI-Catcher improved accuracy by 37.4%.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Large Language Models (LLMs), such as GPT-3 and BERT, reshape how textual
content is written and communicated. These models have the potential to
generate scientific content that is indistinguishable from that written by
humans. Hence, LLMs carry severe consequences for the scientific community,
which relies on the integrity and reliability of publications. This research
paper presents a novel ChatGPT-generated scientific text detection method,
AI-Catcher. AI-Catcher integrates two deep learning models, multilayer
perceptron (MLP) and convolutional neural networks (CNN). The MLP learns the
feature representations of the linguistic and statistical features. The CNN
extracts high-level representations of the sequential patterns from the textual
content. AI-Catcher is a multimodal model that fuses hidden patterns derived
from MLP and CNN. In addition, a new ChatGPT-generated scientific text dataset,
AIGTxt, is collected to enhance AI-generated text detection tools. AIGTxt
contains 3,000 records collected from published academic articles across ten
domains, divided into three classes: Human-written, ChatGPT-generated, and
Mixed text. Several experiments are conducted to evaluate the performance of
AI-Catcher. The comparative results demonstrate the capability of AI-Catcher to
distinguish between human-written and ChatGPT-generated scientific text more
accurately than alternative methods. On average, AI-Catcher improved accuracy
by 37.4%.
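The fusion architecture described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed dimensions (20 handcrafted linguistic/statistical features, 50-dimensional token embeddings, 8 convolution filters of width 3); the paper does not state these hyperparameters here, and the weights below are random rather than trained, so this shows only the shape of the forward pass, not the actual AI-Catcher model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_branch(features, w1, w2):
    # MLP branch: learns a hidden representation of the linguistic
    # and statistical features of the text.
    return relu(relu(features @ w1) @ w2)

def cnn_branch(token_embeddings, filters):
    # CNN branch: 1-D convolution over the token sequence followed by
    # max-over-time pooling, yielding one value per filter.
    seq_len, _ = token_embeddings.shape
    n_filters, width, _ = filters.shape
    conv = np.empty((seq_len - width + 1, n_filters))
    for i in range(seq_len - width + 1):
        window = token_embeddings[i:i + width]            # (width, emb_dim)
        conv[i] = np.tensordot(filters, window, axes=([1, 2], [0, 1]))
    return relu(conv).max(axis=0)                         # (n_filters,)

def ai_catcher_forward(features, token_embeddings, params):
    h_mlp = mlp_branch(features, params["w1"], params["w2"])
    h_cnn = cnn_branch(token_embeddings, params["filters"])
    fused = np.concatenate([h_mlp, h_cnn])                # late fusion of both branches
    logits = fused @ params["w_out"]
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                                # class probabilities

# Hypothetical dimensions; three output classes match AIGTxt's label set
# (Human-written, ChatGPT-generated, Mixed text).
params = {
    "w1": rng.normal(size=(20, 32)),
    "w2": rng.normal(size=(32, 16)),
    "filters": rng.normal(size=(8, 3, 50)),
    "w_out": rng.normal(size=(16 + 8, 3)),
}
probs = ai_catcher_forward(rng.normal(size=20), rng.normal(size=(40, 50)), params)
print(probs.shape)  # (3,)
```

The key design point is the late fusion: each branch reduces its modality to a fixed-size vector, and the concatenated vector feeds a shared classifier head.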
Related papers
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to detect machine versus human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- Distinguishing Chatbot from Human [1.1249583407496218]
We develop a new dataset consisting of more than 750,000 human-written paragraphs.
Based on this dataset, we apply Machine Learning (ML) techniques to determine the origin of text.
Our proposed solutions offer high classification accuracy and serve as useful tools for textual analysis.
arXiv Detail & Related papers (2024-08-03T13:18:04Z)
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study in-context learning (ICL) through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z)
- Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text [0.0]
My research investigates the use of cutting-edge hybrid deep learning models to accurately differentiate between AI-generated text and human writing.
I applied a robust methodology, utilising a carefully selected dataset comprising AI and human texts from various sources, each tagged with instructions.
arXiv Detail & Related papers (2023-11-27T06:26:53Z)
- The Imitation Game: Detecting Human and AI-Generated Texts in the Era of ChatGPT and BARD [3.2228025627337864]
We introduce a novel dataset of human-written and AI-generated texts in different genres.
We employ several machine learning models to classify the texts.
Results demonstrate the efficacy of these models in discerning between human and AI-generated text.
arXiv Detail & Related papers (2023-07-22T21:00:14Z)
- Distinguishing Human Generated Text From ChatGPT Generated Text Using Machine Learning [0.251657752676152]
This paper presents a machine learning-based solution that can distinguish ChatGPT-generated text from human-written text.
We tested the proposed model on a Kaggle dataset of 10,000 texts, of which 5,204 were written by humans and collected from news and social media.
On the corpus generated by GPT-3.5, the proposed algorithm presents an accuracy of 77%.
arXiv Detail & Related papers (2023-05-26T09:27:43Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense [56.077252790310176]
We present a paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering.
Using DIPPER to paraphrase text generated by three large language models (including GPT3.5-davinci-003) successfully evades several detectors, including watermarking.
We introduce a simple defense that relies on retrieving semantically-similar generations and must be maintained by a language model API provider.
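The retrieval defense just described can be sketched as follows. This is a toy illustration with hypothetical names (`RetrievalDefense`, `record_generation`): a provider stores every generation it serves and flags a candidate text whose similarity to any stored generation exceeds a threshold. Real systems would use learned sentence embeddings rather than the bag-of-words vectors used here:

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; real deployments use learned embeddings.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RetrievalDefense:
    """Provider-side store of served generations; flags near-duplicates."""
    def __init__(self, threshold=0.7):
        self.store = []           # embeddings of every generation served
        self.threshold = threshold

    def record_generation(self, text):
        self.store.append(embed(text))

    def is_likely_generated(self, candidate):
        # A paraphrase keeps most content words, so retrieval similarity
        # stays high even when token-level detectors are evaded.
        emb = embed(candidate)
        return any(cosine(emb, s) >= self.threshold for s in self.store)

defense = RetrievalDefense()
defense.record_generation("the quick brown fox jumps over the lazy dog")
print(defense.is_likely_generated("the quick brown fox leaps over the lazy dog"))   # True
print(defense.is_likely_generated("completely unrelated sentence about weather"))  # False
```

The design choice worth noting is that this defense must live with the API provider, since only the provider sees every generation it has served.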
arXiv Detail & Related papers (2023-03-23T16:29:27Z)
- Is This Abstract Generated by AI? A Research for the Gap between AI-generated Scientific Text and Human-written Scientific Text [13.438933219811188]
We investigate the gap between scientific content generated by AI and written by humans.
We find that there exists a "writing style" gap between AI-generated scientific text and human-written scientific text.
arXiv Detail & Related papers (2023-01-24T04:23:20Z)
- How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN [63.79300884115027]
Current language models can generate high-quality text.
Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions?
We introduce RAVEN, a suite of analyses for assessing the novelty of generated text.
arXiv Detail & Related papers (2021-11-18T04:07:09Z)
- Artificial Text Detection via Examining the Topology of Attention Maps [58.46367297712477]
We propose three novel types of interpretable topological features for this task based on Topological Data Analysis (TDA).
We empirically show that features derived from the BERT model outperform count- and neural-based baselines by up to 10% on three common datasets.
A probing analysis of the features reveals their sensitivity to surface and syntactic properties.
arXiv Detail & Related papers (2021-09-10T12:13:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.