Technical Report on the Checkfor.ai AI-Generated Text Classifier
- URL: http://arxiv.org/abs/2402.14873v2
- Date: Mon, 26 Feb 2024 05:28:41 GMT
- Title: Technical Report on the Checkfor.ai AI-Generated Text Classifier
- Authors: Bradley Emi and Max Spero
- Abstract summary: CheckforAI is a transformer-based neural network trained to distinguish text written by large language models from text written by humans.
CheckforAI outperforms leading commercial AI detection tools with over 9 times lower error rates.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the CheckforAI text classifier, a transformer-based neural network
trained to distinguish text written by large language models from text written
by humans. CheckforAI outperforms zero-shot methods such as DetectGPT as well
as leading commercial AI detection tools with over 9 times lower error rates on
a comprehensive benchmark comprising ten text domains (student writing,
creative writing, scientific writing, books, encyclopedias, news, email,
scientific papers, short-form Q&A) and eight open- and closed-source large language
models. We propose a training algorithm, hard negative mining with synthetic
mirrors, that enables our classifier to achieve orders of magnitude lower false
positive rates on high-data domains such as reviews. Finally, we show that
CheckforAI is not biased against nonnative English speakers and generalizes to
domains and models unseen during training.
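A minimal sketch of how hard negative mining with synthetic mirrors could work, assuming a "mirror" is an LLM rewrite of a human text that preserves its content; `make_mirror`, `mine_mirror_pairs`, and the 0/1 label convention are hypothetical illustrations, not the authors' implementation:

```python
def make_mirror(human_text: str) -> str:
    """Placeholder for an LLM call that rewrites `human_text` with the same
    content but in the model's own style (the 'synthetic mirror')."""
    return "LLM rewrite of: " + human_text

def mine_mirror_pairs(human_docs, classifier_score, threshold=0.5):
    """Collect (text, label) training pairs from cases the current classifier
    gets wrong: a human text scored as AI, or its mirror scored as human.
    Label convention (assumed): 0 = human, 1 = AI-generated; a score near 1.0
    means the classifier believes the text is AI-generated."""
    hard_pairs = []
    for doc in human_docs:
        mirror = make_mirror(doc)
        if classifier_score(doc) >= threshold or classifier_score(mirror) < threshold:
            hard_pairs.append((doc, 0))     # hard negative: human text
            hard_pairs.append((mirror, 1))  # its synthetic mirror
    return hard_pairs
```

Training on such pairs pushes the classifier to separate texts that share content but differ only in authorship style, which is what drives down false positives on high-data domains such as reviews.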
Related papers
- Are AI-Generated Text Detectors Robust to Adversarial Perturbations?
Current detectors for AI-generated text (AIGT) lack robustness against adversarial perturbations.
This paper investigates the robustness of existing AIGT detection methods and introduces a novel detector, the Siamese Calibrated Reconstruction Network (SCRN).
The SCRN employs a reconstruction network to add and remove noise from text, extracting a semantic representation that is robust to local perturbations.
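The add-and-remove-noise idea can be illustrated with a toy perturbation-averaging scheme (a loose analogy only; SCRN's reconstruction network and semantic encoder are neural models, and `featurize` below is a stand-in):

```python
import random
import string

def perturb(text, rate=0.1, rng=None):
    """Simulate a local adversarial perturbation by randomly substituting
    characters (length is preserved)."""
    rng = rng or random.Random(0)
    chars = list(text)
    for i in range(len(chars)):
        if rng.random() < rate:
            chars[i] = rng.choice(string.ascii_lowercase)
    return "".join(chars)

def robust_representation(text, featurize, n_samples=8):
    """Average features over noisy copies of the text, loosely mirroring the
    add-noise / remove-noise reconstruction idea: features that survive the
    averaging are the ones robust to local perturbations."""
    feats = [featurize(perturb(text, rng=random.Random(seed)))
             for seed in range(n_samples)]
    dim = len(feats[0])
    return [sum(f[i] for f in feats) / n_samples for i in range(dim)]
```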
arXiv Detail & Related papers (2024-06-03T10:21:48Z)
- Large Language Model (LLM) AI text generation detection based on transformer deep learning algorithm
A tool for detecting AI text generation is developed on the Transformer model.
The deep learning model combines LSTM, Transformer, and CNN layers for text classification and sequence-labelling tasks.
The model achieves 99% prediction accuracy on AI-generated text, with a precision of 0.99, a recall of 1, and an F1 score of 0.99.
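The reported metrics follow the standard definitions; as a worked check (the counts below are made up for illustration, not taken from the paper), a recall of 1 means zero false negatives:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard classification metrics from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

With 99 true positives, 1 false positive, and 0 false negatives, precision is 0.99, recall is 1.0, and F1 rounds to 0.99, matching the pattern of figures the summary reports.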
arXiv Detail & Related papers (2024-04-06T06:22:45Z)
- Language Models for Text Classification: Is In-Context Learning Enough?
Recent foundational language models have shown state-of-the-art performance in many NLP tasks in zero- and few-shot settings.
An advantage of these models over more standard approaches is their ability to understand instructions written in natural language (prompts).
This makes them suitable for addressing text classification problems for domains with limited amounts of annotated instances.
arXiv Detail & Related papers (2024-03-26T12:47:39Z)
- Multiscale Positive-Unlabeled Detection of AI-Generated Texts
Multiscale Positive-Unlabeled (MPU) training framework is proposed to address the difficulty of short-text detection.
MPU method augments detection performance on long AI-generated texts, and significantly improves short-text detection of language model detectors.
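The positive-unlabeled setting can be sketched with the standard non-negative PU risk estimator (the generic formulation, not MPU's multiscale extension; the loss values passed in below are illustrative):

```python
def pu_risk(pos_losses_as_pos, pos_losses_as_neg, unl_losses_as_neg, pi):
    """Non-negative positive-unlabeled risk: treat the unlabeled set as
    negatives, then subtract the contribution of the positive fraction `pi`
    it is assumed to contain. The max(0, .) clamp keeps the estimated
    negative risk from going negative."""
    r_p_pos = sum(pos_losses_as_pos) / len(pos_losses_as_pos)
    r_p_neg = sum(pos_losses_as_neg) / len(pos_losses_as_neg)
    r_u_neg = sum(unl_losses_as_neg) / len(unl_losses_as_neg)
    neg_risk = max(0.0, r_u_neg - pi * r_p_neg)
    return pi * r_p_pos + neg_risk
```

In the AIGT setting, "positive" would be known AI-generated text and "unlabeled" would be scraped text of mixed origin; MPU's contribution is applying this style of estimator at multiple text scales so short texts are handled well.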
arXiv Detail & Related papers (2023-05-29T15:25:00Z)
- Distinguishing Human Generated Text From ChatGPT Generated Text Using Machine Learning
This paper presents a machine learning-based solution that can distinguish ChatGPT-generated text from human-written text.
We have tested the proposed model on a Kaggle dataset of 10,000 texts, 5,204 of which were written by humans and collected from news and social media.
On the corpus generated by GPT-3.5, the proposed algorithm achieves an accuracy of 77%.
arXiv Detail & Related papers (2023-05-26T09:27:43Z)
- Smaller Language Models are Better Black-box Machine-Generated Text Detectors
Small and partially-trained models are better universal text detectors.
We find that whether the detector and generator were trained on the same data is not critically important to the detection success.
For instance, the OPT-125M model has an AUC of 0.81 when detecting ChatGPT generations, whereas GPT-J-6B, a larger model from the GPT family, has an AUC of 0.45.
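AUC here is the probability that the detector scores an AI-generated example above a human-written one, so the reported 0.45 is worse than the 0.5 of random guessing. A minimal pairwise computation:

```python
def auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs where the positive
    (AI-generated) example receives the higher score; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n*m) form is only for illustration; rank-based implementations handle large score sets efficiently.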
arXiv Detail & Related papers (2023-05-17T00:09:08Z)
- Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense
We present a paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering.
Using DIPPER to paraphrase text generated by three large language models (including GPT3.5-davinci-003) successfully evades several detectors, including watermarking.
We introduce a simple defense that relies on retrieving semantically similar generations and must be maintained by a language model API provider.
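The retrieval defense can be sketched as a provider-side index of past generations; `GenerationIndex` is a hypothetical name, and word-overlap Jaccard similarity stands in for the semantic retrieval the paper uses:

```python
def jaccard(a, b):
    """Word-overlap similarity between two texts (illustrative stand-in for
    a learned semantic similarity)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

class GenerationIndex:
    """Provider-side store of every text the API has generated. A candidate
    is flagged as machine-generated if it is close to any stored generation,
    a signal that survives paraphrasing better than classifier scores do."""
    def __init__(self):
        self.store = []

    def add(self, text):
        self.store.append(text)

    def is_generated(self, candidate, threshold=0.6):
        return any(jaccard(candidate, g) >= threshold for g in self.store)
```

The design tradeoff is that detection becomes a lookup against provider state rather than a property of the text alone, which is why the defense must be maintained by the API provider.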
arXiv Detail & Related papers (2023-03-23T16:29:27Z)
- Verifying the Robustness of Automatic Credibility Assessment
Text classification methods have been widely investigated as a way to detect content of low credibility.
In some cases, insignificant changes in the input text can mislead the models.
We introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Lexically Aware Semi-Supervised Learning for OCR Post-Correction
Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents.
Previous work has demonstrated the utility of neural post-correction methods on recognition of less-well-resourced languages.
We present a semi-supervised learning method that makes it possible to utilize raw images to improve performance.
arXiv Detail & Related papers (2021-11-04T04:39:02Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity- and overstability-causing samples with high accuracy.
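The bag-of-words claim implies that word order is effectively ignored by the scoring models; a toy check (not from the paper) shows why that makes them easy to fool with reordered text:

```python
from collections import Counter

def bow(text):
    """Bag-of-words features: word counts with order discarded."""
    return Counter(text.lower().split())
```

Any permutation of a response yields identical features, so a scrambled, meaningless answer can receive the same score as the original, which is the overstability the paper reports.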
arXiv Detail & Related papers (2021-09-24T03:49:38Z)
- Offline Handwritten Chinese Text Recognition with Convolutional Neural Networks
In this paper, we build the models using only convolutional neural networks and use CTC as the loss function.
We achieve 6.81% character error rate (CER) on the ICDAR 2013 competition set, which is the best published result without language model correction.
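Character error rate is the Levenshtein edit distance between the predicted and reference strings, divided by the reference length; a minimal sketch:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: minimum number of character insertions,
    deletions, and substitutions to turn `hypothesis` into `reference`,
    normalized by the reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion
                         cur[j - 1] + 1,     # insertion
                         prev[j - 1] + cost) # substitution / match
        prev = cur
    return prev[n] / m if m else 0.0
```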
arXiv Detail & Related papers (2020-06-28T14:34:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.