How Large Language Models are Transforming Machine-Paraphrased
Plagiarism
- URL: http://arxiv.org/abs/2210.03568v1
- Date: Fri, 7 Oct 2022 14:08:57 GMT
- Title: How Large Language Models are Transforming Machine-Paraphrased
Plagiarism
- Authors: Jan Philip Wahle and Terry Ruas and Frederic Kirstein and Bela Gipp
- Abstract summary: This work explores T5 and GPT-3 for machine-paraphrase generation on scientific articles from arXiv, student theses, and Wikipedia.
We evaluate the detection performance of six automated solutions and one commercial plagiarism detection software.
Human experts rate the quality of paraphrases generated by GPT-3 as high as original texts.
- Score: 3.8768839735240737
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The recent success of large language models for text generation poses a
severe threat to academic integrity, as plagiarists can generate realistic
paraphrases indistinguishable from original work. However, the role of large
autoregressive transformers in generating machine-paraphrased plagiarism and
their detection is still developing in the literature. This work explores T5
and GPT-3 for machine-paraphrase generation on scientific articles from arXiv,
student theses, and Wikipedia. We evaluate the detection performance of six
automated solutions and one commercial plagiarism detection software and
perform a human study with 105 participants regarding their detection
performance and the quality of generated examples. Our results suggest that
large models can rewrite text humans have difficulty identifying as
machine-paraphrased (53% mean acc.). Human experts rate the quality of
paraphrases generated by GPT-3 as high as original texts (clarity 4.0/5,
fluency 4.2/5, coherence 3.8/5). The best-performing detection model (GPT-3)
achieves a 66% F1-score in detecting paraphrases.
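As a point of reference for the 66% F1-score reported above, the binary F1 metric can be computed from detection labels as follows (toy labels for illustration, not the paper's data):

```python
def f1_score(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 1 = machine-paraphrased, 0 = original
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(round(f1_score(y_true, y_pred), 2))  # 0.67
```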
Related papers
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to detect machine versus human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- PlagBench: Exploring the Duality of Large Language Models in Plagiarism Generation and Detection [26.191836276118696]
We introduce PlagBench, a comprehensive dataset consisting of 46.5K synthetic plagiarism cases generated using three instruction-tuned LLMs.
We then leverage our proposed dataset to evaluate the plagiarism detection performance of five modern LLMs and three specialized plagiarism checkers.
Our findings reveal that GPT-3.5 tends to generate paraphrases and summaries of higher quality compared to Llama2 and GPT-4.
arXiv Detail & Related papers (2024-06-24T03:29:53Z)
- BERT-Enhanced Retrieval Tool for Homework Plagiarism Detection System [0.0]
We propose a plagiarized text data generation method based on GPT-3.5, which produces 32,927 pairs of text plagiarism detection datasets.
We also propose a plagiarism identification method based on Faiss with BERT with high efficiency and high accuracy.
Our experiments show that this model outperforms other models on several metrics, achieving 98.86% Accuracy, 98.90% Precision, 98.86% Recall, and an F1 Score of 0.9888.
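The retrieval step in such a system finds the corpus texts most similar to a query embedding. As a minimal sketch with toy 2-d vectors and exact cosine search (Faiss with BERT embeddings, as the paper uses, would do this approximately at scale):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, corpus, k=2):
    """Indices of the k corpus vectors most similar to the query."""
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(query_vec, corpus[i]),
                    reverse=True)
    return ranked[:k]

# Toy "embeddings": documents 0 and 1 are near-duplicates of the query.
corpus = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(top_k([1.0, 0.05], corpus, k=2))  # [0, 1]
```

The retrieved candidates would then be passed to a BERT classifier for pairwise plagiarism judgment.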
arXiv Detail & Related papers (2024-04-01T12:20:34Z)
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- Smaller Language Models are Better Black-box Machine-Generated Text Detectors [56.36291277897995]
Small and partially-trained models are better universal text detectors.
We find that whether the detector and generator were trained on the same data is not critically important to the detection success.
For instance, the OPT-125M model has an AUC of 0.81 in detecting ChatGPT generations, whereas a larger model from the GPT family, GPT-J-6B, has an AUC of 0.45.
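The AUC figures quoted above are ROC AUC values, which can be computed from detector scores via the rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen machine-generated text is scored above a randomly chosen human text. A minimal sketch with toy scores:

```python
def auc(labels, scores):
    """ROC AUC: P(random positive scored above random negative)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]          # 1 = machine-generated
scores = [0.9, 0.6, 0.7, 0.2]  # detector's "machine" probability
print(auc(labels, scores))  # 0.75
```

An AUC of 0.45, as reported for GPT-J-6B, is below the 0.5 of a random detector.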
arXiv Detail & Related papers (2023-05-17T00:09:08Z)
- RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs [27.777809444120827]
Previous work proposed providing language models with natural language feedback to guide them in repairing their outputs.
We introduce RL4F, a multi-agent collaborative framework in which a critique generator is trained to maximize the end-task performance of GPT-3.
We show relative improvements up to 10% in multiple text similarity metrics over other learned, retrieval-augmented or prompting-based critique generators.
arXiv Detail & Related papers (2023-05-15T17:57:16Z)
- Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense [56.077252790310176]
We present a paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering.
Using DIPPER to paraphrase text generated by three large language models (including GPT3.5-davinci-003) successfully evades several detectors, including watermarking.
We introduce a simple defense that relies on retrieving semantically-similar generations and must be maintained by a language model API provider.
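The defense described above amounts to an API provider storing every generation and flagging a candidate text that closely matches a stored one. As a minimal sketch using word-trigram Jaccard overlap as a stand-in for the semantic similarity retrieval the paper proposes (the class, method names, and threshold are illustrative, not the paper's implementation):

```python
def ngrams(text, n=3):
    """Set of word n-grams of a text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

class GenerationStore:
    """Toy API-side store: flags a candidate if it closely matches
    any previously emitted generation."""
    def __init__(self, threshold=0.4):
        self.threshold = threshold
        self.generations = []

    def record(self, text):
        self.generations.append(ngrams(text))

    def was_generated(self, text):
        q = ngrams(text)
        return any(jaccard(q, g) >= self.threshold
                   for g in self.generations)

store = GenerationStore()
store.record("the quick brown fox jumps over the lazy dog")
print(store.was_generated("a quick brown fox jumps over the lazy dog"))   # True
print(store.was_generated("completely unrelated sentence about plagiarism"))  # False
```

A paraphrase that rewrites more aggressively would defeat this lexical stand-in, which is why the paper retrieves by semantic similarity rather than surface overlap.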
arXiv Detail & Related papers (2023-03-23T16:29:27Z) - Elaboration-Generating Commonsense Question Answering at Scale [77.96137534751445]
In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge.
We finetune smaller language models to generate useful intermediate context, referred to here as elaborations.
Our framework alternates between updating two language models -- an elaboration generator and an answer predictor -- allowing each to influence the other.
arXiv Detail & Related papers (2022-09-02T18:32:09Z)
- How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN [63.79300884115027]
Current language models can generate high-quality text.
Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions?
We introduce RAVEN, a suite of analyses for assessing the novelty of generated text.
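One core measurement in such a novelty analysis is the fraction of a generation's n-grams that never appear in the training text. A rough proxy (not RAVEN's actual implementation):

```python
def ngram_novelty(train_text, gen_text, n=2):
    """Fraction of the generation's word n-grams absent from the
    training text: a rough proxy for n-gram novelty analysis."""
    def grams(text):
        toks = text.lower().split()
        return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    seen = set(grams(train_text))
    gen = grams(gen_text)
    if not gen:
        return 0.0
    novel = sum(1 for g in gen if g not in seen)
    return novel / len(gen)

train = "the cat sat on the mat"
gen = "the cat sat on the rug"
print(ngram_novelty(train, gen))  # 0.2
```

Here only the bigram ("the", "rug") is novel, so one fifth of the generation's bigrams are new.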
arXiv Detail & Related papers (2021-11-18T04:07:09Z)
- Identifying Machine-Paraphrased Plagiarism [5.353051766771479]
We evaluate the effectiveness of five pre-trained word embedding models combined with machine learning and state-of-the-art neural language models.
We paraphrased research papers, graduation theses, and Wikipedia articles.
To facilitate future research, all data, code, and two web applications of our contributions are openly available.
arXiv Detail & Related papers (2021-03-22T14:54:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.