An Ensemble Method Based on the Combination of Transformers with
Convolutional Neural Networks to Detect Artificially Generated Text
- URL: http://arxiv.org/abs/2310.17312v1
- Date: Thu, 26 Oct 2023 11:17:03 GMT
- Title: An Ensemble Method Based on the Combination of Transformers with
Convolutional Neural Networks to Detect Artificially Generated Text
- Authors: Vijini Liyanage and Davide Buscaldi
- Abstract summary: We present some classification models constructed by ensembling transformer models such as Sci-BERT, DeBERTa and XLNet, with Convolutional Neural Networks (CNNs).
Our experiments demonstrate that the considered ensemble architectures surpass the performance of the individual transformer models for classification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thanks to the state-of-the-art Large Language Models (LLMs), language
generation has reached outstanding levels. These models are capable of
generating high-quality content, making it challenging to distinguish
generated text from human-written content. Despite the advantages provided by
Natural Language Generation, the inability to distinguish automatically
generated text can raise ethical concerns in terms of authenticity.
Consequently, it is important to design and develop methodologies to detect
artificial content. In our work, we present some classification models
constructed by ensembling transformer models such as Sci-BERT, DeBERTa and
XLNet, with Convolutional Neural Networks (CNNs). Our experiments demonstrate
that the considered ensemble architectures surpass the performance of the
individual transformer models for classification. Furthermore, the proposed
SciBERT-CNN ensemble model produced an F1-score of 98.36% on the ALTA shared
task 2023 data.
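The abstract does not specify the exact ensemble architecture, so the following is only an illustrative PyTorch sketch of one common way to pair a transformer encoder with a convolutional head for binary human-vs-generated text classification. The SciBERT checkpoint name is real, but the CNN layout, kernel sizes, and other hyperparameters are assumptions made for illustration, not the authors' published configuration.

```python
# Illustrative sketch only: not the paper's published architecture.
# Combines a transformer encoder (SciBERT) with a 1-D CNN head for
# binary human-vs-generated text classification.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TransformerCNNClassifier(nn.Module):
    def __init__(self, model_name="allenai/scibert_scivocab_uncased",
                 num_filters=128, kernel_sizes=(3, 4, 5), num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # One Conv1d branch per kernel size, applied over the token dimension.
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes]
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_labels)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d.
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        states = states.transpose(1, 2)
        # Max-pool each convolutional feature map over time, then concatenate.
        pooled = [conv(states).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = TransformerCNNClassifier()
batch = tokenizer(["Example passage to classify."], return_tensors="pt",
                  truncation=True, padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (1, 2)
```

The multiple kernel sizes follow the familiar TextCNN pattern: each branch captures n-gram-like features over the contextual token embeddings before max-pooling and classification.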
Related papers
- Benchmarking Generative AI Models for Deep Learning Test Input Generation [6.674615464230326]
Test Input Generators (TIGs) are crucial to assess the ability of Deep Learning (DL) image classifiers to provide correct predictions for inputs beyond their training and test sets.
Recent advancements in Generative AI (GenAI) models have made them a powerful tool for creating and manipulating synthetic images.
We benchmark and combine different GenAI models with TIGs, assessing their effectiveness, efficiency, and quality of the generated test images.
arXiv Detail & Related papers (2024-12-23T15:30:42Z)
- Revisiting N-Gram Models: Their Impact in Modern Neural Networks for Handwritten Text Recognition [4.059708117119894]
This study addresses whether explicit language models, specifically n-gram models, still contribute to the performance of state-of-the-art deep learning architectures in the field of handwriting recognition.
We evaluate two prominent neural network architectures, PyLaia and DAN, with and without the integration of explicit n-gram language models.
The results show that incorporating character or subword n-gram models significantly improves the performance of ATR models on all datasets.
arXiv Detail & Related papers (2024-04-30T07:37:48Z)
- In-Context Language Learning: Architectures and Algorithms [73.93205821154605]
We study in-context learning (ICL) through the lens of a new family of model problems we term in-context language learning (ICLL).
We evaluate a diverse set of neural sequence models on regular ICLL tasks.
arXiv Detail & Related papers (2024-01-23T18:59:21Z)
- An Ensemble Approach to Question Classification: Integrating Electra Transformer, GloVe, and LSTM [0.0]
This study presents an innovative ensemble approach for question classification, combining the strengths of Electra, GloVe, and LSTM models.
Rigorously tested on the well-regarded TREC dataset, the model demonstrates how the integration of these disparate technologies can lead to superior results.
arXiv Detail & Related papers (2023-08-13T18:14:10Z)
- Transformer-based approaches to Sentiment Detection [55.41644538483948]
We examined the performance of four different types of state-of-the-art transformer models for text classification.
The RoBERTa transformer model performs best on the test dataset with a score of 82.6% and is highly recommended for quality predictions.
arXiv Detail & Related papers (2023-03-13T17:12:03Z)
- RoBERTa-wwm-ext Fine-Tuning for Chinese Text Classification [5.71097144710995]
Bidirectional Encoder Representations from Transformers (BERT) has been shown to be a promising way to dramatically improve performance across various Natural Language Processing tasks.
In this project, the RoBERTa-wwm-ext pre-trained language model was adopted and fine-tuned for Chinese text classification.
The models were able to classify Chinese texts into two categories: descriptions of legal behavior and descriptions of illegal behavior.
arXiv Detail & Related papers (2021-02-24T18:57:57Z)
- GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained Text Style Transfer [119.70961704127157]
Non-parallel text style transfer has attracted increasing research interests in recent years.
Current approaches still lack the ability to preserve the content and even logic of original sentences.
We propose a method called the Graph-Transformer based Auto-Encoder (GTAE), which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level.
arXiv Detail & Related papers (2021-02-01T11:08:45Z)
- Controllable Text Generation with Focused Variation [71.07811310799664]
Focused-Variation Network (FVN) is a novel model to control language generation.
FVN learns disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity.
We evaluate FVN on two text generation datasets with annotated content and style, and show state-of-the-art performance as assessed by automatic and human evaluations.
arXiv Detail & Related papers (2020-09-25T06:31:06Z)
- Conformer: Convolution-augmented Transformer for Speech Recognition [60.119604551507805]
Recently, Transformer- and convolutional neural network (CNN)-based models have shown promising results in Automatic Speech Recognition (ASR).
We propose the convolution-augmented transformer for speech recognition, named Conformer.
On the widely used LibriSpeech benchmark, our model achieves a WER of 2.1%/4.3% without a language model and 1.9%/3.9% with an external language model on the test/test-other sets.
arXiv Detail & Related papers (2020-05-16T20:56:25Z)
- Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
VAEs with a strong auto-regressive decoder tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
- Abstractive Text Summarization based on Language Model Conditioning and Locality Modeling [4.525267347429154]
We train a Transformer-based neural summarization model conditioned on the BERT language model.
In addition, we propose a new method of BERT-windowing, which allows chunk-wise processing of texts longer than the BERT window size.
The results of our models are compared to a baseline and the state-of-the-art models on the CNN/Daily Mail dataset.
arXiv Detail & Related papers (2020-03-29T14:00:17Z)
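The BERT-windowing entry above mentions chunk-wise processing of texts longer than the BERT window, but the scheme itself is not described here. Below is a minimal sketch of one plausible implementation using the Hugging Face tokenizer's overflow/stride support; the window size, stride, checkpoint name, and the mean-pooling of per-window [CLS] vectors are assumptions for illustration only, not the cited paper's method.

```python
# Minimal sketch of chunk-wise ("windowed") processing for inputs longer than
# the encoder's maximum length. The windowing scheme in the cited paper may
# differ; window size, stride, and the mean-pooling below are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_long_text(text, max_length=512, stride=128):
    # Split the text into overlapping windows that respect the model limit.
    enc = tokenizer(text, max_length=max_length, stride=stride,
                    truncation=True, padding="max_length",
                    return_overflowing_tokens=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(input_ids=enc["input_ids"],
                      attention_mask=enc["attention_mask"])
    # Take each window's [CLS] vector, then average across windows.
    cls_per_window = out.last_hidden_state[:, 0, :]   # (n_windows, hidden)
    return cls_per_window.mean(dim=0)                 # (hidden,)

document_vector = encode_long_text("A very long document ... " * 200)
```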
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.