Team QUST at SemEval-2024 Task 8: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting AI-generated Text
- URL: http://arxiv.org/abs/2402.11934v1
- Date: Mon, 19 Feb 2024 08:22:51 GMT
- Title: Team QUST at SemEval-2024 Task 8: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting AI-generated Text
- Authors: Xiaoman Xu, Xiangrun Li, Taihang Wang, Jianxiang Tian, Ye Jiang
- Abstract summary: This paper presents the participation of team QUST in SemEval-2024 Task 8.
We first performed data augmentation and cleaning on the dataset to enhance model training efficiency and accuracy.
In the monolingual task, we evaluated traditional deep-learning methods, the multiscale positive-unlabeled (MPU) framework, fine-tuning, adapters, and ensemble methods.
- Score: 0.1499944454332829
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the participation of team QUST in SemEval-2024 Task 8. We first performed data augmentation and cleaning on the dataset to enhance model training efficiency and accuracy. In the monolingual task, we evaluated traditional deep-learning methods, the multiscale positive-unlabeled (MPU) framework, fine-tuning, adapters, and ensemble methods. We then selected the top-performing monolingual models by accuracy and evaluated them in subtasks A and B. The final model construction employed a stacking ensemble that combined fine-tuning with MPU. Our system achieved 8th place (8th by accuracy; officially ranked 13th) on the official test set in the multilingual setting of subtask A. We release our system code at: https://github.com/warmth27/SemEval2024_QUST
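As a rough illustration of the stacking step described above, the sketch below trains a logistic-regression meta-classifier on the class probabilities of two base detectors. This is a minimal hypothetical reconstruction, not the team's released code (see the linked repository for that); the probability arrays stand in for the outputs of the fine-tuned and MPU base models.

```python
# Minimal stacking-ensemble sketch over two detectors' class
# probabilities. Hypothetical stand-in, not the QUST code:
# `finetuned_probs` / `mpu_probs` represent out-of-fold probabilities
# from the fine-tuned and MPU base models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Placeholder base-model outputs: P(machine-generated) per example.
finetuned_probs = rng.random(1000)
mpu_probs = rng.random(1000)
labels = rng.integers(0, 2, 1000)  # 0 = human, 1 = machine

# Stack the base predictions as meta-features.
meta_features = np.column_stack([finetuned_probs, mpu_probs])

# Out-of-fold predictions guard against the meta-learner
# overfitting to base-model training leakage.
meta_clf = LogisticRegression()
oof_preds = cross_val_predict(meta_clf, meta_features, labels, cv=5)
print("out-of-fold accuracy:", (oof_preds == labels).mean())

# Fit on all meta-features for deployment.
meta_clf.fit(meta_features, labels)
```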
Related papers
- PetKaz at SemEval-2024 Task 8: Can Linguistics Capture the Specifics of LLM-generated Text? [4.463184061618504]
We present our submission to the SemEval-2024 Task 8 "Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection" shared task.
Our approach combines embeddings from RoBERTa-base with diversity features and uses a resampled training set.
Our results show that our approach is generalizable across unseen models and domains, achieving an accuracy of 0.91.
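A minimal sketch of this kind of embedding-plus-features combination, assuming RoBERTa-base via Hugging Face transformers and a simple type-token ratio as a stand-in diversity feature (the paper's actual feature set likely differs):

```python
# Sketch: concatenate a RoBERTa-base document embedding with a
# handcrafted lexical-diversity feature. The type-token ratio is
# illustrative only; the paper's diversity features may differ.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

def featurize(text: str) -> np.ndarray:
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        # [CLS]-position hidden state as the document embedding.
        emb = model(**inputs).last_hidden_state[0, 0].numpy()
    tokens = text.split()
    ttr = len(set(tokens)) / max(len(tokens), 1)  # type-token ratio
    return np.concatenate([emb, [ttr]])

vec = featurize("A sample document to score.")
print(vec.shape)  # 768 RoBERTa dims + 1 diversity feature
```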
arXiv Detail & Related papers (2024-04-08T13:05:02Z) - KInIT at SemEval-2024 Task 8: Fine-tuned LLMs for Multilingual Machine-Generated Text Detection [0.0]
SemEval-2024 Task 8 is focused on multigenerator, multidomain, and multilingual black-box machine-generated text detection.
Our submitted method achieved competitive results, ranking in fourth place, just under 1 percentage point behind the winner.
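As an illustration of this kind of fine-tuning setup, the hedged sketch below fine-tunes a multilingual encoder for binary human-vs-machine classification with the Hugging Face Trainer; XLM-RoBERTa and the toy two-example dataset are stand-ins, not the KInIT team's actual model or data:

```python
# Minimal fine-tuning sketch for binary machine-generated-text
# detection. XLM-RoBERTa is an illustrative multilingual backbone,
# not necessarily what the KInIT team used.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

# Toy corpus; the shared task provides the real train/dev splits.
data = Dataset.from_dict({
    "text": ["a human-written sentence", "an llm-generated sentence"],
    "label": [0, 1],
})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="detector", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```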
arXiv Detail & Related papers (2024-02-21T10:09:56Z) - Strategies for improving low resource speech to text translation relying
on pre-trained ASR models [59.90106959717875]
This paper presents techniques and findings for improving the performance of low-resource speech-to-text translation (ST).
We conducted experiments on both simulated and real low-resource setups, on the language pairs English-Portuguese and Tamasheq-French, respectively.
arXiv Detail & Related papers (2023-05-31T21:58:07Z) - Team QUST at SemEval-2023 Task 3: A Comprehensive Study of Monolingual
and Multilingual Approaches for Detecting Online News Genre, Framing and
Persuasion Techniques [0.030458514384586396]
This paper describes the participation of team QUST in SemEval-2023 Task 3.
The monolingual models are first evaluated with under-sampling of the majority classes.
The pre-trained multilingual model is fine-tuned with a combination of class weights and sample weights.
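One common way to implement such combined weighting is a weighted cross-entropy with per-example weights, sketched below (illustrative values, not the team's exact recipe):

```python
# Sketch of combining class weights with per-sample weights in a
# cross-entropy loss. Weights and shapes are illustrative only.
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3, requires_grad=True)  # batch of 4, 3 classes
targets = torch.tensor([0, 2, 1, 0])
class_weights = torch.tensor([0.5, 2.0, 1.5])   # up-weight rare classes
sample_weights = torch.tensor([1.0, 0.7, 1.3, 1.0])

# reduction="none" keeps one loss per example so that sample
# weights can be applied before averaging.
per_example = F.cross_entropy(logits, targets,
                              weight=class_weights, reduction="none")
loss = (per_example * sample_weights).mean()
loss.backward()  # would propagate into a real model's parameters
```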
arXiv Detail & Related papers (2023-04-09T08:14:01Z) - Ensemble Transfer Learning for Multilingual Coreference Resolution [60.409789753164944]
A problem that frequently occurs when working with a non-English language is the scarcity of annotated training data.
We design a simple but effective ensemble-based framework that combines various transfer learning techniques.
We also propose a low-cost transfer learning (TL) method that bootstraps coreference resolution models by utilizing Wikipedia anchor texts.
arXiv Detail & Related papers (2023-01-22T18:22:55Z) - Adapted Multimodal BERT with Layer-wise Fusion for Sentiment Analysis [84.12658971655253]
We propose Adapted Multimodal BERT, a BERT-based architecture for multimodal tasks.
The adapter adjusts the pretrained language model for the task at hand, while the fusion layers perform task-specific, layer-wise fusion of audio-visual information with textual BERT representations.
In our ablations, we see that this approach leads to efficient models that can outperform their fine-tuned counterparts and are robust to input noise.
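The adapter component generally follows the standard bottleneck pattern: a down-projection, nonlinearity, up-projection, and residual connection. A minimal sketch, with hypothetical sizes:

```python
# Standard bottleneck adapter: small down/up projection with a
# residual connection, inserted into a frozen pretrained layer.
# Hidden and bottleneck sizes are hypothetical.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pretrained representation
        # intact; only the small adapter weights are trained.
        return x + self.up(self.act(self.down(x)))

hidden_states = torch.randn(2, 16, 768)  # (batch, seq, hidden)
print(Adapter()(hidden_states).shape)    # torch.Size([2, 16, 768])
```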
arXiv Detail & Related papers (2022-12-01T17:31:42Z) - BJTU-WeChat's Systems for the WMT22 Chat Translation Task [66.81525961469494]
This paper introduces the joint submission of Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German.
Building on the Transformer, we apply several effective variants.
Our systems achieve COMET scores of 0.810 and 0.946.
arXiv Detail & Related papers (2022-11-28T02:35:04Z) - Alibaba-Translate China's Submission for WMT 2022 Quality Estimation
Shared Task [80.22825549235556]
We present our submission, named UniTE, to the sentence-level MQM benchmark of the Quality Estimation Shared Task.
Specifically, our systems employ the framework of UniTE, which combines three types of input formats during training with a pre-trained language model.
Results show that our models reach the 1st overall ranking in the Multilingual and English-Russian settings, and the 2nd overall ranking in the English-German and Chinese-English settings.
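The "three input formats" refer to pairing the hypothesis with the source only, the reference only, or both. A hedged sketch of how such inputs might be assembled (the separator handling is illustrative; the real system follows its own pretrained model's conventions):

```python
# Sketch of UniTE-style input formats for quality estimation:
# hypothesis paired with source only, reference only, and both.
# The </s> separator is illustrative, not the system's exact scheme.
def unite_inputs(hyp: str, src: str, ref: str, sep: str = " </s> "):
    return {
        "src":     hyp + sep + src,              # source-only format
        "ref":     hyp + sep + ref,              # reference-only format
        "src+ref": hyp + sep + src + sep + ref,  # unified format
    }

formats = unite_inputs(
    hyp="Die Katze sitzt auf der Matte.",
    src="The cat sits on the mat.",
    ref="Die Katze sitzt auf der Matte.",
)
for name, text in formats.items():
    print(f"{name}: {text}")
```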
arXiv Detail & Related papers (2022-10-18T08:55:27Z) - HFL at SemEval-2022 Task 8: A Linguistics-inspired Regression Model with
Data Augmentation for Multilingual News Similarity [16.454545004093735]
This paper describes our system designed for SemEval-2022 Task 8: Multilingual News Article Similarity.
We proposed a linguistics-inspired model trained with a few task-specific strategies.
Our system ranked 1st on the leaderboard while achieving a Pearson's Correlation Coefficient of 0.818 on the official evaluation set.
arXiv Detail & Related papers (2022-04-11T03:08:37Z) - Mixed-Lingual Pre-training for Cross-lingual Summarization [54.4823498438831]
Cross-lingual Summarization aims at producing a summary in the target language for an article in the source language.
We propose a solution based on mixed-lingual pre-training that leverages both cross-lingual tasks like translation and monolingual tasks like masked language models.
Our model achieves improvements of 2.82 (English to Chinese) and 1.15 (Chinese to English) ROUGE-1 points over state-of-the-art results.
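A toy sketch of the mixed-lingual idea: a shared encoder updated by alternating a monolingual masked-LM-style objective and a cross-lingual translation-style objective. Everything below is a simplified stand-in (random token batches, linear heads) rather than the paper's actual architecture:

```python
# Toy multi-task pre-training loop: both objectives update the
# shared encoder. Random batches and linear heads are stand-ins
# for real masked-LM and translation components.
import torch
import torch.nn as nn

vocab, hidden = 1000, 128
encoder = nn.Sequential(
    nn.Embedding(vocab, hidden),
    nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True))
mlm_head = nn.Linear(hidden, vocab)    # predicts masked tokens
trans_head = nn.Linear(hidden, vocab)  # stand-in for a decoder
params = (list(encoder.parameters()) + list(mlm_head.parameters())
          + list(trans_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(2):
    src = torch.randint(0, vocab, (8, 32))  # toy token batches
    tgt = torch.randint(0, vocab, (8, 32))
    h = encoder(src)
    # Alternate objectives across steps; the encoder is shared.
    head = mlm_head if step % 2 == 0 else trans_head
    loss = loss_fn(head(h).reshape(-1, vocab), tgt.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```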
arXiv Detail & Related papers (2020-10-18T00:21:53Z) - BUT-FIT at SemEval-2020 Task 4: Multilingual commonsense [1.433758865948252]
This paper describes the work of the BUT-FIT team at SemEval-2020 Task 4: Commonsense Validation and Explanation.
In subtasks A and B, our submissions are based on pretrained language representation models (namely ALBERT) and data augmentation.
We experimented with solving the task for another language, Czech, by means of multilingual models and a machine-translated dataset.
We show that with a strong machine translation system, our system can be used in another language with a small accuracy loss.
arXiv Detail & Related papers (2020-08-17T12:45:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.