niksss at HinglishEval: Language-agnostic BERT-based Contextual
Embeddings with Catboost for Quality Evaluation of the Low-Resource
Synthetically Generated Code-Mixed Hinglish Text
- URL: http://arxiv.org/abs/2206.08910v1
- Date: Fri, 17 Jun 2022 17:36:03 GMT
- Title: niksss at HinglishEval: Language-agnostic BERT-based Contextual
Embeddings with Catboost for Quality Evaluation of the Low-Resource
Synthetically Generated Code-Mixed Hinglish Text
- Authors: Nikhil Singh
- Abstract summary: This paper describes the system submitted to the HinglishEval challenge at INLG 2022.
The goal of this task was to investigate the factors influencing the quality of code-mixed text generation systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes our system for the HinglishEval challenge at
INLG 2022. The goal of this task was to investigate the factors influencing
the quality of code-mixed text generation systems. The task was divided into
two subtasks: quality rating prediction and annotator disagreement prediction
on the synthetic Hinglish dataset. We attempted to solve these tasks using
sentence-level embeddings, obtained by mean-pooling the contextualized word
embeddings of all input tokens in a text. We experimented with various
classifiers on top of the embeddings produced for the respective tasks. Our
best-performing system ranked 1st on subtask B and 3rd on subtask A.
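
A minimal sketch of the pipeline the title and abstract describe: mean-pooled LaBSE token embeddings fed to a CatBoost classifier. This is not the authors' released code; the checkpoint name, hyperparameters, and toy data are illustrative assumptions.

```python
# Sketch of the described pipeline: LaBSE token embeddings, mean pooling,
# then a CatBoost classifier. Checkpoint, settings, and data are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel
from catboost import CatBoostClassifier

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE")
encoder = AutoModel.from_pretrained("sentence-transformers/LaBSE")

def embed(sentences):
    """Mean-pool contextual token embeddings into one vector per sentence."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state       # (batch, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # zero out padding
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Hypothetical Hinglish sentences with human quality ratings.
X = embed(["yeh movie bahut acchi thi", "kal office jaana hai kya"])
y = [4, 3]

clf = CatBoostClassifier(iterations=300, depth=6, verbose=False)
clf.fit(X, y)
print(clf.predict(embed(["weekend pe plan kya hai"])))
```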
Related papers
- BERT or FastText? A Comparative Analysis of Contextual as well as Non-Contextual Embeddings
The choice of embeddings plays a critical role in enhancing the performance of NLP tasks.
In this study, we investigate the impact of various embedding techniques (contextual BERT-based, non-contextual BERT-based, and FastText-based) on NLP classification tasks specific to the Marathi language.
arXiv Detail & Related papers (2024-11-26T18:25:57Z)
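
The contrast studied above can be made concrete: a contextual encoder assigns the same word a different vector in each sentence, whereas FastText looks up one static vector per word. A minimal sketch with an assumed English checkpoint (the paper itself targets Marathi):

```python
# BERT re-encodes "bank" differently per sentence; FastText would return one
# fixed vector for it. Checkpoint and sentences are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def word_vec(sentence, word):
    """Contextual vector for `word` as it appears inside `sentence`."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]
    idx = enc.input_ids[0].tolist().index(tok.convert_tokens_to_ids(word))
    return hidden[idx]

v_river = word_vec("i sat by the river bank", "bank")
v_money = word_vec("i deposited cash at the bank", "bank")
# Same surface word, different vectors: that is what "contextual" means here.
print(torch.cosine_similarity(v_river, v_money, dim=0).item())
```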
- Unify word-level and span-level tasks: NJUNLP's Participation for the WMT2023 Quality Estimation Shared Task
We present the NJUNLP team's submission to the WMT 2023 Quality Estimation (QE) shared task.
Our team submitted predictions for the English-German language pair on both sub-tasks.
Our models achieved the best results in English-German for both word-level and fine-grained error span detection sub-tasks.
arXiv Detail & Related papers (2023-09-23T01:52:14Z)
- HIT-SCIR at MMNLU-22: Consistency Regularization for Multilingual Spoken Language Understanding
We propose to use consistency regularization based on a hybrid data augmentation strategy.
We conduct experiments on the MASSIVE dataset under both full-dataset and zero-shot settings.
Our proposed method improves the performance on both intent detection and slot filling tasks.
arXiv Detail & Related papers (2023-01-05T11:21:15Z)
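
As a generic illustration of the consistency-regularization idea named above (not HIT-SCIR's implementation): the loss ties the model's predictions on an input and on an augmented view of it together, here via a symmetric KL term.

```python
# Generic consistency-regularization loss: supervised cross-entropy on the
# original input plus a symmetric KL term tying together the predictions on
# the original and augmented views. The weighting alpha is a placeholder.
import torch.nn.functional as F

def consistency_loss(model, x, x_aug, y, alpha=1.0):
    logits = model(x)          # predictions on the original input
    logits_aug = model(x_aug)  # predictions on the augmented view
    ce = F.cross_entropy(logits, y)
    p = F.log_softmax(logits, dim=-1)
    q = F.log_softmax(logits_aug, dim=-1)
    kl = 0.5 * (F.kl_div(q, p, log_target=True, reduction="batchmean")
                + F.kl_div(p, q, log_target=True, reduction="batchmean"))
    return ce + alpha * kl
```

In the paper's setting, the augmented view would come from its hybrid data augmentation strategy; here the augmentation is left abstract.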
- SESCORE2: Learning Text Generation Evaluation via Synthesizing Realistic Mistakes
We propose SESCORE2, a self-supervised approach for training a model-based metric for text generation evaluation.
The key concept is to synthesize realistic model mistakes by perturbing sentences retrieved from a corpus.
We evaluate SESCORE2 and previous methods on four text generation tasks across three languages.
arXiv Detail & Related papers (2022-12-19T09:02:16Z)
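
The data-synthesis step SESCORE2 builds on, creating training pairs by perturbing clean sentences, can be caricatured with simple token-level edits; the real method produces far more realistic, retrieval-based mistakes.

```python
# Toy version of the data-synthesis idea: perturb a clean sentence to fake a
# model mistake that a learned metric can be trained to penalize.
import random

rng = random.Random(0)

def perturb(sentence, n_edits=2):
    """Apply a few token-level edits to simulate a generation error."""
    tokens = sentence.split()
    for _ in range(n_edits):
        op = rng.choice(["drop", "swap", "repeat"])
        i = rng.randrange(len(tokens))
        if op == "drop" and len(tokens) > 1:
            del tokens[i]
        elif op == "swap":
            j = rng.randrange(len(tokens))
            tokens[i], tokens[j] = tokens[j], tokens[i]
        elif op == "repeat":
            tokens.insert(i, tokens[i])
    return " ".join(tokens)

clean = "the quick brown fox jumps over the lazy dog"
print(perturb(clean))  # a synthetic mistake to train the metric against
```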
- Findings of the The RuATD Shared Task 2022 on Artificial Text Detection in Russian
We present the shared task on artificial text detection in Russian, organized in 2022 as part of the Dialogue Evaluation initiative.
The dataset includes texts from 14 text generators: one human writer and 13 generative models, each fine-tuned for one or more generation tasks.
The human-written texts are collected from publicly available resources across multiple domains.
arXiv Detail & Related papers (2022-06-03T14:12:33Z)
- drsphelps at SemEval-2022 Task 2: Learning idiom representations using BERTRAM
We modify a standard BERT transformer by adding embeddings for each idiom.
We show that this technique increases the quality of representations and leads to better performance on the task.
arXiv Detail & Related papers (2022-04-06T13:32:37Z)
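
Registering idioms as single tokens with their own embedding rows is straightforward in Hugging Face transformers; BERTRAM's contribution is how those embeddings are learned, which this sketch does not reproduce (the idiom list is hypothetical).

```python
# Register idioms as single tokens and give them their own embedding rows.
# BERTRAM's learned initialization of those rows is the paper's contribution
# and is not reproduced here; new rows get the default random init.
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

idioms = ["kick_the_bucket", "spill_the_beans"]  # hypothetical idiom list
tok.add_tokens(idioms)
model.resize_token_embeddings(len(tok))  # fresh rows for the new tokens

enc = tok("did he really kick_the_bucket ?", return_tensors="pt")
out = model(**enc)  # the idiom is now one token with its own embedding
print(out.last_hidden_state.shape)
```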
- Quality Evaluation of the Low-Resource Synthetically Generated Code-Mixed Hinglish Text
We synthetically generate code-mixed Hinglish sentences using two distinct approaches.
We employ human annotators to rate the generation quality.
arXiv Detail & Related papers (2021-08-04T06:02:46Z)
- Automated Concatenation of Embeddings for Structured Prediction
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
arXiv Detail & Related papers (2020-10-10T14:03:20Z)
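
A stripped-down caricature of the ACE search loop: a controller samples a binary mask over candidate embedding types, the concatenation is scored on a downstream task, and the controller is updated with REINFORCE. The candidate list, reward function, and update rule here are simplifications, not the paper's.

```python
# Caricature of the embedding-concatenation search: sample a binary mask over
# candidate embedding types, score it, and reinforce good choices. The reward
# is a stand-in; in the paper it is the accuracy of a trained task model.
import numpy as np

rng = np.random.default_rng(0)
candidates = ["bert", "flair", "fasttext", "elmo"]  # assumed candidate set
logits = np.zeros(len(candidates))                  # controller parameters

def dev_accuracy(mask):
    """Stand-in reward: pretend accuracy grows with the number of embeddings."""
    return 0.8 + 0.05 * mask.sum() / len(mask) + 0.01 * rng.standard_normal()

baseline = 0.0
for step in range(100):
    probs = 1.0 / (1.0 + np.exp(-logits))            # independent Bernoullis
    mask = (rng.random(len(candidates)) < probs).astype(float)
    reward = dev_accuracy(mask)
    baseline = 0.9 * baseline + 0.1 * reward         # moving-average baseline
    grad = (mask - probs) * (reward - baseline)      # REINFORCE estimate
    logits += 0.5 * grad

print(dict(zip(candidates, np.round(1.0 / (1.0 + np.exp(-logits)), 2))))
```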
- Abstractive Summarization of Spoken and Written Instructions with BERT
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this being integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z)
- RUSSE'2020: Findings of the First Taxonomy Enrichment Task for the Russian language
This paper describes the results of the first shared task on taxonomy enrichment for the Russian language.
Sixteen teams participated in the task, with more than half of them outperforming the provided baseline.
arXiv Detail & Related papers (2020-05-22T13:30:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.