LMN at SemEval-2022 Task 11: A Transformer-based System for English
Named Entity Recognition
- URL: http://arxiv.org/abs/2203.03546v1
- Date: Sun, 13 Feb 2022 05:46:14 GMT
- Title: LMN at SemEval-2022 Task 11: A Transformer-based System for English
Named Entity Recognition
- Authors: Ngoc Minh Lai
- Abstract summary: We present our participation in the English track of SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition.
Inspired by the recent advances in pretrained Transformer language models, we propose a simple yet effective Transformer-based baseline for the task.
Our proposed approach shows competitive results on the leaderboard, ranking 12th out of 30 teams.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Processing complex and ambiguous named entities is a challenging research
problem, but it has not received sufficient attention from the natural language
processing community. In this short paper, we present our participation in the
English track of SemEval-2022 Task 11: Multilingual Complex Named Entity
Recognition. Inspired by the recent advances in pretrained Transformer language
models, we propose a simple yet effective Transformer-based baseline for the
task. Despite its simplicity, our proposed approach shows competitive results
on the leaderboard, ranking 12th out of 30 teams. Our system achieved a macro
F1 score of 72.50% on the held-out test set. We also explored a data
augmentation approach based on entity linking; although it did not improve the
final performance, we discuss it in this paper.
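The abstract does not spell out the exact configuration, so the following is only a minimal sketch of such a Transformer token-classification baseline using the Hugging Face transformers API; the roberta-base checkpoint and the BIO label set over MultiCoNER's six coarse entity types are illustrative assumptions, not the authors' settings.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# BIO tags over MultiCoNER's six coarse entity types (assumption).
labels = ["O"] + [f"{p}-{t}" for t in ("PER", "LOC", "GRP", "CORP", "PROD", "CW")
                  for p in ("B", "I")]
id2label = dict(enumerate(labels))
label2id = {l: i for i, l in id2label.items()}

name = "roberta-base"  # assumption: any pretrained encoder checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(name, add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    name, num_labels=len(labels), id2label=id2label, label2id=label2id)

words = "the lord of the rings was filmed in new zealand".split()
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # (1, seq_len, num_labels)

# Map subword predictions back to words via each word's first subword.
# (The classification head is untrained here; fine-tune on the task data first.)
pred = logits.argmax(-1)[0].tolist()
seen = set()
for idx, wid in enumerate(enc.word_ids(0)):
    if wid is not None and wid not in seen:
        seen.add(wid)
        print(words[wid], id2label[pred[idx]])
```

For training, word-level BIO labels would be expanded to subwords the same way, typically scoring only each word's first subword and masking the rest.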
Related papers
- Sharif-MGTD at SemEval-2024 Task 8: A Transformer-Based Approach to Detect Machine Generated Text [2.2039952888743253]
Machine-generated text (MGT) detection has emerged as a significant area of study within Natural Language Processing.
In this research, we explore the effectiveness of fine-tuning a RoBERTa-base transformer to address MGT detection.
Our proposed system achieves an accuracy of 78.9% on the test set, ranking 57th among participants.
arXiv Detail & Related papers (2024-07-16T14:33:01Z)
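As a rough illustration of the fine-tuning setup this entry describes, here is a minimal sketch of one training step for a binary human-vs-machine classifier built on roberta-base; the label convention, learning rate, and example texts are assumptions, not the authors' settings.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)  # assumption: 0 = human, 1 = machine-generated
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

texts = ["A sample human-written document.", "A sample generated document."]
gold = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=gold).loss  # cross-entropy over the two classes
loss.backward()
optimizer.step()
optimizer.zero_grad()
```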
- IITK at SemEval-2024 Task 1: Contrastive Learning and Autoencoders for Semantic Textual Relatedness in Multilingual Texts [4.78482610709922]
This paper describes our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness.
The challenge is focused on automatically detecting the degree of relatedness between pairs of sentences for 14 languages.
arXiv Detail & Related papers (2024-04-06T05:58:42Z)
- AAdaM at SemEval-2024 Task 1: Augmentation and Adaptation for Multilingual Semantic Textual Relatedness [16.896143197472114]
This paper presents our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness for African and Asian languages.
We propose using machine translation for data augmentation to address the low-resource challenge of limited training data.
We achieve competitive results in the shared task: our system performs best among all ranked teams in both subtask A (supervised learning) and subtask C (cross-lingual transfer).
arXiv Detail & Related papers (2024-04-01T21:21:15Z)
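The entry names the idea but not the pipeline, so this is only a minimal sketch of translation-based augmentation, under the stated assumption that relatedness scores roughly survive translation; translate is a hypothetical stand-in for any MT system, not a real API.

```python
# Hypothetical sketch: augment (sentence1, sentence2, score) pairs from a
# high-resource language by machine-translating them into the target language.
from typing import Callable, List, Tuple

Pair = Tuple[str, str, float]  # (sentence1, sentence2, relatedness score)

def augment_via_mt(pairs: List[Pair],
                   translate: Callable[[str, str, str], str],
                   src_lang: str, tgt_lang: str) -> List[Pair]:
    augmented = []
    for s1, s2, score in pairs:
        # Assumption: the relatedness label is carried over unchanged.
        augmented.append((translate(s1, src_lang, tgt_lang),
                          translate(s2, src_lang, tgt_lang),
                          score))
    return augmented

# Usage with a dummy "translator" that just tags the text:
demo = augment_via_mt([("A man plays guitar.", "Someone makes music.", 0.8)],
                      translate=lambda s, src, tgt: f"[{tgt}] {s}",
                      src_lang="eng", tgt_lang="amh")
print(demo)
```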
- Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE [93.98660272309974]
This report briefly describes our submission Vega v1 on the General Language Understanding Evaluation (GLUE) leaderboard.
GLUE is a collection of nine natural language understanding tasks, including question answering, linguistic acceptability, sentiment analysis, text similarity, paraphrase detection, and natural language inference.
With our optimized pretraining and fine-tuning strategies, our 1.3-billion-parameter model sets a new state of the art on 4 of the 9 tasks, achieving the best average score of 91.3.
arXiv Detail & Related papers (2023-02-18T09:26:35Z)
- Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE [203.65227947509933]
This report describes our JDExplore d-team's Vega v2 submission on the SuperGLUE leaderboard.
SuperGLUE is more challenging than the widely used General Language Understanding Evaluation (GLUE) benchmark, containing eight difficult language understanding tasks.
arXiv Detail & Related papers (2022-12-04T15:36:18Z)
- BJTU-WeChat's Systems for the WMT22 Chat Translation Task [66.81525961469494]
This paper introduces the joint submission of the Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German.
Building on the Transformer, we apply several effective variants.
Our systems achieve COMET scores of 0.810 and 0.946.
arXiv Detail & Related papers (2022-11-28T02:35:04Z)
- USTC-NELSLIP at SemEval-2022 Task 11: Gazetteer-Adapted Integration Network for Multilingual Complex Named Entity Recognition [41.26523047041553]
This paper describes the system developed by the USTC-NELSLIP team for SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition (MultiCoNER).
We propose a gazetteer-adapted integration network (GAIN) to improve the performance of language models for recognizing complex named entities.
arXiv Detail & Related papers (2022-03-07T09:05:37Z)
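The GAIN architecture itself is described in the paper, not here; the sketch below shows only the generic gazetteer component such systems build on: longest-match lookup that turns dictionary hits into per-token BIO features a model could consume as extra inputs. The gazetteer entries are toy examples.

```python
from typing import Dict, List

def gazetteer_features(tokens: List[str],
                       gazetteer: Dict[str, str]) -> List[str]:
    """Tag each token with a BIO gazetteer-match feature ('O' if no hit)."""
    feats = ["O"] * len(tokens)
    i = 0
    while i < len(tokens):
        # Try the longest span starting at i first (up to 5 tokens).
        for j in range(min(len(tokens), i + 5), i, -1):
            span = " ".join(tokens[i:j]).lower()
            if span in gazetteer:
                etype = gazetteer[span]
                feats[i] = f"B-{etype}"
                for k in range(i + 1, j):
                    feats[k] = f"I-{etype}"
                i = j
                break
        else:
            i += 1
    return feats

gaz = {"new zealand": "LOC", "lord of the rings": "CW"}
print(gazetteer_features("the lord of the rings movie".split(), gaz))
# ['O', 'B-CW', 'I-CW', 'I-CW', 'I-CW', 'O']
```

These features would then typically be embedded and combined with the encoder's token representations; how GAIN adapts that integration is exactly what the paper proposes.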
- DAMO-NLP at SemEval-2022 Task 11: A Knowledge-based System for Multilingual Named Entity Recognition [94.1865071914727]
MultiCoNER aims at detecting semantically ambiguous named entities in short and low-context settings for multiple languages.
Our team DAMO-NLP proposes a knowledge-based system, in which we build a multilingual knowledge base from Wikipedia.
Given an input sentence, our system effectively retrieves related contexts from the knowledge base.
Our system wins 10 out of 13 tracks in the MultiCoNER shared task.
arXiv Detail & Related papers (2022-03-01T15:29:35Z)
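A toy illustration of the retrieve-then-augment pattern this entry describes: fetch the most related context for an input sentence and append it so the NER model sees extra evidence. The real system builds its knowledge base from Wikipedia with a proper retriever; the naive word-overlap scoring and [SEP] concatenation below are assumptions.

```python
from typing import List

knowledge_base = [  # toy stand-in for Wikipedia paragraphs
    "The Lord of the Rings is a film trilogy directed by Peter Jackson.",
    "New Zealand is an island country in the Pacific Ocean.",
]

def retrieve_context(sentence: str, kb: List[str]) -> str:
    """Return the KB entry sharing the most words with the sentence."""
    words = set(sentence.lower().split())
    return max(kb, key=lambda doc: len(words & set(doc.lower().split())))

def augment_input(sentence: str, kb: List[str], sep: str = " [SEP] ") -> str:
    """Append the retrieved context so a downstream tagger sees both."""
    return sentence + sep + retrieve_context(sentence, kb)

print(augment_input("the lord of the rings was filmed in new zealand",
                    knowledge_base))
```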
- UPB at SemEval-2020 Task 9: Identifying Sentiment in Code-Mixed Social Media Texts using Transformers and Multi-Task Learning [1.7196613099537055]
We describe the systems developed by our team for SemEval-2020 Task 9.
We aim to cover two well-known code-mixed languages: Hindi-English and Spanish-English.
Our approach achieves promising performance on the Hindi-English task, with an average F1-score of 0.6850.
For the Spanish-English task, we obtained an average F1-score of 0.7064, ranking our team 17th out of 29 participants.
arXiv Detail & Related papers (2020-09-06T17:19:18Z)
- MC-BERT: Efficient Language Pre-Training via a Meta Controller [96.68140474547602]
Large-scale pre-training is computationally expensive.
ELECTRA, an early attempt to accelerate pre-training, trains a discriminative model that predicts whether each input token was replaced by a generator.
We propose a novel meta-learning framework, MC-BERT, to achieve better efficiency and effectiveness.
arXiv Detail & Related papers (2020-06-10T09:22:19Z)
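A minimal sketch of the ELECTRA-style objective this entry describes, with toy dimensions: a small generator proposes tokens at masked positions, and a discriminator predicts, for every position, whether the token was replaced. MC-BERT's meta-controller refinement is in the paper and not shown here.

```python
import torch
import torch.nn as nn

vocab, dim, seq_len, batch = 1000, 64, 16, 8

gen_emb = nn.Embedding(vocab, dim)
generator = nn.Linear(dim, vocab)         # toy masked-LM head
disc_emb = nn.Embedding(vocab, dim)
discriminator = nn.Linear(dim, 1)         # per-token replaced/original logit

tokens = torch.randint(0, vocab, (batch, seq_len))
mask = torch.rand(batch, seq_len) < 0.15  # 15% of positions get corrupted

# Generator samples replacements at masked positions; as in ELECTRA, no
# gradient flows to the discriminator through the sampling step.
# (A real setup also trains the generator with an MLM loss; omitted here.)
with torch.no_grad():
    gen_logits = generator(gen_emb(tokens))
    samples = torch.distributions.Categorical(logits=gen_logits).sample()
corrupted = torch.where(mask, samples, tokens)

# Discriminator: binary target is 1 wherever the token differs from the original.
is_replaced = (corrupted != tokens).float()
disc_logits = discriminator(disc_emb(corrupted)).squeeze(-1)
loss = nn.functional.binary_cross_entropy_with_logits(disc_logits, is_replaced)
loss.backward()
```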
- Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection [55.445023584632175]
We build an offensive language detection system that combines multi-task learning with BERT-based models.
Our model achieves a 91.51% F1 score on English Sub-task A, comparable to the first-place result.
arXiv Detail & Related papers (2020-04-28T11:27:24Z)
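A minimal sketch of the multi-task pattern the last entry describes: one shared BERT encoder with a small head per sub-task, trained on the summed losses. The two heads and their label meanings are illustrative assumptions, not the authors' exact layout.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskBert(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)   # shared across tasks
        hidden = self.encoder.config.hidden_size
        self.head_a = nn.Linear(hidden, 2)  # e.g. offensive vs. not (Sub-task A)
        self.head_b = nn.Linear(hidden, 2)  # e.g. targeted vs. untargeted

    def forward(self, **batch):
        h = self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] vector
        return self.head_a(h), self.head_b(h)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskBert()
batch = tok(["an example tweet"], return_tensors="pt")
logits_a, logits_b = model(**batch)
loss = (nn.functional.cross_entropy(logits_a, torch.tensor([1])) +
        nn.functional.cross_entropy(logits_b, torch.tensor([0])))
loss.backward()  # gradients flow into both heads and the shared encoder
```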
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.