drsphelps at SemEval-2022 Task 2: Learning idiom representations using
BERTRAM
- URL: http://arxiv.org/abs/2204.02821v2
- Date: Thu, 7 Apr 2022 15:17:05 GMT
- Title: drsphelps at SemEval-2022 Task 2: Learning idiom representations using
BERTRAM
- Authors: Dylan Phelps
- Abstract summary: We modify a standard BERT transformer by adding embeddings for each idiom.
We show that this technique increases the quality of representations and leads to better performance on the task.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes our system for SemEval-2022 Task 2 Multilingual
Idiomaticity Detection and Sentence Embedding sub-task B. We modify a standard
BERT sentence transformer by adding embeddings for each idiom, which are
created using BERTRAM and a small number of contexts. We show that this
technique increases the quality of idiom representations and leads to better
performance on the task. We also perform analysis on our final results and show
that the quality of the produced idiom embeddings is highly sensitive to the
quality of the input contexts.
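As a rough illustration of the approach described in the abstract (not the authors' code), the sketch below registers idiom tokens in a BERT tokenizer and overwrites their input-embedding rows with pre-computed vectors; the backbone name and the idiom token are placeholders, and the random vectors stand in for BERTRAM outputs trained from a small number of contexts.
```python
# Rough sketch, not the authors' code: add idiom tokens to a BERT model
# and overwrite their embedding rows with pre-computed idiom vectors.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Hypothetical idiom -> vector map; in the paper these vectors would come
# from BERTRAM, learned from a handful of contexts per idiom.
idiom_vectors = {
    "[IDIOM_big_fish]": torch.randn(model.config.hidden_size),
}

# Register the idioms as single tokens and grow the embedding matrix.
tokenizer.add_tokens(list(idiom_vectors))
model.resize_token_embeddings(len(tokenizer))

# Overwrite the newly added rows with the idiom vectors.
with torch.no_grad():
    embeddings = model.get_input_embeddings()
    for token, vector in idiom_vectors.items():
        embeddings.weight[tokenizer.convert_tokens_to_ids(token)] = vector
```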
Related papers
- BERT or FastText? A Comparative Analysis of Contextual as well as Non-Contextual Embeddings [0.4194295877935868]
The choice of embeddings plays a critical role in enhancing the performance of NLP tasks.
In this study, we investigate the impact of various embedding techniques (contextual BERT-based, non-contextual BERT-based, and FastText-based) on NLP classification tasks specific to the Marathi language.
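A minimal sketch of the contrast being studied, under assumptions of mine (multilingual BERT as the backbone; the paper's Marathi models and FastText baseline are not reproduced here): a contextual sentence vector from the encoder output versus a non-contextual vector built from the static input-embedding table.
```python
# Illustration only: contextual vs. non-contextual BERT-based features.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"  # assumed backbone
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name).eval()

def contextual_vec(text: str) -> torch.Tensor:
    # [CLS] vector from the full encoder: context-dependent.
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        return bert(**enc).last_hidden_state[0, 0]

def non_contextual_vec(text: str) -> torch.Tensor:
    # Average of static input embeddings: no encoder, no context.
    ids = tok(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        return bert.get_input_embeddings()(ids).mean(dim=1)[0]
```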
arXiv Detail & Related papers (2024-11-26T18:25:57Z)
- Unify word-level and span-level tasks: NJUNLP's Participation for the WMT2023 Quality Estimation Shared Task [59.46906545506715]
We present the NJUNLP team's submissions to the WMT 2023 Quality Estimation (QE) shared task.
Our team submitted predictions for the English-German language pair on both sub-tasks.
Our models achieved the best results in English-German for both word-level and fine-grained error span detection sub-tasks.
arXiv Detail & Related papers (2023-09-23T01:52:14Z)
- niksss at HinglishEval: Language-agnostic BERT-based Contextual Embeddings with Catboost for Quality Evaluation of the Low-Resource Synthetically Generated Code-Mixed Hinglish Text [0.0]
This paper presents the system description for the HinglishEval challenge at INLG 2022.
The goal of this task was to investigate the factors influencing the quality of the code-mixed text generation system.
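One way to read this pipeline, sketched under assumptions of mine (LaBSE as the language-agnostic encoder; the Hinglish strings and scores are toy data): pooled sentence embeddings become tabular features for a CatBoost regressor that predicts the quality rating.
```python
# Sketch only: language-agnostic BERT embeddings + CatBoost regression.
import numpy as np
import torch
from catboost import CatBoostRegressor
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/LaBSE"  # assumed language-agnostic encoder
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name).eval()

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return enc(**batch).last_hidden_state[:, 0].numpy()  # [CLS] pooling

# Hypothetical generated sentences with human quality ratings.
X_train = embed(["kal movie dekhne chalein?", "ye plan sahi hai bro"])
y_train = np.array([4.0, 3.5])

reg = CatBoostRegressor(iterations=200, verbose=False)
reg.fit(X_train, y_train)
print(reg.predict(embed(["office ke baad milte hain"])))
```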
arXiv Detail & Related papers (2022-06-17T17:36:03Z)
- kpfriends at SemEval-2022 Task 2: NEAMER -- Named Entity Augmented Multi-word Expression Recognizer [0.6091702876917281]
This system is inspired by non-compositionality characteristics shared between named entities and idiomatic expressions.
We achieve SOTA with an F1 of 0.9395 during the post-evaluation phase and observe improved training stability.
Lastly, we experiment with non-compositionality knowledge transfer, cross-lingual fine-tuning and locality features.
arXiv Detail & Related papers (2022-04-17T22:58:33Z)
- PromptBERT: Improving BERT Sentence Embeddings with Prompts [95.45347849834765]
We propose a prompt-based sentence embedding method that reduces token embedding biases and makes the original BERT layers more effective.
We also propose a novel unsupervised training objective based on template denoising, which substantially narrows the performance gap between the supervised and unsupervised settings.
Our fine-tuned method outperforms the state-of-the-art method SimCSE in both unsupervised and supervised settings.
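A minimal sketch of the prompt idea as I understand it, not the authors' code (the template wording is illustrative): the sentence is wrapped in a template ending in a mask token, and the hidden state at the mask position serves as the sentence embedding.
```python
# Sketch: sentence embedding taken from the [MASK] position of a prompt.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name).eval()

def prompt_embedding(sentence: str) -> torch.Tensor:
    template = f'This sentence : "{sentence}" means {tok.mask_token} .'
    enc = tok(template, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    return hidden[mask_pos]

a = prompt_embedding("A man is playing guitar.")
b = prompt_embedding("Someone is playing an instrument.")
print(torch.cosine_similarity(a, b, dim=0).item())
```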
arXiv Detail & Related papers (2022-01-12T06:54:21Z)
- UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information [0.6980076213134383]
We test the effectiveness of integrating Term Frequency-Inverse Document Frequency (TF-IDF) with BERT on the task of identifying abuse on social media.
We achieve a score within two points of the top-performing team, and in Sub-Task B (target detection) we are ranked 4th of the 44 participating teams.
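A hedged sketch of one plausible way to combine the two signals (simple feature concatenation; the paper may integrate them differently): corpus-level TF-IDF features stacked next to BERT [CLS] vectors and fed to a linear classifier.
```python
# Sketch only: concatenating TF-IDF and BERT features for abuse detection.
import numpy as np
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

texts = ["you are brilliant", "you are a waste of space"]  # toy corpus
labels = np.array([0, 1])                                  # 1 = abusive

tfidf = TfidfVectorizer().fit(texts)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

def cls_vec(text: str) -> np.ndarray:
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return bert(**enc).last_hidden_state[0, 0].numpy()

X = np.hstack([tfidf.transform(texts).toarray(),
               np.stack([cls_vec(t) for t in texts])])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```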
arXiv Detail & Related papers (2020-08-19T16:47:15Z)
- Syntactic Structure Distillation Pretraining For Bidirectional Encoders [49.483357228441434]
We introduce a knowledge distillation strategy for injecting syntactic biases into BERT pretraining.
We distill the approximate marginal distribution over words in context from the syntactic LM.
Our findings demonstrate the benefits of syntactic biases, even in representation learners that exploit large amounts of data.
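The distillation objective could look roughly like the following sketch, which is my paraphrase rather than the paper's loss: the student's masked-word distribution is pulled toward the syntactic LM's marginal distribution with a KL term mixed into the standard MLM loss (the weight alpha is a made-up hyperparameter).
```python
# Paraphrase of the distillation idea, not the paper's exact loss.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, labels, alpha=0.5):
    # student_logits: (n_masks, vocab) student predictions at masked slots
    # teacher_probs:  (n_masks, vocab) marginals from the syntactic LM
    # labels:         (n_masks,) gold token ids; alpha is illustrative
    mlm = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  teacher_probs, reduction="batchmean")
    return (1 - alpha) * mlm + alpha * kl
```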
arXiv Detail & Related papers (2020-05-27T16:44:01Z)
- BURT: BERT-inspired Universal Representation from Twin Structure [89.82415322763475]
BURT (BERT-inspired Universal Representation from Twin Structure) is capable of generating universal, fixed-size representations for input sequences of any granularity.
Our proposed BURT adopts a Siamese network, learning sentence-level representations from a natural language inference dataset and word/phrase-level representations from a paraphrasing dataset.
We evaluate BURT across different granularities of text similarity tasks, including STS tasks, SemEval2013 Task 5(a) and some commonly used word similarity tasks.
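A minimal sketch of a Siamese-style encoder at inference time, with details assumed (mean pooling and cosine scoring; the NLI and paraphrase training from the paper is omitted): one shared encoder maps inputs of any granularity to fixed-size vectors that can be compared directly.
```python
# Sketch: shared encoder with mean pooling, compared by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-uncased"  # assumed backbone
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name).eval()

def encode(text: str) -> torch.Tensor:
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state        # (1, seq, hidden)
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)        # mean pooling

# Works for words, phrases, or sentences alike.
print(torch.cosine_similarity(encode("heavy rain"),
                              encode("a sudden downpour")).item())
```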
arXiv Detail & Related papers (2020-04-29T04:01:52Z)
- Incorporating BERT into Neural Machine Translation [251.54280200353674]
We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence.
We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets.
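A rough sketch of the fusion mechanism, not the released implementation: each encoder layer attends both to its own hidden states and to frozen BERT representations of the source, and the two attention outputs are averaged (the dimensions and the 0.5 mixing weight are illustrative).
```python
# Sketch of a BERT-fused encoder layer (sizes and weights illustrative).
import torch
import torch.nn as nn

class BertFusedEncoderLayer(nn.Module):
    def __init__(self, d_model=512, d_bert=768, nhead=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.bert_attn = nn.MultiheadAttention(d_model, nhead, kdim=d_bert,
                                               vdim=d_bert, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, 2048), nn.ReLU(),
                                 nn.Linear(2048, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x, bert_out):
        s, _ = self.self_attn(x, x, x)                # attend to own states
        b, _ = self.bert_attn(x, bert_out, bert_out)  # attend to BERT output
        x = self.norm1(x + 0.5 * (s + b))             # average the two paths
        return self.norm2(x + self.ffn(x))

layer = BertFusedEncoderLayer()
out = layer(torch.randn(2, 7, 512), torch.randn(2, 9, 768))  # (2, 7, 512)
```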
arXiv Detail & Related papers (2020-02-17T08:13:36Z)
- Multilingual Alignment of Contextual Word Representations [49.42244463346612]
After alignment, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model.
We introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer.
These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.
arXiv Detail & Related papers (2020-02-10T03:27:21Z)
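A toy sketch of contextual word retrieval, with the sentences and the sub-word matching entirely my own: a word occurrence is embedded in context with multilingual BERT and matched to its nearest occurrence in another language by cosine similarity.
```python
# Toy sketch: cross-lingual contextual word retrieval with multilingual BERT.
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name).eval()

def word_vec(sentence: str, word: str) -> torch.Tensor:
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]
    pieces = tok(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    for i in range(len(ids) - len(pieces) + 1):
        if ids[i:i + len(pieces)] == pieces:      # locate the word's pieces
            return hidden[i:i + len(pieces)].mean(0)
    raise ValueError(f"{word!r} not found in sentence")

query = word_vec("The bank approved the loan.", "bank")
candidates = {"Bank": word_vec("Die Bank genehmigte den Kredit.", "Bank"),
              "Ufer": word_vec("Wir standen am Ufer des Flusses.", "Ufer")}
print(max(candidates,
          key=lambda w: torch.cosine_similarity(query, candidates[w], dim=0)))
```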