BRUMS at SemEval-2020 Task 3: Contextualised Embeddings for Predicting
the (Graded) Effect of Context in Word Similarity
- URL: http://arxiv.org/abs/2010.06269v2
- Date: Thu, 20 May 2021 14:44:16 GMT
- Title: BRUMS at SemEval-2020 Task 3: Contextualised Embeddings for Predicting
the (Graded) Effect of Context in Word Similarity
- Authors: Hansi Hettiarachchi, Tharindu Ranasinghe
- Abstract summary: This paper presents the team BRUMS submission to SemEval-2020 Task 3: Graded Word Similarity in Context.
The system utilises state-of-the-art contextualised word embeddings with task-specific adaptations, including stacked embeddings and average embeddings.
In the final rankings, our approach places within the top 5 solutions for each language and secures 1st place in the Finnish subtask 2.
- Score: 9.710464466895521
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the team BRUMS submission to SemEval-2020 Task 3: Graded
Word Similarity in Context. The system utilises state-of-the-art contextualised
word embeddings, which have some task-specific adaptations, including stacked
embeddings and average embeddings. Overall, the approach achieves good
evaluation scores across all the languages, while maintaining simplicity.
In the final rankings, our approach places within the top 5 solutions for each
language and secures 1st place in the Finnish subtask 2.
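The abstract names two task-specific adaptations, stacked embeddings and average embeddings, without giving implementation details. Below is a minimal numpy sketch of what these terms conventionally mean for contextualised models (concatenating vs. averaging per-layer hidden states for a token); the function names and toy vectors are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stacked_embedding(layer_vectors):
    """Concatenate a token's per-layer vectors into one long vector."""
    return np.concatenate(layer_vectors)

def average_embedding(layer_vectors):
    """Element-wise mean of a token's per-layer vectors."""
    return np.mean(layer_vectors, axis=0)

# Toy example: 3 layers, 4-dimensional hidden states for one token.
layers = [np.array([1.0, 0.0, 0.0, 2.0]),
          np.array([0.0, 1.0, 0.0, 2.0]),
          np.array([0.0, 0.0, 1.0, 2.0])]

stacked = stacked_embedding(layers)   # shape (12,): dimensions grow with depth
averaged = average_embedding(layers)  # shape (4,): dimensionality preserved
```

Stacking preserves layer-specific information at the cost of a larger vector, while averaging keeps the original dimensionality; which variant helps is an empirical question per task.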
Related papers
- Compositional Generalization for Data-to-Text Generation [86.79706513098104]
We propose a novel model that addresses compositional generalization by clustering predicates into groups.
Our model generates text in a sentence-by-sentence manner, relying on one cluster of predicates at a time.
It significantly outperforms T5 baselines across all evaluation metrics.
arXiv Detail & Related papers (2023-12-05T13:23:15Z)
- IXA/Cogcomp at SemEval-2023 Task 2: Context-enriched Multilingual Named
Entity Recognition using Knowledge Bases [53.054598423181844]
We present a novel NER cascade approach comprising three steps.
We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities.
Our system exhibits robust performance in the MultiCoNER2 shared task, even in the low-resource language setting.
arXiv Detail & Related papers (2023-04-20T20:30:34Z)
- Hierarchical Modular Network for Video Captioning [162.70349114104107]
We propose a hierarchical modular network to bridge video representations and linguistic semantics from three levels before generating captions.
The proposed method performs favorably against state-of-the-art models on two widely-used benchmarks, reaching CIDEr scores of 104.0% on MSVD and 51.5% on MSR-VTT.
arXiv Detail & Related papers (2021-11-24T13:07:05Z)
- Yseop at FinSim-3 Shared Task 2021: Specializing Financial Domain
Learning with Phrase Representations [0.0]
We present our approaches for the FinSim-3 Shared Task 2021: Learning Semantic Similarities for the Financial Domain.
The aim of this task is to correctly classify a list of given terms from the financial domain into the most relevant hypernym.
Our system ranks 2nd overall on both metrics, scoring 0.917 on Average Accuracy and 1.141 on Mean Rank.
arXiv Detail & Related papers (2021-08-21T10:53:12Z) - SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual
Media [50.29389719723529]
We present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media.
The goal of this shared task is to design automatic methods for emphasis selection.
The analysis of systems submitted to the task indicates that BERT and RoBERTa were the most common choice of pre-trained models used.
arXiv Detail & Related papers (2020-08-07T17:24:53Z) - MULTISEM at SemEval-2020 Task 3: Fine-tuning BERT for Lexical Meaning [6.167728295758172]
We present the MULTISEM systems submitted to SemEval 2020 Task 3: Graded Word Similarity in Context (GWSC)
We experiment with injecting semantic knowledge into pre-trained BERT models through fine-tuning on lexical semantic tasks related to GWSC.
We use existing semantically annotated datasets and propose to approximate similarity through automatically generated lexical substitutes in context.
arXiv Detail & Related papers (2020-07-24T09:50:26Z) - RUSSE'2020: Findings of the First Taxonomy Enrichment Task for the
Russian language [70.27072729280528]
This paper describes the results of the first shared task on taxonomy enrichment for the Russian language.
16 teams participated in the task, demonstrating high results, with more than half of them outperforming the provided baseline.
arXiv Detail & Related papers (2020-05-22T13:30:37Z) - UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical
Semantic Change Detection [5.099262949886174]
This paper focuses on Subtask 2, ranking words by the degree of their semantic drift over time.
We find that the most effective algorithms rely on the cosine similarity between averaged token embeddings and the pairwise distances between token embeddings.
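The cosine-between-averaged-embeddings approach mentioned above can be sketched in a few lines of numpy. This is an illustrative reconstruction of the general technique, not the authors' code; the function names and toy vectors are hypothetical.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_change_score(embeddings_t1, embeddings_t2):
    """Drift of a word between two time periods: 1 minus the cosine
    similarity of its averaged token embeddings in each period.
    Rows of each input matrix are token embeddings of the word's occurrences."""
    mean_t1 = np.mean(embeddings_t1, axis=0)
    mean_t2 = np.mean(embeddings_t2, axis=0)
    return 1.0 - cosine(mean_t1, mean_t2)

# Toy usage: occurrences of one word in an older and a newer corpus.
old = np.array([[1.0, 0.0], [0.9, 0.1]])
new = np.array([[0.0, 1.0], [0.1, 0.9]])
drift = semantic_change_score(old, new)  # large value -> strong drift
```

Ranking candidate words by this score directly yields the Subtask 2 ordering described above.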
arXiv Detail & Related papers (2020-04-30T18:43:57Z)
- CIRCE at SemEval-2020 Task 1: Ensembling Context-Free and
Context-Dependent Word Representations [0.0]
We present an ensemble model that makes predictions based on context-free and context-dependent word representations.
The key findings are that (1) context-free word representations are a powerful and robust baseline, (2) a sentence classification objective can be used to obtain useful context-dependent word representations, and (3) combining those representations increases performance on some datasets while decreasing performance on others.
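The abstract does not specify how the two representation types are combined; a common and minimal ensembling scheme is a weighted linear interpolation of the two models' scores, sketched below. The function name, the `weight` parameter, and the example values are hypothetical, not from the paper.

```python
def ensemble_score(context_free_score, context_dependent_score, weight=0.5):
    """Linear interpolation of a context-free and a context-dependent
    model score; `weight` is a hypothetical mixing parameter in [0, 1]."""
    return weight * context_free_score + (1 - weight) * context_dependent_score

# Toy usage: two normalised scores for the same target word.
s = ensemble_score(0.8, 0.4, weight=0.25)  # 0.25*0.8 + 0.75*0.4 = 0.5
```

Tuning `weight` per dataset would reflect finding (3): the combination can help or hurt depending on the data.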
arXiv Detail & Related papers (2020-04-30T13:18:29Z)
- A Survey on Contextual Embeddings [48.04732268018772]
Contextual embeddings assign each word a representation based on its context, capturing uses of words across varied contexts and encoding knowledge that transfers across languages.
We review existing contextual embedding models, cross-lingual polyglot pre-training, the application of contextual embeddings in downstream tasks, model compression, and model analyses.
arXiv Detail & Related papers (2020-03-16T15:22:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.