IITK at SemEval-2020 Task 10: Transformers for Emphasis Selection
- URL: http://arxiv.org/abs/2007.10820v1
- Date: Tue, 21 Jul 2020 14:05:56 GMT
- Title: IITK at SemEval-2020 Task 10: Transformers for Emphasis Selection
- Authors: Vipul Singhal, Sahil Dhull, Rishabh Agarwal and Ashutosh Modi
- Abstract summary: This paper describes the system proposed for addressing the research problem posed in Task 10 of SemEval-2020: Emphasis Selection For Written Text in Visual Media.
We propose an end-to-end model that takes the text as input and, for each word, outputs the probability that the word should be emphasized.
- Score: 8.352123313770552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes the system proposed for addressing the research problem
posed in Task 10 of SemEval-2020: Emphasis Selection For Written Text in Visual
Media. We propose an end-to-end model that takes the text as input and, for
each word, outputs the probability that the word should be emphasized.
Our results show that transformer-based models are particularly effective in
this task. We achieved the best Match_m score (described in section 2.2) of
0.810 and were ranked third on the leaderboard.
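The per-word emphasis probability described in the abstract can be sketched as a token-level classification head over contextual embeddings. The following is an illustrative sketch only, not the authors' code; the function name, dimensions, and random toy inputs are invented for demonstration:

```python
import numpy as np

def emphasis_probs(token_embeddings: np.ndarray, w: np.ndarray, b: float = 0.0) -> np.ndarray:
    """Map each token's contextual embedding (n_tokens, d) to an
    emphasis probability in (0, 1) via a linear layer plus sigmoid."""
    logits = token_embeddings @ w + b        # shape: (n_tokens,)
    return 1.0 / (1.0 + np.exp(-logits))     # element-wise sigmoid

# Toy usage: 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 8))
weights = rng.normal(size=8)
probs = emphasis_probs(embeddings, weights)
```

In a full system, the embeddings would come from a pre-trained transformer and the head would be trained end-to-end on the task's emphasis annotations.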
Related papers
- Text2Topic: Multi-Label Text Classification System for Efficient Topic Detection in User Generated Content with Zero-Shot Capabilities [2.7311827519141363]
We propose Text to Topic (Text2Topic), which achieves high multi-label classification performance.
Text2Topic supports zero-shot predictions, produces domain-specific text embeddings, and enables production-scale batch-inference.
The model is deployed on a real-world stream processing platform, and it outperforms other models with 92.9% micro mAP.
arXiv Detail & Related papers (2023-10-23T11:33:24Z)
- KINLP at SemEval-2023 Task 12: Kinyarwanda Tweet Sentiment Analysis [1.2183405753834562]
This paper describes the system entered by the author to the SemEval-2023 Task 12: Sentiment analysis for African languages.
The system focuses on the Kinyarwanda language and uses a language-specific model.
arXiv Detail & Related papers (2023-04-25T04:30:03Z)
- BJTU-WeChat's Systems for the WMT22 Chat Translation Task [66.81525961469494]
This paper introduces the joint submission of the Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German.
Based on the Transformer, we apply several effective variants.
Our systems achieve 0.810 and 0.946 COMET scores.
arXiv Detail & Related papers (2022-11-28T02:35:04Z)
- Tencent AI Lab - Shanghai Jiao Tong University Low-Resource Translation System for the WMT22 Translation Task [49.916963624249355]
This paper describes Tencent AI Lab - Shanghai Jiao Tong University (TAL-SJTU) Low-Resource Translation systems for the WMT22 shared task.
We participate in the general translation task on English↔Livonian.
Our system is based on M2M100 with novel techniques that adapt it to the target language pair.
arXiv Detail & Related papers (2022-10-17T04:34:09Z)
- On Prosody Modeling for ASR+TTS based Voice Conversion [82.65378387724641]
In voice conversion, an approach showing promising results in the latest voice conversion challenge (VCC) 2020 is to first use an automatic speech recognition (ASR) model to transcribe the source speech into the underlying linguistic contents.
Such a paradigm, referred to as ASR+TTS, overlooks the modeling of prosody, which plays an important role in speech naturalness and conversion similarity.
We propose to directly predict prosody from the linguistic representation in a target-speaker-dependent manner, referred to as target text prediction (TTP).
arXiv Detail & Related papers (2021-07-20T13:30:23Z)
- MIDAS at SemEval-2020 Task 10: Emphasis Selection using Label Distribution Learning and Contextual Embeddings [46.973153861604416]
This paper presents our submission to the SemEval 2020 - Task 10 on emphasis selection in written text.
We approach this emphasis selection problem as a sequence labeling task where we represent the underlying text with contextual embedding models.
Our best performing architecture is an ensemble of different models, which achieved an overall matching score of 0.783, placing us 15th out of 31 participating teams.
arXiv Detail & Related papers (2020-09-06T00:15:33Z)
- SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media [50.29389719723529]
We present the main findings and compare the results of SemEval-2020 Task 10, Emphasis Selection for Written Text in Visual Media.
The goal of this shared task is to design automatic methods for emphasis selection.
The analysis of systems submitted to the task indicates that BERT and RoBERTa were the most common choice of pre-trained models used.
arXiv Detail & Related papers (2020-08-07T17:24:53Z) - Context-based Transformer Models for Answer Sentence Selection [109.96739477808134]
In this paper, we analyze the role of the contextual information in the sentence selection task.
We propose a Transformer based architecture that leverages two types of contexts, local and global.
The results show that the combination of local and global contexts in a Transformer model significantly improves the accuracy in Answer Sentence Selection.
arXiv Detail & Related papers (2020-06-01T21:52:19Z) - UiO-UvA at SemEval-2020 Task 1: Contextualised Embeddings for Lexical
Semantic Change Detection [5.099262949886174]
This paper focuses on Subtask 2, ranking words by the degree of their semantic drift over time.
We find that the most effective algorithms rely on the cosine similarity between averaged token embeddings and the pairwise distances between token embeddings.
arXiv Detail & Related papers (2020-04-30T18:43:57Z)
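The cosine-similarity approach mentioned for the UiO-UvA entry can be sketched roughly as follows: average a word's contextual token embeddings within each time period, then score semantic change as one minus the cosine similarity of the two averages. This is a hypothetical minimal version for illustration, not the authors' implementation:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity of two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def change_score(embs_t1: np.ndarray, embs_t2: np.ndarray) -> float:
    """Semantic-change score: 1 minus the cosine similarity between the
    averaged token embeddings of a word's usages in two time periods."""
    return 1.0 - cosine(embs_t1.mean(axis=0), embs_t2.mean(axis=0))

# Toy usage: identical usage distributions score 0; opposed ones score 2.
usages = np.ones((10, 4))
score_same = change_score(usages, usages)
score_shifted = change_score(usages, -usages)
```

Ranking words by this score directly yields the Subtask 2 ordering by degree of semantic drift.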
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.