WNUT-2020 Task 1 Overview: Extracting Entities and Relations from Wet
Lab Protocols
- URL: http://arxiv.org/abs/2010.14576v3
- Date: Thu, 19 Nov 2020 03:06:32 GMT
- Title: WNUT-2020 Task 1 Overview: Extracting Entities and Relations from Wet
Lab Protocols
- Authors: Jeniya Tabassum, Sydney Lee, Wei Xu, Alan Ritter
- Abstract summary: This paper presents the results of the wet lab information extraction task at WNUT 2020.
We outline the task, data annotation process, corpus statistics, and provide a high-level overview of the participating systems for each sub task.
- Score: 17.782052529098927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the results of the wet lab information extraction task at
WNUT 2020. This task consisted of two sub tasks: (1) a Named Entity Recognition
(NER) task with 13 participants and (2) a Relation Extraction (RE) task with 2
participants. We outline the task, data annotation process, corpus statistics,
and provide a high-level overview of the participating systems for each sub
task.
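To make the two subtasks concrete, the sketch below shows one possible annotation of a single protocol step: token-level BIO tags for the NER subtask and typed links between entity spans for the RE subtask. The example sentence, the entity types (Action, Reagent, Amount) and the relation labels (Acts-on, Site, Measure) are illustrative assumptions loosely based on the Wet Lab Protocols corpus, not necessarily the shared task's exact label inventory.

```python
# Illustrative annotation of one wet lab protocol step (label names are assumptions).

tokens = ["Add", "50", "uL", "of", "lysis", "buffer", "to", "the", "sample", "."]

# Subtask 1 (NER): one BIO tag per token.
bio_tags = ["B-Action", "B-Amount", "I-Amount", "O",
            "B-Reagent", "I-Reagent", "O", "O", "B-Reagent", "O"]
assert len(tokens) == len(bio_tags)

# Subtask 2 (RE): typed, directed relations between entity spans.
# Spans are (start, end) token offsets, end exclusive.
entities = {
    "T1": ("Action",  (0, 1)),   # "Add"
    "T2": ("Amount",  (1, 3)),   # "50 uL"
    "T3": ("Reagent", (4, 6)),   # "lysis buffer"
    "T4": ("Reagent", (8, 9)),   # "sample"
}
relations = [
    ("Acts-on", "T1", "T3"),     # Add -> lysis buffer
    ("Site",    "T1", "T4"),     # Add -> sample
    ("Measure", "T3", "T2"),     # lysis buffer -> 50 uL
]

for label, head, tail in relations:
    _, (h_start, h_end) = entities[head]
    _, (t_start, t_end) = entities[tail]
    print(f"{label}: {' '.join(tokens[h_start:h_end])} -> {' '.join(tokens[t_start:t_end])}")
```

A system for subtask 1 predicts bio_tags from tokens; a system for subtask 2 is given the gold entity spans and predicts the relations list.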
Related papers
- Overview of the PromptCBLUE Shared Task in CHIP2023 [26.56584015791646]
This paper presents an overview of the PromptCBLUE shared task held at the CHIP-2023 conference.
It provides a good testbed for Chinese open-domain or medical-domain large language models (LLMs) in general medical natural language processing.
This paper describes the tasks, the datasets, evaluation metrics, and the top systems for both tasks.
arXiv Detail & Related papers (2023-12-29T09:05:00Z) - ArAIEval Shared Task: Persuasion Techniques and Disinformation Detection
in Arabic Text [41.3267575540348]
We present an overview of the ArAIEval shared task, organized as part of the first ArabicNLP 2023 conference, co-located with EMNLP 2023.
ArAIEval offers two tasks over Arabic text: (i) persuasion technique detection, focusing on identifying persuasion techniques in tweets and news articles, and (ii) disinformation detection in binary and multiclass setups over tweets.
A total of 20 teams participated in the final evaluation phase, with 14 and 16 teams participating in Tasks 1 and 2, respectively.
arXiv Detail & Related papers (2023-11-06T15:21:19Z) - BLP-2023 Task 2: Sentiment Analysis [7.725694295666573]
We present an overview of the BLP Sentiment Shared Task, organized as part of the inaugural BLP 2023 workshop.
The task is defined as the detection of sentiment in a given piece of social media text.
This paper provides a detailed account of the task setup, including dataset development and evaluation setup.
arXiv Detail & Related papers (2023-10-24T21:00:41Z) - Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of
Biomedical Research Articles [47.04555835353173]
This paper presents the results of the shared task on Lay Summarisation of Biomedical Research Articles (BioLaySumm) hosted at the BioNLP Workshop at ACL 2023.
The goal of this shared task is to develop abstractive summarisation models capable of generating "lay summaries", i.e., summaries that are comprehensible to a non-technical audience.
In addition to overall results, we report on the setup and insights from the BioLaySumm shared task, which attracted a total of 20 participating teams across both subtasks.
arXiv Detail & Related papers (2023-09-29T15:43:42Z) - SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence
Embedding [12.843166994677286]
This paper presents the shared task on Multilingual Idiomaticity Detection and Sentence Embedding.
It consists of two subtasks: (a) a binary classification one aimed at identifying whether a sentence contains an idiomatic expression, and (b) a task based on semantic text similarity which requires the model to adequately represent potentially idiomatic expressions in context.
The task had close to 100 registered participants organised into twenty-five teams, making over 650 and 150 submissions in the practice and evaluation phases, respectively.
arXiv Detail & Related papers (2022-04-21T12:20:52Z) - IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument
Mining Tasks [59.457948080207174]
In this work, we introduce a comprehensive and large dataset named IAM, which can be applied to a series of argument mining tasks.
Nearly 70k sentences in the dataset are fully annotated based on their argument properties.
We propose two new integrated argument mining tasks associated with the debate preparation process: (1) claim extraction with stance classification (CESC) and (2) claim-evidence pair extraction (CEPE).
arXiv Detail & Related papers (2022-03-23T08:07:32Z) - Transfer Learning in Conversational Analysis through Reusing
Preprocessing Data as Supervisors [52.37504333689262]
Using noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks could improve the performance of the primary task when learned during the same training run.
arXiv Detail & Related papers (2021-12-02T08:40:42Z) - Zero-Shot Information Extraction as a Unified Text-to-Triple Translation [56.01830747416606]
We cast a suite of information extraction tasks into a text-to-triple translation framework.
We formalize the task as a translation between task-specific input text and output triples.
We study the zero-shot performance of this framework on open information extraction.
arXiv Detail & Related papers (2021-09-23T06:54:19Z) - Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient update would affect another task's loss (a toy sketch of this affinity measure appears after this list).
arXiv Detail & Related papers (2021-09-10T02:01:43Z) - ICDAR 2021 Competition on Components Segmentation Task of Document
Photos [63.289361617237944]
Three challenge tasks were proposed, each entailing a different segmentation assignment to be performed on a provided dataset.
The collected data are from several types of Brazilian ID documents, whose personal information was conveniently replaced.
Different Deep Learning models were applied by the entrants with diverse strategies to achieve the best results in each of the tasks.
arXiv Detail & Related papers (2021-06-16T00:49:58Z)
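The task-grouping entry above ("Efficiently Identifying Task Groupings for Multi-Task Learning") describes measuring how one task's gradient step affects another task's loss. The toy sketch below illustrates that inter-task affinity idea with simple quadratic losses standing in for real networks; the variable names and the exact affinity formula are assumptions meant to mirror the abstract's description, not the paper's implementation.

```python
# Toy sketch of inter-task affinity: take a lookahead gradient step on the shared
# parameters using task i's loss and measure the relative drop in task j's loss.
import numpy as np

rng = np.random.default_rng(0)
dim, n_tasks, lr = 8, 3, 0.1

# Each task j has a toy loss L_j(theta) = 0.5 * ||theta - target_j||^2.
targets = rng.normal(size=(n_tasks, dim))
theta = rng.normal(size=dim)  # shared parameters, co-trained on all tasks

def loss(theta, j):
    return 0.5 * float(np.sum((theta - targets[j]) ** 2))

def grad(theta, j):
    return theta - targets[j]

# affinity[i, j] > 0 means a step on task i also reduces task j's loss.
affinity = np.zeros((n_tasks, n_tasks))
for i in range(n_tasks):
    lookahead = theta - lr * grad(theta, i)  # one-step update using task i only
    for j in range(n_tasks):
        if i != j:
            affinity[i, j] = 1.0 - loss(lookahead, j) / loss(theta, j)

print(np.round(affinity, 3))
# Tasks with mutually high affinity are candidates to be grouped and trained
# together in a single multi-task network.
```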
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.