Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of
Biomedical Research Articles
- URL: http://arxiv.org/abs/2309.17332v2
- Date: Wed, 25 Oct 2023 08:16:43 GMT
- Title: Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of
Biomedical Research Articles
- Authors: Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew
Shardlow, Sophia Ananiadou, Chenghua Lin
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the results of the shared task on Lay Summarisation of
Biomedical Research Articles (BioLaySumm), hosted at the BioNLP Workshop at ACL
2023. The goal of this shared task is to develop abstractive summarisation
models capable of generating "lay summaries" (i.e., summaries that are
comprehensible to non-technical audiences) in both a controllable and
non-controllable setting. There are two subtasks: 1) Lay Summarisation, where
the goal is for participants to build models for lay summary generation only,
given the full article text and the corresponding abstract as input; and 2)
Readability-controlled Summarisation, where the goal is for participants to
train models to generate both the technical abstract and the lay summary, given
an article's main text as input. In addition to overall results, we report on
the setup and insights from the BioLaySumm shared task, which attracted a total
of 20 participating teams across both subtasks.
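The Readability-controlled Summarisation subtask presupposes some automatic measure of how accessible a generated summary is. One common proxy (an assumption here; the abstract above does not name the task's evaluation metrics) is the Flesch-Kincaid Grade Level, sketched below with a rough regex-based syllable counter:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count contiguous vowel groups; every word gets at least one.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A lay summary should score markedly lower (closer to a lower school grade) than the technical abstract of the same article; production evaluations would use a proper syllable dictionary rather than this heuristic.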
Related papers
- Overview of the BioLaySumm 2024 Shared Task on the Lay Summarization of Biomedical Research Articles [21.856049605149646]
This paper presents the setup and results of the second edition of the BioLaySumm shared task on the Lay Summarisation of Biomedical Research Articles.
We aim to build on the first edition's success by further increasing research interest in this important task and encouraging participants to explore novel approaches.
Overall, our results show that a broad range of innovative approaches were adopted by task participants, with a predictable shift towards the use of Large Language Models (LLMs).
arXiv Detail & Related papers (2024-08-16T07:00:08Z)
- Overview of the PromptCBLUE Shared Task in CHIP2023 [26.56584015791646]
This paper presents an overview of the PromptCBLUE shared task held in the CHIP-2023 Conference.
It provides a good testbed for Chinese open-domain or medical-domain large language models (LLMs) in general medical natural language processing.
This paper describes the tasks, the datasets, evaluation metrics, and the top systems for both tasks.
arXiv Detail & Related papers (2023-12-29T09:05:00Z)
- Summary-Oriented Vision Modeling for Multimodal Abstractive Summarization [63.320005222549646]
Multimodal abstractive summarization (MAS) aims to produce a concise summary given multimodal data (text and vision).
We propose to improve the summary quality through summary-oriented visual features.
Experiments on 44 languages, covering mid-high-, low-, and zero-resource scenarios, verify the effectiveness and superiority of the proposed approach.
arXiv Detail & Related papers (2022-12-15T09:05:26Z)
- LED down the rabbit hole: exploring the potential of global attention for biomedical multi-document summarisation [59.307534363825816]
We adapt PRIMERA to the biomedical domain by placing global attention on important biomedical entities.
We analyse the outputs of the 23 resulting models, and report patterns in the results related to the presence of additional global attention.
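In Longformer-style encoders such as LED, global attention is typically specified per token via a binary mask alongside the default local windowed attention. A minimal sketch of building such a mask from entity spans (the span indices and the first-token convention below are illustrative assumptions, not details from the paper):

```python
def global_attention_mask(num_tokens, entity_spans):
    """Build a 0/1 mask: 1 = global attention, 0 = local windowed attention.

    entity_spans: list of (start, end) token-index pairs (end exclusive)
    marking biomedical entity mentions. The first token (e.g. the BOS
    special token) conventionally also receives global attention.
    """
    mask = [0] * num_tokens
    mask[0] = 1  # global attention on the leading special token
    for start, end in entity_spans:
        for i in range(start, min(end, num_tokens)):
            mask[i] = 1
    return mask
```

In a real pipeline this list would be converted to a tensor and passed to the model along with the input IDs, so that entity tokens attend to, and are attended by, every position in the long document.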
arXiv Detail & Related papers (2022-09-19T01:13:42Z)
- Template-based Abstractive Microblog Opinion Summarisation [26.777997436856076]
We introduce the task of microblog opinion summarisation (MOS) and share a dataset of 3100 gold-standard opinion summaries.
The dataset contains summaries of tweets spanning a 2-year period and covers more topics than any other public Twitter summarisation dataset.
arXiv Detail & Related papers (2022-08-08T12:16:01Z)
- SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning [47.49596196559958]
This paper introduces SemEval-2021 shared task 4: Reading Comprehension of Abstract Meaning (ReCAM).
Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts.
Subtask 1 aims to evaluate how well a system can model concepts that cannot be directly perceived in the physical world.
Subtask 2 focuses on models' ability to comprehend nonspecific concepts located high in a hypernym hierarchy.
Subtask 3 aims to provide some insights into models' generalizability over the two types of abstractness.
arXiv Detail & Related papers (2021-05-31T11:04:17Z)
- Controllable Abstractive Dialogue Summarization with Sketch Supervision [56.59357883827276]
Our model achieves state-of-the-art performance on SAMSum, the largest dialogue summarization corpus, with a ROUGE-L score as high as 50.79.
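ROUGE-L, the metric cited above, scores a candidate against a reference by the length of their longest common subsequence (LCS). A minimal sketch of the F-measure variant over whitespace tokens (real evaluations typically add stemming and proper tokenization, omitted here):

```python
def lcs_length(a, b):
    # Dynamic-programming longest common subsequence length, O(len(a) * len(b)).
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if x == y else max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def rouge_l_f1(candidate, reference):
    # ROUGE-L F-measure: harmonic mean of LCS-based precision and recall.
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

Reported scores are usually scaled by 100, so a value like 50.79 corresponds to an F-measure of roughly 0.51 under this formulation.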
arXiv Detail & Related papers (2021-05-28T19:05:36Z)
- Topic-Centric Unsupervised Multi-Document Summarization of Scientific and News Articles [3.0504782036247438]
We propose a topic-centric unsupervised multi-document summarization framework to generate abstractive summaries.
The proposed algorithm generates an abstractive summary by developing salient language unit selection and text generation techniques.
Our approach matches the state-of-the-art when evaluated on automated extractive evaluation metrics and performs better for abstractive summarization on five human evaluation metrics.
arXiv Detail & Related papers (2020-11-03T04:04:21Z)
- Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization [50.939885303186195]
We build a lay summary generation system based on the BART model.
We leverage sentence labels as extra supervision signals to improve the performance of lay summarization.
arXiv Detail & Related papers (2020-10-19T06:36:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.