Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization
- URL: http://arxiv.org/abs/2010.09252v1
- Date: Mon, 19 Oct 2020 06:36:11 GMT
- Title: Dimsum @LaySumm 20: BART-based Approach for Scientific Document Summarization
- Authors: Tiezheng Yu and Dan Su and Wenliang Dai and Pascale Fung
- Abstract summary: We build a lay summary generation system based on the BART model.
We leverage sentence labels as extra supervision signals to improve the performance of lay summarization.
- Score: 50.939885303186195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lay summarization aims to generate lay summaries of scientific papers
automatically. It is an essential task that can increase the relevance of
science for all of society. In this paper, we build a lay summary generation
system based on the BART model. We leverage sentence labels as extra
supervision signals to improve the performance of lay summarization. In the
CL-LaySumm 2020 shared task, our model achieves a 46.00% ROUGE-1 F1 score.
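The core recipe described in the abstract, fine-tuning BART for generation while using per-sentence labels as an extra supervision signal, can be sketched as follows. This is a minimal illustration, not the authors' released code: the auxiliary classification head, the even-split sentence pooling, and the 0.5 loss weight are all assumptions made here for brevity.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
# Hypothetical auxiliary head: binary "belongs in the lay summary" per sentence.
sent_classifier = torch.nn.Linear(model.config.d_model, 2)

source_sents = ["We study protein folding.", "Our results improve on prior work."]
sent_labels = torch.tensor([1, 0])  # assumed gold sentence labels
lay_summary = "Scientists studied how proteins fold."

enc = tokenizer(" ".join(source_sents), return_tensors="pt", truncation=True)
dec = tokenizer(lay_summary, return_tensors="pt", truncation=True)

out = model(**enc, labels=dec["input_ids"])   # standard seq2seq cross-entropy
hidden = out.encoder_last_hidden_state        # (1, seq_len, d_model)

# Crude sentence pooling for illustration: split token positions evenly
# across sentences; a real system would use actual sentence boundaries.
chunks = torch.chunk(hidden[0], len(source_sents), dim=0)
sent_reprs = torch.stack([c.mean(dim=0) for c in chunks])
aux_loss = torch.nn.functional.cross_entropy(sent_classifier(sent_reprs), sent_labels)

loss = out.loss + 0.5 * aux_loss  # 0.5 weight is an assumption, not from the paper
loss.backward()
```

In practice the per-sentence labels would typically be derived by aligning source sentences against the reference lay summary (e.g., by ROUGE overlap) rather than annotated by hand.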
Related papers
- BeeManc at the PLABA Track of TAC-2024: RoBERTa for task 1 -- LLaMA3.1 and GPT-4o for task 2 [11.380751114611368]
This report contains two sections corresponding to the two sub-tasks in PLABA 2024.
In task one, we applied fine-tuned RoBERTa-Base models to identify and classify difficult terms, jargon, and acronyms in biomedical abstracts and reported the F1 score.
In task two, we leveraged Llama-3.1-70B-Instruct and GPT-4o with one-shot prompts to complete the abstract adaptation and reported scores in BLEU, SARI, BERTScore, LENS, and SALSA.
arXiv Detail & Related papers (2024-11-11T21:32:06Z)
- Write Summary Step-by-Step: A Pilot Study of Stepwise Summarization [48.57273563299046]
We propose the task of Stepwise Summarization, which aims to generate a new appended summary each time a new document arrives.
The appended summary should not only summarize the newly added content but also be coherent with the previous summary.
We show that our proposed method, SSG, achieves state-of-the-art performance in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2024-06-08T05:37:26Z)
- Information-Theoretic Distillation for Reference-less Summarization [67.51150817011617]
We present a novel framework to distill a powerful summarizer based on the information-theoretic objective for summarization.
We start from Pythia-2.8B as the teacher model, which is not yet capable of summarization.
We arrive at a compact but powerful summarizer with only 568M parameters that performs competitively against ChatGPT.
arXiv Detail & Related papers (2024-03-20T17:42:08Z)
- Overview of the BioLaySumm 2023 Shared Task on Lay Summarization of Biomedical Research Articles [47.04555835353173]
This paper presents the results of the shared task on Lay Summarisation of Biomedical Research Articles (BioLaySumm) hosted at the BioNLP Workshop at ACL 2023.
The goal of this shared task is to develop abstractive summarisation models capable of generating "lay summaries", i.e., summaries that are comprehensible to non-technical audiences.
In addition to overall results, we report on the setup and insights from the BioLaySumm shared task, which attracted a total of 20 participating teams across both subtasks.
arXiv Detail & Related papers (2023-09-29T15:43:42Z)
- Scientific Paper Extractive Summarization Enhanced by Citation Graphs [50.19266650000948]
We focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.
Preliminary results demonstrate that the citation graph is helpful even in a simple unsupervised framework.
Motivated by this, we propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available.
arXiv Detail & Related papers (2022-12-08T11:53:12Z)
- COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization [84.70895015194188]
We propose COLO, a contrastive-learning-based re-ranking framework for one-stage summarization.
COLO boosts the extractive and abstractive results of one-stage systems on the CNN/DailyMail benchmark to 44.58 and 46.33 ROUGE-1, respectively.
arXiv Detail & Related papers (2022-09-29T06:11:21Z)
- Bengali Abstractive News Summarization (BANS): A Neural Attention Approach [0.8793721044482612]
We present a sequence-to-sequence Long Short-Term Memory (LSTM) model with encoder-decoder attention.
Our proposed system deploys local attention and produces long word sequences with lucid, human-like sentences.
We also prepared a dataset of more than 19k articles and corresponding human-written summaries collected from bangla.bdnews24.com.
arXiv Detail & Related papers (2020-12-03T08:17:31Z)
- SciSummPip: An Unsupervised Scientific Paper Summarization Pipeline [39.46301416663324]
We describe our text summarization system, SciSummPip, inspired by SummPip (Zhao et al., 2020).
Our SciSummPip includes a transformer-based language model, SciBERT, for contextual sentence representation.
Our work differs from previous methods in that content selection and a summary-length constraint are applied to adapt to the scientific domain.
arXiv Detail & Related papers (2020-10-19T03:29:21Z)
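As a companion illustration for the SciSummPip entry above, here is a minimal sketch of the contextual sentence representation step that such a pipeline builds on. The mean-pooling over non-padding tokens is an assumption made for this sketch, not necessarily the pipeline's exact procedure.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

sentences = [
    "We propose a graph-based pipeline for multi-document summarization.",
    "Sentence embeddings are clustered before compression.",
]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean-pool over real tokens only, masking out padding positions.
mask = batch["attention_mask"].unsqueeze(-1)
sent_embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(sent_embeddings.shape)  # torch.Size([2, 768])
```

Downstream steps in a SummPip-style system (sentence-graph construction, clustering, and per-cluster compression) would then operate on these embeddings.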