Summarization with Graphical Elements
- URL: http://arxiv.org/abs/2204.07551v1
- Date: Fri, 15 Apr 2022 17:16:41 GMT
- Title: Summarization with Graphical Elements
- Authors: Maartje ter Hoeve, Julia Kiseleva, Maarten de Rijke
- Abstract summary: We propose a new task: summarization with graphical elements.
We collect a high-quality human-labeled dataset to support research into the task.
- Score: 55.5913491389047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic text summarization has experienced substantial progress in recent
years. With this progress, the question has arisen whether the types of
summaries that are typically generated by automatic summarization models align
with users' needs. Ter Hoeve et al. (2020) answer this question negatively.
Among other recommendations, they suggest focusing on generating summaries with
more graphical elements. This is in line with what we know from the
psycholinguistics literature about how humans process text. Motivated by these
two angles, we propose a new task: summarization with graphical elements,
and we verify that these summaries are helpful for a critical mass of people.
We collect a high-quality human-labeled dataset to support research into the
task. We present a number of baseline methods that show that the task is
interesting and challenging. Hence, with this work we hope to inspire a new
line of research within the automatic summarization community.
Related papers
- What Makes a Good Story and How Can We Measure It? A Comprehensive Survey of Story Evaluation [57.550045763103334]
Evaluating a story can be more challenging than other generation evaluation tasks.
We first summarize existing storytelling tasks, including text-to-text, visual-to-text, and text-to-visual.
We propose a taxonomy to organize evaluation metrics that have been developed or can be adopted for story evaluation.
arXiv Detail & Related papers (2024-08-26T20:35:42Z)
- Controlled Text Reduction [15.102190738450092]
We formalize Controlled Text Reduction as a standalone task.
A model then needs to generate a coherent text that includes all and only the target information.
arXiv Detail & Related papers (2022-10-24T17:59:03Z)
- Automatic Text Summarization Methods: A Comprehensive Review [1.6114012813668934]
This study provides a detailed analysis of text summarization concepts such as summarization approaches, techniques used, standard datasets, evaluation metrics and future scopes for research.
arXiv Detail & Related papers (2022-03-03T10:45:00Z)
- AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization [73.91543616777064]
Community Question Answering (CQA) fora such as Stack Overflow and Yahoo! Answers contain a rich resource of answers to a wide range of community-based questions.
One goal of answer summarization is to produce a summary that reflects the range of answer perspectives.
This work introduces a novel dataset of 4,631 CQA threads for answer summarization, curated by professional linguists.
arXiv Detail & Related papers (2021-11-11T21:48:02Z)
- Recursively Summarizing Books with Human Feedback [10.149048526411434]
We present progress on the task of abstractive summarization of entire fiction novels.
We use models trained on smaller parts of the task to assist humans in giving feedback on the broader task.
We achieve state-of-the-art results on the recent BookSum dataset for book-length summarization.
arXiv Detail & Related papers (2021-09-22T17:34:18Z)
- EmailSum: Abstractive Email Thread Summarization [105.46012304024312]
We develop an abstractive Email Thread Summarization (EmailSum) dataset.
This dataset contains human-annotated short (30 words) and long (100 words) summaries of 2549 email threads.
Our results reveal the key challenges of current abstractive summarization models in this task.
arXiv Detail & Related papers (2021-07-30T15:13:14Z)
- What Makes a Good Summary? Reconsidering the Focus of Automatic Summarization [49.600619575148706]
We find that the current focus of the field does not fully align with participants' wishes.
Based on our findings, we argue that it is important to adopt a broader perspective on automatic summarization.
arXiv Detail & Related papers (2020-12-14T15:12:35Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Exploring Content Selection in Summarization of Novel Chapters [19.11830806780343]
We present a new summarization task, generating summaries of novel chapters using summary/chapter pairs from online study guides.
This is a harder task than the news summarization task, given the chapter length as well as the extreme paraphrasing and generalization found in the summaries.
We focus on extractive summarization, which requires the creation of a gold-standard set of extractive summaries.
arXiv Detail & Related papers (2020-05-04T20:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.