Hone as You Read: A Practical Type of Interactive Summarization
- URL: http://arxiv.org/abs/2105.02923v1
- Date: Thu, 6 May 2021 19:36:40 GMT
- Title: Hone as You Read: A Practical Type of Interactive Summarization
- Authors: Tanner Bohn and Charles X. Ling
- Abstract summary: We present HARE, a new task where reader feedback is used to optimize document summaries for personal interest.
This task is related to interactive summarization, where personalized summaries are produced following a long feedback stage.
We propose to gather minimally invasive feedback during the reading process to adapt to user interests and augment the document in real time.
- Score: 6.662800021628275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HARE, a new task where reader feedback is used to optimize
document summaries for personal interest during the normal flow of reading.
This task is related to interactive summarization, where personalized summaries
are produced following a long feedback stage in which users may read the same
sentences many times. However, this process severely interrupts the flow of
reading, making it impractical for leisurely reading. We propose to gather
minimally invasive feedback during the reading process to adapt to user
interests and augment the document in real time. Building on recent
advances in unsupervised summarization evaluation, we propose a suitable metric
for this task and use it to evaluate a variety of approaches. Our approaches
range from simple heuristics to preference learning, and their analysis provides
insight into this important task. Human evaluation additionally supports the
practicality of HARE. The code to reproduce this work is available at
https://github.com/tannerbohn/HoneAsYouRead.
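For intuition, the sketch below mocks up the kind of in-reading feedback loop the abstract describes: the reader gives a lightweight signal on each shown sentence, and the system skips upcoming sentences that look unlike anything the reader has approved. Everything here is illustrative and assumed, not taken from the paper's code: the function names, the bag-of-words cosine similarity, the 0.1 skip threshold, and the binary feedback signal are stand-ins for the range of strategies (simple heuristics through preference learning) the paper actually evaluates; see the linked repository for the real implementation.

```python
# Illustrative sketch only; all names and heuristics here are assumptions,
# not the HARE reference implementation (see the repository linked above).
import math
import re
from collections import Counter


def bag_of_words(sentence):
    """Represent a sentence as a lowercase word-count vector."""
    return Counter(re.findall(r"[a-z']+", sentence.lower()))


def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values()))
    den *= math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def hone_as_you_read(sentences, feedback, budget=3, threshold=0.1):
    """Show sentences in reading order, skipping those that look unlike
    anything the reader has approved so far.

    `feedback(sentence)` returns True/False and stands in for a
    minimally invasive signal such as a tap or a skip.
    """
    liked = []  # word-count profiles of approved sentences
    shown = []
    for sentence in sentences:
        vec = bag_of_words(sentence)
        # Once a profile exists, skip sentences far from every liked one
        # (a deliberately simple heuristic for this sketch).
        if liked and max(cosine(vec, l) for l in liked) < threshold:
            continue
        shown.append(sentence)
        if feedback(sentence):
            liked.append(vec)
        if len(shown) >= budget:
            break
    return shown


if __name__ == "__main__":
    doc = [
        "Interactive summarization usually needs a long feedback stage.",
        "The weather in the example city was sunny all week.",
        "HARE gathers lightweight feedback during normal reading instead.",
        "Summaries adapt to reader feedback in real time.",
    ]
    # A stand-in reader who approves anything about summarization.
    print(hone_as_you_read(doc, feedback=lambda s: "summar" in s.lower()))
```

On this toy document, the off-topic weather sentence is skipped once the reader approves the first sentence, while the two remaining on-topic sentences are kept within the three-sentence budget.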
Related papers
- Annotator in the Loop: A Case Study of In-Depth Rater Engagement to Create a Bridging Benchmark Dataset [1.825224193230824]
We describe a novel, collaborative, and iterative annotator-in-the-loop annotation methodology.
Our findings indicate that collaborative engagement with annotators can enhance annotation methods.
arXiv Detail & Related papers (2024-08-01T19:11:08Z)
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z)
- Narrative Action Evaluation with Prompt-Guided Multimodal Interaction [60.281405999483]
Narrative action evaluation (NAE) aims to generate professional commentary that evaluates the execution of an action.
NAE is a more challenging task because it requires both narrative flexibility and evaluation rigor.
We propose a prompt-guided multimodal interaction framework to facilitate the interaction between different modalities of information.
arXiv Detail & Related papers (2024-04-22T17:55:07Z)
- Previously on the Stories: Recap Snippet Identification for Story Reading [51.641565531840186]
We propose the first benchmark on this useful task called Recap Snippet Identification with a hand-crafted evaluation dataset.
Our experiments show that the proposed task is challenging for PLMs, LLMs, and the proposed methods, as it requires a deep understanding of the plot correlation between snippets.
arXiv Detail & Related papers (2024-02-11T18:27:14Z)
- Summarization with Graphical Elements [55.5913491389047]
We propose a new task: summarization with graphical elements.
We collect a high-quality, human-labeled dataset to support research into the task.
arXiv Detail & Related papers (2022-04-15T17:16:41Z)
- Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback [15.22874706089491]
We introduce a new framework to train summarization models with preference feedback interactively.
By properly leveraging offline data and a novel reward model, we improve performance in terms of ROUGE scores and sample efficiency.
arXiv Detail & Related papers (2022-04-12T03:56:59Z)
- FineDiving: A Fine-grained Dataset for Procedure-aware Action Quality Assessment [93.09267863425492]
We argue that understanding both high-level semantics and internal temporal structures of actions in competitive sports videos is the key to making predictions accurate and interpretable.
We construct a new fine-grained dataset, called FineDiving, developed on diverse diving events with detailed annotations on action procedures.
arXiv Detail & Related papers (2022-04-07T17:59:32Z)
- Adaptive Summaries: A Personalized Concept-based Summarization Approach by Learning from Users' Feedback [0.0]
This paper proposes an interactive concept-based summarization model, called Adaptive Summaries.
The system gradually learns from the information users provide as they give feedback in an iterative loop.
It helps users make high-quality summaries based on their preferences by maximizing the user-desired content in the generated summaries.
arXiv Detail & Related papers (2020-12-24T18:27:50Z)
- Read what you need: Controllable Aspect-based Opinion Summarization of Tourist Reviews [23.7107052882747]
We argue the need and propose a solution for generating personalized aspect-based opinion summaries from online tourist reviews.
We let readers decide and control several attributes of the summary, such as its length and the specific aspects of interest.
Specifically, we take an unsupervised approach to extract coherent aspects from tourist reviews posted on TripAdvisor.
arXiv Detail & Related papers (2020-06-08T15:03:38Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.