Adaptive Summaries: A Personalized Concept-based Summarization Approach
by Learning from Users' Feedback
- URL: http://arxiv.org/abs/2012.13387v1
- Date: Thu, 24 Dec 2020 18:27:50 GMT
- Title: Adaptive Summaries: A Personalized Concept-based Summarization Approach
by Learning from Users' Feedback
- Authors: Samira Ghodratnama and Mehrdad Zakershahrak and Fariborz Sobhanmanesh
- Abstract summary: This paper proposes an interactive concept-based summarization model, called Adaptive Summaries.
The system gradually learns from the information users provide while interacting with it, through feedback in an iterative loop.
It helps users make high-quality summaries based on their preferences by maximizing the user-desired content in the generated summaries.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficiently exploring tremendous amounts of data to make a decision,
similar to answering a complicated question, is challenging in many real-world
application scenarios. In this context, automatic summarization is of
substantial importance, as it provides the foundation for big data analytics.
Traditional summarization approaches optimize the system to produce a single
short, static summary intended to fit all users; they do not consider the
subjectivity of summarization, i.e., what is deemed valuable by different
users, which makes these approaches impractical in real-world use cases. This
paper proposes an interactive concept-based summarization model, called
Adaptive Summaries, that helps users build their desired summary instead of
producing a single inflexible one. The system gradually learns from the
information users provide while interacting with it, through feedback in an
iterative loop. For each concept considered for inclusion in the summary,
users can choose an accept or reject action, state the importance of that
concept from their perspective, and give the confidence level of their
feedback. The proposed approach guarantees interactive speed to keep the user
engaged in the process. Furthermore, it eliminates the need for reference
summaries, which are a challenging requirement for summarization tasks.
Evaluations show that Adaptive Summaries helps users make high-quality
summaries based on their preferences by maximizing the user-desired content in
the generated summaries.
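The feedback loop the abstract describes — accept/reject actions on concepts, each with a user-stated importance and a confidence level — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the update rule, the function names, and the greedy sentence selection are all assumptions made for the example.

```python
def update_weight(weight, action, importance, confidence):
    """Move a concept's weight toward the user-stated importance on an
    'accept' action, or toward its negation on 'reject', scaled by the
    confidence of the feedback (0.0 = ignore, 1.0 = fully adopt)."""
    target = importance if action == "accept" else -importance
    return weight + confidence * (target - weight)

def summarize(sentences, concept_weights, word_budget):
    """Greedily pick the sentences whose concepts carry the most
    user-desired weight, until the word budget is exhausted."""
    scored = sorted(
        sentences,
        key=lambda s: sum(concept_weights.get(c, 0.0) for c in s["concepts"]),
        reverse=True,
    )
    summary, used = [], 0
    for s in scored:
        n_words = len(s["text"].split())
        if used + n_words <= word_budget:
            summary.append(s["text"])
            used += n_words
    return summary
```

Iterating these two steps — regenerate the summary, collect accept/reject feedback on its concepts, update the weights — is one plausible reading of the interactive loop; it also shows why no reference summary is needed, since the objective is defined entirely by the learned weights.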
Related papers
- UserSumBench: A Benchmark Framework for Evaluating User Summarization Approaches [25.133460380551327]
Large language models (LLMs) have shown remarkable capabilities in generating user summaries from a long list of raw user activity data.
These summaries capture essential user information such as preferences and interests, and are invaluable for personalization applications.
However, the development of new summarization techniques is hindered by the lack of ground-truth labels, the inherent subjectivity of user summaries, and human evaluation.
arXiv Detail & Related papers (2024-08-30T01:56:57Z)
- Retrieval Augmentation via User Interest Clustering [57.63883506013693]
Industrial recommender systems are sensitive to the patterns of user-item engagement.
We propose a novel approach that efficiently constructs user interest representations and enables low computational cost inference.
Our approach has been deployed in multiple products at Meta, facilitating short-form video related recommendation.
arXiv Detail & Related papers (2024-08-07T16:35:10Z)
- SumRecom: A Personalized Summarization Approach by Learning from Users' Feedback [0.6629765271909505]
We propose a solution to a substantial and challenging problem in summarization, i.e., recommending a summary for a specific user.
The proposed approach, called SumRecom, brings the human into the loop and focuses on three aspects: personalization, interaction, and learning the user's interest without the need for reference summaries.
arXiv Detail & Related papers (2024-08-02T22:33:59Z)
- Towards Enhancing Coherence in Extractive Summarization: Dataset and Experiments with LLMs [70.15262704746378]
We propose a systematically created human-annotated dataset consisting of coherent summaries for five publicly available datasets and natural language user feedback.
Preliminary experiments with Falcon-40B and Llama-2-13B show significant performance improvements (10% Rouge-L) in terms of producing coherent summaries.
arXiv Detail & Related papers (2024-07-05T20:25:04Z)
- AaKOS: Aspect-adaptive Knowledge-based Opinion Summarization [5.4138734778206]
The rapid growth of information on the Internet has led to an overwhelming amount of opinions and comments on various activities, products, and services.
This makes it difficult and time-consuming for users to process all the available information when making decisions.
We propose an Aspect-adaptive Knowledge-based Opinion Summarization model for product reviews.
arXiv Detail & Related papers (2023-05-26T03:44:35Z)
- Human-in-the-loop Abstractive Dialogue Summarization [61.4108097664697]
We propose to incorporate different levels of human feedback into the training process.
This will enable us to guide the models to capture the behaviors humans care about for summaries.
arXiv Detail & Related papers (2022-12-19T19:11:27Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- iFacetSum: Coreference-based Interactive Faceted Summarization for Multi-Document Exploration [63.272359227081836]
iFacetSum integrates interactive summarization together with faceted search.
Fine-grained facets are automatically produced based on cross-document coreference pipelines.
arXiv Detail & Related papers (2021-09-23T20:01:11Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- Hone as You Read: A Practical Type of Interactive Summarization [6.662800021628275]
We present HARE, a new task where reader feedback is used to optimize document summaries for personal interest.
This task is related to interactive summarization, where personalized summaries are produced following a long feedback stage.
We propose to gather minimally-invasive feedback during the reading process to adapt to user interests and augment the document in real-time.
arXiv Detail & Related papers (2021-05-06T19:36:40Z)
- Large-scale Hybrid Approach for Predicting User Satisfaction with Conversational Agents [28.668681892786264]
Measuring user satisfaction level is a challenging task, and a critical component in developing large-scale conversational agent systems.
Human annotation based approaches are easier to control, but hard to scale.
A novel alternative approach is to collect users' direct feedback via a feedback elicitation system embedded in the conversational agent system.
arXiv Detail & Related papers (2020-05-29T16:29:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.