Rehearse With User: Personalized Opinion Summarization via Role-Playing based on Large Language Models
- URL: http://arxiv.org/abs/2503.00449v1
- Date: Sat, 01 Mar 2025 11:05:01 GMT
- Title: Rehearse With User: Personalized Opinion Summarization via Role-Playing based on Large Language Models
- Authors: Yanyue Zhang, Yulan He, Deyu Zhou
- Abstract summary: Large language models face difficulties in personalized tasks involving long texts. By having the model act as the user, it can better understand the user's personalized needs. Our method can effectively improve the level of personalization in summaries generated by large models.
- Score: 29.870187698924852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized opinion summarization is crucial because it considers individual user interests while generating product summaries. Recent studies show that although large language models demonstrate powerful text summarization and evaluation capabilities without the need for training data, they face difficulties in personalized tasks involving long texts. To address this, we propose Rehearsal, a personalized opinion summarization framework based on LLM role-playing. By having the model act as the user, it can better understand the user's personalized needs. Additionally, a role-playing supervisor and a practice process are introduced to improve the role-playing ability of the LLMs, leading to a better expression of user needs. Furthermore, suggestions from virtual users are used to steer summary generation, ensuring that the generated summary includes information of interest to the user and thus achieving personalized summary generation. Experimental results demonstrate that our method can effectively improve the level of personalization in summaries generated by large models.
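The abstract describes a four-stage pipeline: role-play the user, refine the role-play via a supervisor and practice, collect the virtual user's suggestions, and condition summary generation on them. A minimal sketch of that control flow, assuming only a generic `llm(prompt)` callable (the function names, prompts, and the deterministic stub below are illustrative, not from the paper):

```python
# Hypothetical sketch of a Rehearsal-style pipeline. All names and prompt
# wordings are assumptions for illustration; the real framework's prompts
# and supervisor logic are described only at a high level in the abstract.

def make_llm_stub(responses):
    """Deterministic stand-in for an LLM call, so the sketch is runnable."""
    it = iter(responses)
    return lambda prompt: next(it)

def rehearse_and_summarize(llm, user_history, reviews, rounds=1):
    # 1. Role-play: the model impersonates the user from their history.
    persona = llm(f"Act as this user based on their past reviews:\n{user_history}")
    # 2. Practice: a supervisor critiques the role-play; the persona is revised.
    for _ in range(rounds):
        critique = llm(f"As a supervisor, critique this user impersonation:\n{persona}")
        persona = llm(f"Revise the impersonation given this critique:\n{critique}")
    # 3. Intervention: the virtual user states what the summary should cover.
    suggestions = llm(f"As the user ({persona}), list the aspects you care about in:\n{reviews}")
    # 4. Generation: the final summary is conditioned on those suggestions.
    return llm(f"Summarize these reviews, covering: {suggestions}\n{reviews}")

llm = make_llm_stub([
    "persona: values battery life",
    "critique: also mention price sensitivity",
    "persona: values battery life and price",
    "aspects: battery life, price",
    "Summary focused on battery life and price.",
])
summary = rehearse_and_summarize(llm, "past reviews...", "product reviews...")
print(summary)  # → Summary focused on battery life and price.
```

With a real LLM backend substituted for the stub, the `rounds` parameter would control how many supervisor/practice iterations refine the persona before it intervenes in generation.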
Related papers
- Personalized Graph-Based Retrieval for Large Language Models [51.7278897841697]
We propose a framework that leverages user-centric knowledge graphs to enrich personalization. By directly integrating structured user knowledge into the retrieval process and augmenting prompts with user-relevant context, PGraph enhances contextual understanding and output quality. We also introduce the Personalized Graph-based Benchmark for Text Generation, designed to evaluate personalized text generation tasks in real-world settings where user history is sparse or unavailable.
arXiv Detail & Related papers (2025-01-04T01:46:49Z) - UserSumBench: A Benchmark Framework for Evaluating User Summarization Approaches [25.133460380551327]
Large language models (LLMs) have shown remarkable capabilities in generating user summaries from a long list of raw user activity data.
These summaries capture essential user information such as preferences and interests, and are invaluable for personalization applications.
However, the development of new summarization techniques is hindered by the lack of ground-truth labels, the inherent subjectivity of user summaries, and the reliance on costly human evaluation.
arXiv Detail & Related papers (2024-08-30T01:56:57Z) - Towards Enhancing Coherence in Extractive Summarization: Dataset and Experiments with LLMs [70.15262704746378]
We propose a systematically created human-annotated dataset consisting of coherent summaries for five publicly available datasets and natural language user feedback.
Preliminary experiments with Falcon-40B and Llama-2-13B show significant performance improvements (10% Rouge-L) in terms of producing coherent summaries.
arXiv Detail & Related papers (2024-07-05T20:25:04Z) - Role-playing Prompt Framework: Generation and Evaluation [3.2845546753303867]
Large language models (LLMs) exhibit impressive proficiency in natural language generation, understanding user instructions, and emulating human-like language use.
This paper introduces a prompt-based framework designed to leverage GPT's capabilities for the generation of role-playing dialogue datasets.
arXiv Detail & Related papers (2024-06-02T06:09:56Z) - Persona-DB: Efficient Large Language Model Personalization for Response Prediction with Collaborative Data Refinement [79.2400720115588]
We introduce Persona-DB, a simple yet effective framework consisting of a hierarchical construction process to improve generalization across task contexts. In the evaluation of response prediction, Persona-DB demonstrates superior context efficiency, maintaining accuracy with a significantly reduced retrieval size. Our experiments also indicate a marked improvement of over 10% under cold-start scenarios, when users have extremely sparse data.
arXiv Detail & Related papers (2024-02-16T20:20:43Z) - RELIC: Investigating Large Language Model Responses using Self-Consistency [58.63436505595177]
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations.
We propose an interactive system that helps users gain insight into the reliability of the generated text.
arXiv Detail & Related papers (2023-11-28T14:55:52Z) - Integrating Summarization and Retrieval for Enhanced Personalization via Large Language Models [11.950478880423733]
Personalization is an essential factor in user experience with natural language processing (NLP) systems.
With the emergence of Large Language Models (LLMs), a key question is how to leverage these models to better personalize user experiences.
We propose a novel summary-augmented personalization with task-aware user summaries generated by LLMs.
arXiv Detail & Related papers (2023-10-30T23:40:41Z) - Unlocking the Potential of User Feedback: Leveraging Large Language Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach called User-Guided Response Optimization (UGRO) to combine it with a smaller task-oriented dialogue model.
This approach uses LLM as annotation-free user simulator to assess dialogue responses, combining them with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z) - AaKOS: Aspect-adaptive Knowledge-based Opinion Summarization [5.4138734778206]
The rapid growth of information on the Internet has led to an overwhelming amount of opinions and comments on various activities, products, and services.
This makes it difficult and time-consuming for users to process all the available information when making decisions.
We propose an Aspect-adaptive Knowledge-based Opinion Summarization model for product reviews.
arXiv Detail & Related papers (2023-05-26T03:44:35Z) - Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation methods like BLEU/ROUGE may not be able to adequately capture the above dimensions.
We propose a new evaluation framework based on LLMs, which provides a comprehensive evaluation framework by comparing generated text and reference text from both objective and subjective aspects.
arXiv Detail & Related papers (2023-03-27T10:40:59Z) - Adaptive Summaries: A Personalized Concept-based Summarization Approach by Learning from Users' Feedback [0.0]
This paper proposes an interactive concept-based summarization model, called Adaptive Summaries.
The system gradually learns from information that users provide as feedback in an iterative interaction loop.
It helps users make high-quality summaries based on their preferences by maximizing the user-desired content in the generated summaries.
arXiv Detail & Related papers (2020-12-24T18:27:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.