AaKOS: Aspect-adaptive Knowledge-based Opinion Summarization
- URL: http://arxiv.org/abs/2306.05537v1
- Date: Fri, 26 May 2023 03:44:35 GMT
- Title: AaKOS: Aspect-adaptive Knowledge-based Opinion Summarization
- Authors: Guan Wang, Weihua Li, Edmund M-K. Lai, Quan Bai
- Abstract summary: The rapid growth of information on the Internet has led to an overwhelming amount of opinions and comments on various activities, products, and services.
This makes it difficult and time-consuming for users to process all the available information when making decisions.
We propose an Aspect-adaptive Knowledge-based Opinion Summarization model for product reviews.
- Score: 5.4138734778206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of information on the Internet has led to an overwhelming
amount of opinions and comments on various activities, products, and services.
This makes it difficult and time-consuming for users to process all the
available information when making decisions. Text summarization, a Natural
Language Processing (NLP) task, has been widely explored to help users quickly
retrieve relevant information by generating short and salient content from long
or multiple documents. Recent advances in pre-trained language models, such as
ChatGPT, have demonstrated the potential of Large Language Models (LLMs) in
text generation. However, LLMs require massive amounts of data and resources
and are challenging to implement as offline applications. Furthermore, existing
text summarization approaches often lack the "adaptive" nature required to
capture diverse aspects in opinion summarization, which is particularly
detrimental to users with specific requirements or preferences. In this paper,
we propose an Aspect-adaptive Knowledge-based Opinion Summarization model for
product reviews, which effectively captures the adaptive nature required for
opinion summarization. The model generates aspect-oriented summaries given a
set of reviews for a particular product, efficiently providing users with
useful information on specific aspects they are interested in, ensuring the
generated summaries are more personalized and informative. Extensive
experiments have been conducted using real-world datasets to evaluate the
proposed model. The results demonstrate that our model outperforms
state-of-the-art approaches and is adaptive and efficient in generating
summaries that focus on particular aspects, enabling users to make
well-informed decisions and catering to their diverse interests and
preferences.
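The core interface described in the abstract, producing a summary conditioned on both a set of reviews and a requested aspect, can be illustrated with a minimal sketch. The snippet below assumes a generic pretrained seq2seq model from Hugging Face; the checkpoint name, the aspect-prefixed prompt format, and the `summarize_aspect` helper are illustrative assumptions, not the AaKOS architecture.

```python
# Minimal sketch of aspect-conditioned review summarization.
# Assumptions (not from the paper): a generic pretrained seq2seq model
# and a simple aspect-prefixed prompt format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # placeholder checkpoint, not AaKOS
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def summarize_aspect(reviews: list[str], aspect: str, max_new_tokens: int = 80) -> str:
    """Generate a short summary of `reviews` focused on `aspect`."""
    prompt = f"Summarize the following product reviews, focusing on {aspect}:\n" + "\n".join(reviews)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

reviews = [
    "Battery lasts two full days but the camera struggles in low light.",
    "Great battery life; charging is quick too.",
    "Photos are grainy at night, although daytime shots look fine.",
]
print(summarize_aspect(reviews, "battery life"))
print(summarize_aspect(reviews, "camera quality"))
```

Changing only the aspect string changes which content the summary focuses on, which is the "adaptive" behavior the abstract emphasizes; the paper's own model realizes this with an aspect-adaptive, knowledge-based architecture rather than a plain prompt.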
Related papers
- LLM-assisted Explicit and Implicit Multi-interest Learning Framework for Sequential Recommendation [50.98046887582194]
We propose an explicit and implicit multi-interest learning framework (EIMF) to model user interests on two levels: behavior and semantics.
The proposed EIMF framework effectively and efficiently combines small models with LLMs to improve the accuracy of multi-interest modeling.
arXiv Detail & Related papers (2024-11-14T13:00:23Z) - LFOSum: Summarizing Long-form Opinions with Large Language Models [7.839083566878183]
This paper introduces (1) a new dataset of long-form user reviews, each entity comprising over a thousand reviews, (2) two training-free LLM-based summarization approaches that scale to long inputs, and (3) automatic evaluation metrics.
Our dataset of user reviews is paired with in-depth and unbiased critical summaries by domain experts, serving as a reference for evaluation.
Our evaluation reveals that LLMs still face challenges in balancing sentiment and format adherence in long-form summaries, though open-source models can narrow the gap when relevant information is retrieved in a focused manner.
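The observation that open-source models improve when relevant information is retrieved in a focused manner suggests a simple retrieve-then-summarize pattern. The sketch below is an illustration of that pattern, not LFOSum's actual method: it ranks review sentences by TF-IDF similarity to an aspect query and keeps only the top ones; the final summarization call is left as a placeholder.

```python
# Illustrative retrieve-then-summarize sketch (not the LFOSum implementation):
# select the review sentences most similar to an aspect query before summarizing,
# so a smaller model sees only focused, relevant input.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_relevant(sentences: list[str], query: str, top_k: int = 10) -> list[str]:
    """Rank sentences by TF-IDF cosine similarity to the query and keep the top_k."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(sentences + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(scores, sentences), key=lambda pair: pair[0], reverse=True)
    return [sentence for _, sentence in ranked[:top_k]]

sentences = [
    "Check-in took over an hour and staff were unhelpful.",
    "The rooftop pool has a great view of the harbour.",
    "Rooms were spotless and the bed was very comfortable.",
    "Breakfast buffet was overpriced for what it offered.",
]
focused_input = retrieve_relevant(sentences, "room cleanliness and comfort", top_k=2)
# focused_input would then be passed to any summarizer, e.g. an LLM prompt.
print(focused_input)
```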
arXiv Detail & Related papers (2024-10-16T20:52:39Z) - UserSumBench: A Benchmark Framework for Evaluating User Summarization Approaches [25.133460380551327]
Large language models (LLMs) have shown remarkable capabilities in generating user summaries from a long list of raw user activity data.
These summaries capture essential user information such as preferences and interests, and are invaluable for personalization applications.
However, the development of new summarization techniques is hindered by the lack of ground-truth labels, the inherent subjectivity of user summaries, and the cost of human evaluation.
arXiv Detail & Related papers (2024-08-30T01:56:57Z) - Leveraging Large Language Models for Mobile App Review Feature Extraction [4.879919005707447]
This study explores the hypothesis that encoder-only large language models can enhance feature extraction from mobile app reviews.
By leveraging crowdsourced annotations from an industrial context, we redefine feature extraction as a supervised token classification task.
Empirical evaluations demonstrate that this method improves the precision and recall of extracted features and enhances performance efficiency.
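Framing feature extraction as supervised token classification is a standard pattern. A hedged sketch of that framing is shown below; the BIO label scheme and the encoder checkpoint are assumptions for illustration, not the paper's exact setup.

```python
# Sketch of review feature extraction as token classification (illustrative only;
# the label scheme and checkpoint are assumptions, not the paper's setup).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

LABELS = ["O", "B-FEATURE", "I-FEATURE"]  # hypothetical BIO tags for app features
MODEL_NAME = "bert-base-uncased"          # placeholder encoder-only checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

review = "The offline maps feature drains the battery quickly."
inputs = tokenizer(review, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)[0].tolist()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, LABELS[label_id])
# In practice the classification head would first be fine-tuned on annotated
# reviews; an untrained head produces arbitrary labels.
```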
arXiv Detail & Related papers (2024-08-02T07:31:57Z) - Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic, personalized generative framework that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z) - Bayesian Preference Elicitation with Language Models [82.58230273253939]
We introduce OPEN, a framework that uses Bayesian optimal experimental design (BOED) to guide the choice of informative questions and an LM to extract features.
In user studies, we find that OPEN outperforms existing LM- and BOED-based methods for preference elicitation.
arXiv Detail & Related papers (2024-03-08T18:57:52Z) - Exploring Large Language Model for Graph Data Understanding in Online
Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed
Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z) - Adaptive Summaries: A Personalized Concept-based Summarization Approach
by Learning from Users' Feedback [0.0]
This paper proposes an interactive concept-based summarization model, called Adaptive Summaries.
The system gradually learns from the information users provide as they interact with it through feedback in an iterative loop.
It helps users make high-quality summaries based on their preferences by maximizing the user-desired content in the generated summaries.
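The feedback loop described there can be illustrated with a small, self-contained sketch: concept weights start uniform, the summary greedily maximizes weighted concept coverage, and each round of accept/reject feedback nudges the weights. This is a simplified illustration of the general idea, not the Adaptive Summaries algorithm itself.

```python
# Simplified sketch of an interactive, concept-weighted summarizer
# (an illustration of the feedback-loop idea, not the paper's algorithm).

def build_summary(sentences: dict[str, set[str]], weights: dict[str, float], k: int = 2) -> list[str]:
    """Greedily pick k sentences that maximize the total weight of newly covered concepts."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(
            (s for s in sentences if s not in chosen),
            key=lambda s: sum(weights[c] for c in sentences[s] - covered),
            default=None,
        )
        if best is None:
            break
        chosen.append(best)
        covered |= sentences[best]
    return chosen

def apply_feedback(weights: dict[str, float], liked: set[str], disliked: set[str], step: float = 0.5) -> None:
    """Increase weights of concepts the user liked, decrease those they rejected."""
    for c in liked:
        weights[c] = weights.get(c, 1.0) + step
    for c in disliked:
        weights[c] = max(0.0, weights.get(c, 1.0) - step)

sentences = {
    "Battery easily lasts two days.": {"battery"},
    "The screen is bright outdoors.": {"screen"},
    "Camera is weak in low light.": {"camera"},
}
weights = {"battery": 1.0, "screen": 1.0, "camera": 1.0}
print(build_summary(sentences, weights))
apply_feedback(weights, liked={"camera"}, disliked={"screen"})
print(build_summary(sentences, weights))  # summary now shifts toward camera content
```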
arXiv Detail & Related papers (2020-12-24T18:27:50Z) - Read what you need: Controllable Aspect-based Opinion Summarization of
Tourist Reviews [23.7107052882747]
We argue the need for, and propose a solution to, generating personalized aspect-based opinion summaries from online tourist reviews.
We let our readers decide and control several attributes of the summary such as the length and specific aspects of interest.
Specifically, we take an unsupervised approach to extract coherent aspects from tourist reviews posted on TripAdvisor.
arXiv Detail & Related papers (2020-06-08T15:03:38Z) - Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.