Aspect and Opinion Aware Abstractive Review Summarization with
Reinforced Hard Typed Decoder
- URL: http://arxiv.org/abs/2004.05755v1
- Date: Mon, 13 Apr 2020 03:35:29 GMT
- Title: Aspect and Opinion Aware Abstractive Review Summarization with
Reinforced Hard Typed Decoder
- Authors: Yufei Tian, Jianfei Yu, Jing Jiang
- Abstract summary: We propose a two-stage reinforcement learning approach, which first predicts the output word type from the three types, and then leverages the predicted word type to generate the final word distribution.
Results on two Amazon product review datasets demonstrate that our method can consistently outperform several strong baseline approaches based on ROUGE scores.
- Score: 18.894655634326423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study abstractive review summarization. Observing
that review summaries often consist of aspect words, opinion words and context
words, we propose a two-stage reinforcement learning approach, which first
predicts the output word type from the three types, and then leverages the
predicted word type to generate the final word distribution. Experimental
results on two Amazon product review datasets demonstrate that our method can
consistently outperform several strong baseline approaches based on ROUGE
scores.
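The two-stage decoding described in the abstract can be sketched as follows. This is a minimal illustration of one hard typed decoding step, not the authors' implementation; all tensor shapes, weight names, and the use of a shared vocabulary per type are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hard_typed_decode_step(h, W_type, type_vocab_projs):
    """One decoding step of a hard typed decoder (illustrative sketch).

    h               : decoder hidden state, shape (d,)
    W_type          : projection to the 3 word types, shape (3, d)
    type_vocab_projs: per-type projections to the vocab, each shape (V, d)
    """
    # Stage 1: predict the output word type (aspect / opinion / context).
    type_probs = softmax(W_type @ h)
    t = int(np.argmax(type_probs))  # hard selection of a single type

    # Stage 2: generate the word distribution conditioned on that type.
    word_probs = softmax(type_vocab_projs[t] @ h)
    return t, word_probs

rng = np.random.default_rng(0)
d, V = 8, 20
h = rng.normal(size=d)
W_type = rng.normal(size=(3, d))
projs = [rng.normal(size=(V, d)) for _ in range(3)]
t, p = hard_typed_decode_step(h, W_type, projs)
```

Because the type choice is a hard argmax rather than a soft mixture, it is not differentiable, which is one reason the paper trains with reinforcement learning.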
Related papers
- ASTE Transformer Modelling Dependencies in Aspect-Sentiment Triplet Extraction [2.07180164747172]
Aspect-Sentiment Triplet Extraction (ASTE) is a recently proposed task that consists of extracting (aspect phrase, opinion phrase, sentiment polarity) triples from a given sentence.
Recent state-of-the-art methods approach this task by first extracting all possible spans from a given sentence.
arXiv Detail & Related papers (2024-09-23T16:49:47Z)
- Hierarchical Indexing for Retrieval-Augmented Opinion Summarization [60.5923941324953]
We propose a method for unsupervised abstractive opinion summarization that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large Language Models (LLMs).
Our method, HIRO, learns an index structure that maps sentences to a path through a semantically organized discrete hierarchy.
At inference time, we populate the index and use it to identify and retrieve clusters of sentences containing popular opinions from input reviews.
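The populate-then-retrieve step that HIRO performs at inference time can be sketched in miniature. In HIRO the path for each sentence comes from a learned index over a semantic hierarchy; here the paths and sentences are invented for illustration.

```python
from collections import defaultdict

def populate_index(sentence_paths):
    """Group sentences by their (pre-computed) path through a discrete
    hierarchy. HIRO learns these paths; here they are given directly."""
    index = defaultdict(list)
    for sentence, path in sentence_paths:
        index[tuple(path)].append(sentence)
    return index

def popular_clusters(index, min_size=2):
    """Retrieve clusters large enough to represent a popular opinion."""
    return {path: sents for path, sents in index.items()
            if len(sents) >= min_size}

reviews = [
    ("battery lasts all day",    [0, 1]),
    ("great battery life",       [0, 1]),
    ("screen is too dim",        [0, 2]),
    ("the battery is excellent", [0, 1]),
]
index = populate_index(reviews)
popular = popular_clusters(index)  # only the 3-sentence battery cluster
```

The retrieved clusters would then be handed to an LLM to be verbalized as a coherent summary.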
arXiv Detail & Related papers (2024-03-01T10:38:07Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Simple Yet Effective Synthetic Dataset Construction for Unsupervised Opinion Summarization [28.52201592634964]
We propose two simple yet effective unsupervised approaches to generate both aspect-specific and general opinion summaries.
Our first approach, Seed Words Based Leave-One-Out (SW-LOO), identifies aspect-related portions of reviews simply by exact-matching aspect seed words.
Our second approach, Natural Language Inference Based Leave-One-Out (NLI-LOO), identifies aspect-related sentences utilizing an NLI model in a more general setting without using seed words.
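The seed-word matching idea behind SW-LOO can be sketched as follows; the seed words and review sentences are made up for the example, and real implementations would also handle tokenization and morphological variants.

```python
def aspect_sentences(review_sentences, seed_words):
    """Identify aspect-related sentences by exact-matching aspect seed
    words, in the spirit of SW-LOO (illustrative sketch)."""
    seeds = {w.lower() for w in seed_words}
    return [s for s in review_sentences
            if seeds & set(s.lower().split())]

sents = ["The battery drains fast.",
         "Shipping was quick.",
         "Battery life could be better."]
hits = aspect_sentences(sents, {"battery", "charge"})  # 2 battery sentences
```

NLI-LOO replaces this exact-match test with an NLI model's judgment, so no seed-word list is needed.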
arXiv Detail & Related papers (2023-03-21T08:08:04Z)
- Prompted Opinion Summarization with GPT-3.5 [115.95460650578678]
We show that GPT-3.5 models achieve very strong performance in human evaluation.
We argue that standard evaluation metrics do not reflect this, and introduce three new metrics targeting faithfulness, factuality, and genericity.
arXiv Detail & Related papers (2022-11-29T04:06:21Z)
- Controllable Abstractive Dialogue Summarization with Sketch Supervision [56.59357883827276]
Our model achieves state-of-the-art performance on the largest dialogue summarization corpus SAMSum, with as high as 50.79 in ROUGE-L score.
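Several of the papers above report ROUGE-L, which scores a candidate summary by its longest common subsequence (LCS) with a reference. A minimal self-contained computation of the F1 variant, tokenized by whitespace, looks like this (standard evaluation toolkits add stemming and multi-reference handling):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 from LCS-based precision and recall."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

score = rouge_l_f1("the cat sat on the mat",
                   "the cat is on the mat")  # LCS = 5 of 6 tokens
```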
arXiv Detail & Related papers (2021-05-28T19:05:36Z)
- The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey [25.59111855107199]
Neural encoder-decoder models, pioneered by the Seq2Seq framework, have been proposed to achieve the goal of generating more abstractive summaries.
At a high level, such neural models can freely generate summaries without any constraint on the words or phrases used.
However, the neural model's abstraction ability is a double-edged sword.
arXiv Detail & Related papers (2021-04-30T08:46:13Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: sentence encoder (level one), intra-review encoder (level two) and inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- A Multi-task Learning Framework for Opinion Triplet Extraction [24.983625011760328]
We present a novel view of ABSA as an opinion triplet extraction task.
We propose a multi-task learning framework to jointly extract aspect terms and opinion terms.
We evaluate the proposed framework on four SemEval benchmarks for ABSA.
arXiv Detail & Related papers (2020-10-04T08:31:54Z)
- A Unified Dual-view Model for Review Summarization and Sentiment Classification with Inconsistency Loss [51.448615489097236]
Acquiring accurate summarization and sentiment from user reviews is an essential component of modern e-commerce platforms.
We propose a novel dual-view model that jointly improves the performance of these two tasks.
Experiment results on four real-world datasets from different domains demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2020-06-02T13:34:11Z)
- SueNes: A Weakly Supervised Approach to Evaluating Single-Document Summarization via Negative Sampling [25.299937353444854]
We present a proof-of-concept study of a weakly supervised summary evaluation approach that does not require reference summaries.
Massive data in existing summarization datasets are transformed for training by pairing documents with corrupted reference summaries.
arXiv Detail & Related papers (2020-05-13T15:40:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.