Extractive and Abstractive Sentence Labelling of Sentiment-bearing
Topics
- URL: http://arxiv.org/abs/2108.12822v1
- Date: Sun, 29 Aug 2021 11:08:39 GMT
- Title: Extractive and Abstractive Sentence Labelling of Sentiment-bearing
Topics
- Authors: Mohamad Hardyman Barawi, Chenghua Lin, Advaith Siddharthan, Yinbin Liu
- Abstract summary: This paper tackles the problem of automatically labelling sentiment-bearing topics with descriptive sentence labels.
We propose two approaches to the problem, one extractive and the other abstractive.
We conclude that abstractive methods can effectively synthesise the rich information contained in sentiment-bearing topics.
- Score: 5.014332673843021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper tackles the problem of automatically labelling sentiment-bearing
topics with descriptive sentence labels. We propose two approaches to the
problem, one extractive and the other abstractive. Both approaches rely on a
novel mechanism to automatically learn the relevance of each sentence in a
corpus to sentiment-bearing topics extracted from that corpus. The extractive
approach uses a sentence ranking algorithm for label selection which for the
first time jointly optimises topic--sentence relevance as well as
aspect--sentiment co-coverage. The abstractive approach instead addresses
aspect--sentiment co-coverage by using sentence fusion to generate a sentential
label that includes relevant content from multiple sentences. To our knowledge,
we are the first to study the problem of labelling sentiment-bearing topics.
Our experimental results on three real-world datasets show that both the
extractive and abstractive approaches outperform four strong baselines in terms
of facilitating topic understanding and interpretation. In addition, when
comparing extractive and abstractive labels, our evaluation shows that our best
performing abstractive method is able to provide more topic information
coverage in fewer words, at the cost of generating less grammatical labels than
the extractive method. We conclude that abstractive methods can effectively
synthesise the rich information contained in sentiment-bearing topics.
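For illustration, here is a minimal sketch of the kind of extractive label selection described in the abstract: rank candidate sentences by a weighted combination of topic--sentence relevance and aspect--sentiment co-coverage. The weighting, the keyword-overlap co-coverage measure, and all helper names are assumptions for this sketch, not the authors' actual formulation.

```python
# Hypothetical sketch of extractive label selection for a sentiment-bearing topic.
# Relevance scores are assumed to be precomputed; co-coverage is a simple keyword overlap.

def co_coverage(sentence, aspect_words, sentiment_words):
    """Fraction of the topic's aspect and sentiment terms that the sentence covers."""
    tokens = set(w.strip(".,!?").lower() for w in sentence.split())
    aspect_hits = sum(1 for w in aspect_words if w in tokens)
    sentiment_hits = sum(1 for w in sentiment_words if w in tokens)
    return 0.5 * (aspect_hits / max(len(aspect_words), 1)
                  + sentiment_hits / max(len(sentiment_words), 1))

def select_label(sentences, relevance, aspect_words, sentiment_words, alpha=0.5):
    """Pick the sentence that jointly maximises topic relevance and co-coverage."""
    def score(i):
        return alpha * relevance[i] + (1 - alpha) * co_coverage(
            sentences[i], aspect_words, sentiment_words)
    return max(range(len(sentences)), key=score)

sentences = [
    "The battery life is excellent and lasts all day.",
    "I bought the phone last week.",
    "Battery drains quickly, which is disappointing.",
]
relevance = [0.9, 0.2, 0.8]          # assumed topic--sentence relevance scores
aspects = ["battery"]
sentiments = ["excellent", "disappointing"]
best = select_label(sentences, relevance, aspects, sentiments)
print(sentences[best])
```

The abstractive approach would instead fuse content from several high-scoring sentences into one generated label rather than copying a single sentence verbatim.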
Related papers
- Salience Allocation as Guidance for Abstractive Summarization [61.31826412150143]
We propose a novel summarization approach with a flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON)
SEASON utilizes the allocation of salience expectation to guide abstractive summarization and adapts well to articles with different degrees of abstractiveness.
arXiv Detail & Related papers (2022-10-22T02:13:44Z)
- Textual Entailment Recognition with Semantic Features from Empirical Text Representation [60.31047947815282]
A text entails a hypothesis if and only if the truth of the hypothesis follows from the text.
In this paper, we propose a novel approach to identifying the textual entailment relationship between text and hypothesis.
We employ an element-wise Manhattan distance vector-based feature that can identify the semantic entailment relationship between the text-hypothesis pair.
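A small sketch of an element-wise Manhattan-style feature over sentence embeddings, assuming the text and hypothesis encodings are already available; the random vectors and dimensions below are placeholders, not the paper's actual representations.

```python
import numpy as np

# Sketch: build an element-wise absolute-difference (Manhattan-style) feature
# vector from text and hypothesis embeddings; any classifier can consume it.
# The embeddings here are random placeholders for real sentence encodings.

rng = np.random.default_rng(0)
text_vec = rng.normal(size=300)        # embedding of the text
hypothesis_vec = rng.normal(size=300)  # embedding of the hypothesis

feature = np.abs(text_vec - hypothesis_vec)   # element-wise Manhattan distance vector
manhattan_distance = feature.sum()            # scalar L1 distance, if a single score is needed

print(feature.shape, round(float(manhattan_distance), 3))
```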
arXiv Detail & Related papers (2022-10-18T10:03:51Z)
- A General Contextualized Rewriting Framework for Text Summarization [15.311467109946571]
Existing rewriting systems take each extractive sentence as the only input, which keeps the rewriter focused but can lose necessary background knowledge and discourse context.
We formalize contextualized rewriting as a seq2seq problem with group-tag alignments, identifying extractive sentences through content-based addressing.
Results show that our approach significantly outperforms non-contextualized rewriting systems without requiring reinforcement learning.
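One rough way to picture group-tag alignments is below: every token of the document receives a tag identifying which extracted sentence it belongs to (0 for non-extracted text), so a seq2seq rewriter can see the full context while knowing which sentences were selected. The tagging scheme here is an assumption made for illustration, not the paper's exact encoding.

```python
# Rough sketch of group-tag alignment over a tokenized document.

def group_tags(doc_sentences, extracted_indices):
    tokens, tags = [], []
    group_of = {idx: g + 1 for g, idx in enumerate(extracted_indices)}
    for i, sent in enumerate(doc_sentences):
        for tok in sent.split():
            tokens.append(tok)
            tags.append(group_of.get(i, 0))   # 0 = context only, 1..k = extracted sentence id
    return tokens, tags

doc = ["The plant opened in 1990 .",
       "It employs 2,000 people .",
       "Production doubled last year ."]
tokens, tags = group_tags(doc, extracted_indices=[0, 2])
print(list(zip(tokens, tags))[:8])
```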
arXiv Detail & Related papers (2022-07-13T03:55:57Z)
- A Survey on Neural Abstractive Summarization Methods and Factual Consistency of Summarization [18.763290930749235]
Summarization is the process of computationally shortening a set of textual data to create a subset (a summary).
Existing summarization methods can be roughly divided into two types: extractive and abstractive.
An extractive summarizer explicitly selects text snippets from the source document, while an abstractive summarizer generates novel text snippets to convey the most salient concepts prevalent in the source.
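To make the distinction concrete, a toy extractive summarizer can score sentences by document-level word frequency and copy the top ones verbatim, whereas an abstractive system would generate new wording; the scoring below is deliberately simplistic and purely illustrative.

```python
from collections import Counter

# Toy extractive summarizer: copy the k sentences whose words are most frequent
# in the document. An abstractive system would instead generate novel sentences.

def extractive_summary(sentences, k=1):
    doc_counts = Counter(w.lower() for s in sentences for w in s.split())
    def salience(s):
        words = s.split()
        return sum(doc_counts[w.lower()] for w in words) / max(len(words), 1)
    return sorted(sentences, key=salience, reverse=True)[:k]

doc = ["The new camera has a large sensor.",
       "The sensor performs well in low light.",
       "Shipping took three days."]
print(extractive_summary(doc, k=1))
```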
arXiv Detail & Related papers (2022-04-20T14:56:36Z)
- Better Highlighting: Creating Sub-Sentence Summary Highlights [40.46639471959677]
We present a new method to produce self-contained highlights that are understandable on their own to avoid confusion.
Our method combines determinantal point processes and deep contextualized representations to identify an optimal set of sub-sentence segments.
To demonstrate the flexibility and modeling power of our method, we conduct extensive experiments on summarization datasets.
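A compact sketch of greedy MAP selection under a determinantal point process (DPP), which trades segment quality against redundancy via the determinant of a kernel; the quality scores and segment embeddings below are random placeholders, whereas the paper couples DPPs with deep contextualized representations.

```python
import numpy as np

# Greedy MAP inference for a DPP: repeatedly add the segment that most
# increases log det of the kernel restricted to the selected set.

def greedy_dpp(L, k):
    selected = []
    for _ in range(k):
        best_gain, best_j = -np.inf, None
        for j in range(L.shape[0]):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_gain:
                best_gain, best_j = logdet, j
        if best_j is None:
            break
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-norm segment embeddings
quality = rng.uniform(0.5, 1.0, size=6)             # per-segment quality scores
L = np.outer(quality, quality) * (emb @ emb.T)      # quality-weighted similarity kernel
print(greedy_dpp(L, k=3))
```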
arXiv Detail & Related papers (2020-10-20T18:57:42Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
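A rough sketch of the shared-space idea: each <sentiment, aspect> joint topic is a vector in the same space as the words, and words are assigned to the nearest joint topic by cosine similarity. The vectors below are random placeholders standing in for learned embeddings, and the topic inventory is invented for illustration.

```python
import numpy as np

# Assign words to the nearest <sentiment, aspect> joint topic by cosine
# similarity in a shared embedding space (placeholder vectors).

rng = np.random.default_rng(1)
dim = 50
topic_names = [("positive", "food"), ("negative", "food"),
               ("positive", "service"), ("negative", "service")]
topic_vecs = rng.normal(size=(len(topic_names), dim))
word_vecs = {w: rng.normal(size=dim) for w in ["delicious", "rude", "tasty", "slow"]}

def nearest_topic(vec):
    sims = topic_vecs @ vec / (np.linalg.norm(topic_vecs, axis=1) * np.linalg.norm(vec))
    return topic_names[int(np.argmax(sims))]

for word, vec in word_vecs.items():
    print(word, "->", nearest_topic(vec))
```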
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
- Understanding Points of Correspondence between Sentences for Abstractive Summarization [39.7404761923196]
We present an investigation into fusing sentences drawn from a document by introducing the notion of points of correspondence.
We create a dataset containing the documents, source and fusion sentences, and human annotations of points of correspondence between sentences.
arXiv Detail & Related papers (2020-06-10T02:42:38Z)
- TRIE: End-to-End Text Reading and Information Extraction for Document Understanding [56.1416883796342]
We propose a unified end-to-end text reading and information extraction network.
Multimodal visual and textual features from text reading are fused for information extraction.
Our proposed method significantly outperforms the state-of-the-art methods in both efficiency and accuracy.
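A bare-bones illustration of multimodal fusion by concatenation followed by a projection; TRIE's actual fusion module is more elaborate, and the feature shapes, projection, and nonlinearity below are arbitrary assumptions.

```python
import numpy as np

# Concatenate a visual feature and a textual feature for each text region,
# then project to a joint representation a downstream extraction head could use.

rng = np.random.default_rng(0)
visual_feat = rng.normal(size=(4, 256))   # e.g. region features for 4 text regions
textual_feat = rng.normal(size=(4, 128))  # e.g. encoded OCR text per region

W = rng.normal(size=(256 + 128, 200)) * 0.01   # stands in for a learned projection
fused = np.tanh(np.concatenate([visual_feat, textual_feat], axis=1) @ W)
print(fused.shape)  # (4, 200)
```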
arXiv Detail & Related papers (2020-05-27T01:47:26Z)
- Segmenting Scientific Abstracts into Discourse Categories: A Deep Learning-Based Approach for Sparse Labeled Data [8.635930195821265]
We pre-train a deep neural network on structured abstracts from PubMed and then fine-tune it on a small hand-labeled corpus of computer science papers.
Our method appears to be a promising solution to the automatic segmentation of abstracts, where the data is sparse.
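A minimal sketch of this pretrain-then-fine-tune recipe, assuming sentence embeddings and discourse labels are available; the model, data, and learning rates are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Train a sentence classifier on a large source corpus, then continue training
# on a small labeled target corpus with a lower learning rate.

model = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 5))

def train_pass(model, batches, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in batches:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Placeholder batches of (sentence embedding, discourse label) pairs.
source = [(torch.randn(32, 300), torch.randint(0, 5, (32,))) for _ in range(10)]
target = [(torch.randn(8, 300), torch.randint(0, 5, (8,))) for _ in range(3)]

train_pass(model, source, lr=1e-3)   # "pre-training" on the large corpus
train_pass(model, target, lr=1e-4)   # fine-tuning on the small labeled corpus
```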
arXiv Detail & Related papers (2020-05-11T20:21:25Z)
- Extractive Summarization as Text Matching [123.09816729675838]
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
We formulate the extractive summarization task as a semantic text matching problem.
We have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1)
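A toy version of summary-level matching: enumerate small candidate summaries (subsets of sentences) and pick the one whose vector is closest to the document vector. Bag-of-words cosine similarity stands in for the learned semantic encoders used in the actual system.

```python
from itertools import combinations
from collections import Counter
import math

def bow(texts):
    return Counter(w.lower() for t in texts for w in t.split())

def cosine(a, b):
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_candidate(sentences, size=2):
    doc_vec = bow(sentences)
    candidates = combinations(range(len(sentences)), size)
    return max(candidates, key=lambda c: cosine(bow([sentences[i] for i in c]), doc_vec))

doc = ["The court approved the merger on Monday.",
       "Shares rose after the announcement.",
       "The weather was mild."]
print(best_candidate(doc, size=2))   # indices of the best-matching candidate summary
```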
arXiv Detail & Related papers (2020-04-19T08:27:57Z)
- At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization [110.54963847339775]
We show that extracting full sentences leads to unnecessary and redundant content.
We propose extracting sub-sentential units based on the constituency parsing tree.
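A sketch of pulling sub-sentential units out of a constituency parse; the parse tree is hard-coded here (in practice it would come from a parser), and restricting units to NP and VP nodes is an assumption made for illustration.

```python
from nltk import Tree

# Extract sub-sentential units (noun and verb phrases) from a constituency parse
# instead of taking the whole sentence.

parse = Tree.fromstring(
    "(S (NP (DT The) (NN committee)) "
    "(VP (VBD approved) (NP (DT the) (JJ revised) (NN budget))) (. .))")

units = [" ".join(t.leaves()) for t in parse.subtrees()
         if t.label() in {"NP", "VP"}]
print(units)
```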
arXiv Detail & Related papers (2020-04-06T13:35:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.