Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction
- URL: http://arxiv.org/abs/2001.01989v1
- Date: Tue, 7 Jan 2020 11:50:54 GMT
- Title: Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction
- Authors: Zhen Wu, Fei Zhao, Xin-Yu Dai, Shujian Huang, Jiajun Chen
- Abstract summary: We propose a novel model to transfer opinion knowledge from resource-rich review sentiment classification datasets to the low-resource TOWE task.
Our model outperforms other state-of-the-art methods and significantly improves over the base model without opinion-knowledge transfer.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Target-oriented opinion words extraction (TOWE) is a new subtask of ABSA,
which aims to extract the corresponding opinion words for a given opinion
target in a sentence. Recently, neural network methods have been applied to
this task and achieved promising results. However, the difficulty of annotation
leaves TOWE datasets insufficient, which heavily limits the performance of
neural models. By contrast, abundant review sentiment classification data are
easily available from online review sites. These reviews contain substantial
latent opinion information and semantic patterns. In this paper, we propose a
novel model to transfer this opinion knowledge from resource-rich review
sentiment classification datasets to the low-resource TOWE task. To address the
challenges of the transfer process, we design an effective transformation
method to obtain latent opinions and then integrate them into TOWE. Extensive
experimental results show that our model outperforms other state-of-the-art
methods and significantly improves over the base model without opinion-knowledge
transfer. Further analysis validates the effectiveness of our model.
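To make the TOWE task format concrete, the following minimal sketch shows how one (sentence, opinion target) instance is typically cast as BIO sequence labeling, with opinion-word spans decoded from the tags. This is an illustration of the task setup only, not the paper's model; the sentence, tags, and function name are hypothetical.

```python
# Minimal illustration of the TOWE input/output format: for each
# (sentence, opinion target) pair, opinion words are marked with BIO
# tags, and opinion phrases are decoded from those tags.

def decode_opinion_spans(tokens, bio_tags):
    """Return the opinion-word phrases encoded by BIO tags."""
    spans, current = [], []
    for token, tag in zip(tokens, bio_tags):
        if tag == "B":                 # start of a new opinion phrase
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:   # continuation of the current phrase
            current.append(token)
        else:                          # "O": outside any opinion phrase
            if current:
                spans.append(" ".join(current))
                current = []
    if current:
        spans.append(" ".join(current))
    return spans

# One sentence yields different labels depending on the queried target.
tokens = "the screen is bright but the battery drains fast".split()
# Target "screen" -> opinion word "bright"
tags_screen = ["O", "O", "O", "B", "O", "O", "O", "O", "O"]
# Target "battery" -> opinion phrase "drains fast"
tags_battery = ["O", "O", "O", "O", "O", "O", "O", "B", "I"]

print(decode_opinion_spans(tokens, tags_screen))   # ['bright']
print(decode_opinion_spans(tokens, tags_battery))  # ['drains fast']
```

Because the target changes the gold labels for an identical sentence, a TOWE model must condition on the target, which is why annotation is costly and transfer from sentiment classification data is attractive.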
Related papers
- RDBE: Reasoning Distillation-Based Evaluation Enhances Automatic Essay Scoring (arXiv, 2024-07-03)
Reasoning Distillation-Based Evaluation (RDBE) integrates interpretability to elucidate the rationale behind model scores.
Experimental results demonstrate the efficacy of RDBE across all scoring rubrics considered in the dataset.
- Analyzing Persuasive Strategies in Meme Texts: A Fusion of Language Models with Paraphrase Enrichment (arXiv, 2024-07-01)
This paper describes a hierarchical multi-label detection approach to persuasion techniques in meme texts.
The study focuses on enhancing model performance through innovative training techniques and data augmentation strategies.
- Evaluating Generative Language Models in Information Extraction as Subjective Question Correction (arXiv, 2024-04-04)
Inspired by the principles of subjective question correction, the authors propose a new evaluation method, SQC-Score.
Results on three information extraction tasks show that SQC-Score is preferred by human annotators over the baseline metrics.
- Temporal Output Discrepancy for Loss Estimation-based Active Learning (arXiv, 2022-12-20)
A deep active learning approach that queries the oracle for annotation when an unlabeled sample is believed to incur high loss.
It outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations (arXiv, 2021-12-17)
A crowdsourcing study in which participants interact with deception detection models trained to distinguish genuine from fake hotel reviews.
For a linear bag-of-words model, participants with access to the feature coefficients during training cause a larger reduction in model confidence at test time than the no-explanation control.
- Introducing Syntactic Structures into Target Opinion Word Extraction with Deep Learning (arXiv, 2020-10-26)
Proposes incorporating the syntactic structures of sentences into deep learning models for targeted opinion word extraction, along with a novel regularization technique.
The proposed model is extensively analyzed and achieves state-of-the-art performance on four benchmark datasets.
- Multi-task Learning of Negation and Speculation for Targeted Sentiment Classification (arXiv, 2020-10-16)
Shows that targeted sentiment models are not robust to linguistic phenomena, specifically negation and speculation, and proposes a multi-task learning method that incorporates syntactic and semantic auxiliary tasks, including negation and speculation scope detection.
Two challenge datasets are created to evaluate model performance on negated and speculative samples.
- Unsupervised Opinion Summarization with Noising and Denoising (arXiv, 2020-04-21)
Creates a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions of it.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.