MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction
- URL: http://arxiv.org/abs/2305.12627v1
- Date: Mon, 22 May 2023 01:32:50 GMT
- Title: MvP: Multi-view Prompting Improves Aspect Sentiment Tuple Prediction
- Authors: Zhibin Gou, Qingyan Guo, Yujiu Yang
- Abstract summary: We propose Multi-view Prompting (MvP) that aggregates sentiment elements generated in different orders.
MvP can naturally model multi-view and multi-task as permutations and combinations of elements.
Extensive experiments show that MvP significantly advances the state-of-the-art performance on 10 datasets of 4 benchmark tasks.
- Score: 14.177875807409434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative methods greatly promote aspect-based sentiment analysis via
generating a sequence of sentiment elements in a specified format. However,
existing studies usually predict sentiment elements in a fixed order, which
ignores the effect of the interdependence of the elements in a sentiment tuple
and the diversity of language expression on the results. In this work, we
propose Multi-view Prompting (MvP) that aggregates sentiment elements generated
in different orders, leveraging the intuition of human-like problem-solving
processes from different views. Specifically, MvP introduces element order
prompts to guide the language model to generate multiple sentiment tuples, each
with a different element order, and then selects the most reasonable tuples by
voting. MvP can naturally model multi-view and multi-task as permutations and
combinations of elements, respectively, outperforming previous task-specific
designed methods on multiple ABSA tasks with a single model. Extensive
experiments show that MvP significantly advances the state-of-the-art
performance on 10 datasets of 4 benchmark tasks, and performs quite effectively
in low-resource settings. Detailed evaluation verified the effectiveness,
flexibility, and cross-task transferability of MvP.
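The abstract describes MvP's core mechanism: element order prompts steer the model to generate sentiment tuples under several element orders, and the final prediction keeps the tuples that most views agree on. A minimal sketch of that aggregation logic, assuming a stand-in `generate_tuples` function in place of the paper's fine-tuned seq2seq language model (the element set, prompt format, and voting threshold here are illustrative assumptions, not the paper's exact implementation):

```python
from collections import Counter
from itertools import permutations

# Illustrative element set; the paper covers tasks with 2-4 elements.
ELEMENTS = ("aspect", "opinion", "sentiment")

def order_prompt(order):
    """Build an element order prompt, e.g. '[aspect] [opinion] [sentiment]'."""
    return " ".join(f"[{e}]" for e in order)

def mvp_predict(sentence, generate_tuples, num_views=3):
    """Generate tuples under several element orders (views) and keep
    those that a strict majority of views agree on (voting)."""
    views = list(permutations(ELEMENTS))[:num_views]
    votes = Counter()
    for order in views:
        prompt = f"{sentence} {order_prompt(order)}"
        for tup in generate_tuples(prompt, order):
            votes[tup] += 1  # tuples assumed canonicalized to one order
    threshold = num_views / 2
    return [t for t, v in votes.items() if v > threshold]
```

Here `generate_tuples(prompt, order)` is a hypothetical callable that would invoke the language model with the order-prompted input and parse its output back into canonical `(aspect, opinion, sentiment)` tuples; the voting step then filters out tuples that only a minority of orderings produce.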
Related papers
- A Multi-Task, Multi-Modal Approach for Predicting Categorical and Dimensional Emotions [0.0] (2023-12-31)
  We propose a multi-task, multi-modal system that predicts categorical and dimensional emotions.
  Results emphasise the importance of cross-regularisation between the two types of emotions.
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562] (2023-09-22)
  We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
  Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
  We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
- VLT: Vision-Language Transformer and Query Generation for Referring Segmentation [31.051579752237746] (2022-10-28)
  We propose a framework for referring segmentation that facilitates deep interactions among multi-modal information.
  We introduce masked contrastive learning to narrow down the features of different expressions for the same target object.
  The proposed approach is lightweight and consistently achieves new state-of-the-art referring segmentation results on five datasets.
- Instruction Tuning for Few-Shot Aspect-Based Sentiment Analysis [72.9124467710526] (2022-10-12)
  Generative approaches have been proposed to extract all four elements as (one or more) quadruplets from text in a single task.
  We propose a unified framework for solving ABSA and its associated sub-tasks to improve performance in few-shot scenarios.
- A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514] (2022-04-11)
  We focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories and predicting their corresponding polarities.
  We propose reformulating the extraction and prediction tasks as a sequence generation task, using a generative language model with unidirectional attention.
  Our approach outperforms the previous state of the art (based on BERT) by a large margin on average in both few-shot and full-shot settings.
- Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning [54.66399120084227] (2022-04-06)
  Recent state-of-the-art neural text matching models built on pre-trained language models (PLMs) are hard to generalize across different tasks.
  We adopt a specialization-generalization training strategy, which we call Match-Prompt.
  In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens.
  In the generalization stage, the text matching model learns essential matching signals by training on diverse matching tasks.
- Rethinking End-to-End Evaluation of Decomposable Tasks: A Case Study on Spoken Language Understanding [101.24748444126982] (2021-06-29)
  Decomposable tasks are complex and comprise a hierarchy of sub-tasks.
  Existing benchmarks, however, typically hold out examples for only the surface-level sub-task.
  We propose a framework to construct robust test sets using coordinate ascent over sub-task-specific utility functions.
- Deep Multi-Modal Sets [29.983311598563542] (2020-03-03)
  Deep Multi-Modal Sets is a technique that represents a collection of features as an unordered set rather than one long, ever-growing fixed-size vector.
  We demonstrate a scalable, multi-modal framework that reasons over different modalities to learn various types of tasks.
- Multi-level Head-wise Match and Aggregation in Transformer for Textual Sequence Matching [87.97265483696613] (2020-01-20)
  We propose a new approach to sequence pair matching with Transformers, learning head-wise matching representations at multiple levels.
  Experiments show that our approach achieves new state-of-the-art performance on multiple tasks.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.