Span-level Bidirectional Cross-attention Framework for Aspect Sentiment
Triplet Extraction
- URL: http://arxiv.org/abs/2204.12674v1
- Date: Wed, 27 Apr 2022 02:55:43 GMT
- Title: Span-level Bidirectional Cross-attention Framework for Aspect Sentiment
Triplet Extraction
- Authors: Yuqi Chen, Keming Chen, Xian Sun, Zequn Zhang
- Abstract summary: Aspect Sentiment Triplet Extraction (ASTE) is a new fine-grained sentiment analysis task that aims to extract triplets of aspect terms, sentiments, and opinion terms from review sentences.
We propose a span-level bidirectional cross-attention framework for ASTE.
Our framework significantly outperforms state-of-the-art methods, achieving better performance in predicting triplets with multi-token entities.
- Score: 10.522014946035664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect Sentiment Triplet Extraction (ASTE) is a new fine-grained sentiment
analysis task that aims to extract triplets of aspect terms, sentiments, and
opinion terms from review sentences. Recently, span-level models achieve
gratifying results on ASTE task by taking advantage of whole span predictions.
However, all the spans generated by these methods inevitably share at least one
token with some others, and these methods suffer from confusion between such
spans because their representations follow similar distributions. Moreover,
since either the aspect
term or opinion term can trigger a sentiment triplet, it is challenging to make
use of the information more comprehensively and adequately. To address these
concerns, we propose a span-level bidirectional cross-attention framework.
Specifically, we design a similar span separation loss to detach the spans with
shared tokens and a bidirectional cross-attention structure that consists of
aspect and opinion decoders to decode the span-level representations in both
aspect-to-opinion and opinion-to-aspect directions. With differentiated span
representations and bidirectional decoding structure, our model can extract
sentiment triplets more precisely and efficiently. Experimental results show
that our framework significantly outperforms state-of-the-art methods,
achieving better performance in predicting triplets with multi-token entities
and extracting triplets in sentences with multi-triplets.
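The similar span separation loss described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes spans are represented by embedding vectors, treats "similar" as cosine similarity, and uses a simple hinge with a margin to push apart spans that share at least one token.

```python
import numpy as np

def span_separation_loss(span_embs, span_token_sets, margin=0.0):
    """Toy span-separation loss: penalize the cosine similarity between
    representations of span pairs that share at least one token, so that
    overlapping spans are pushed toward distinct representations.
    Illustrative sketch only; the paper's exact loss may differ."""
    # L2-normalize so that dot products equal cosine similarities.
    normed = span_embs / np.linalg.norm(span_embs, axis=1, keepdims=True)
    total, pairs = 0.0, 0
    n = len(span_embs)
    for i in range(n):
        for j in range(i + 1, n):
            if span_token_sets[i] & span_token_sets[j]:  # shared token(s)
                sim = float(normed[i] @ normed[j])
                total += max(0.0, sim - margin)  # hinge: only penalize above margin
                pairs += 1
    return total / pairs if pairs else 0.0

# Hypothetical spans over "the battery life is great":
# span A covers tokens {1, 2} ("battery life"), span B covers {2, 3},
# span C covers {4}; A and B overlap, so only that pair is penalized.
embs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
token_sets = [{1, 2}, {2, 3}, {4}]
loss = span_separation_loss(embs, token_sets)
```

Minimizing such a term during training differentiates the representations of overlapping spans, which is the effect the framework relies on before bidirectional decoding.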
Related papers
- Multi-threshold Deep Metric Learning for Facial Expression Recognition [60.26967776920412]
We present the multi-threshold deep metric learning technique, which avoids the difficult threshold validation.
We find that each threshold of the triplet loss intrinsically determines a distinctive distribution of inter-class variations.
It makes the embedding layer, which is composed of a set of slices, a more informative and discriminative feature.
arXiv Detail & Related papers (2024-06-24T08:27:31Z)
- GroupContrast: Semantic-aware Self-supervised Representation Learning for 3D Understanding [66.5538429726564]
Self-supervised 3D representation learning aims to learn effective representations from large-scale unlabeled point clouds.
We propose GroupContrast, a novel approach that combines segment grouping and semantic-aware contrastive learning.
arXiv Detail & Related papers (2024-03-14T17:59:59Z)
- Collaborative Group: Composed Image Retrieval via Consensus Learning from Noisy Annotations [67.92679668612858]
We propose the Consensus Network (Css-Net), inspired by the psychological concept that groups outperform individuals.
Css-Net comprises two core components: (1) a consensus module with four diverse compositors, each generating distinct image-text embeddings; and (2) a Kullback-Leibler divergence loss that encourages learning of inter-compositor interactions.
On benchmark datasets, particularly FashionIQ, Css-Net demonstrates marked improvements. Notably, it achieves significant recall gains, with a 2.77% increase in R@10 and a 6.67% boost in R@50, underscoring its effectiveness.
arXiv Detail & Related papers (2023-06-03T11:50:44Z)
- PV2TEA: Patching Visual Modality to Textual-Established Information Extraction [59.76117533540496]
We patch the visual modality to the textual-established attribute information extractor.
PV2TEA is an encoder-decoder architecture equipped with three bias reduction schemes.
Empirical results on real-world e-Commerce datasets demonstrate up to 11.74% absolute (20.97% relatively) F1 increase over unimodal baselines.
arXiv Detail & Related papers (2023-06-01T05:39:45Z)
- Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning [53.68371566336254]
We argue that the key to better performance lies in meaningful latent modality structures instead of perfect modality alignment.
Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization.
arXiv Detail & Related papers (2023-03-10T14:38:49Z)
- PASTE: A Tagging-Free Decoding Framework Using Pointer Networks for Aspect Sentiment Triplet Extraction [12.921737393688245]
Aspect Sentiment Triplet Extraction (ASTE) deals with extracting opinion triplets, consisting of an opinion target or aspect, its associated sentiment, and the corresponding opinion term/span.
We adapt an encoder-decoder architecture with a Pointer Network-based decoding framework that generates an entire opinion triplet at each time step.
arXiv Detail & Related papers (2021-10-10T13:39:39Z)
- Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction [25.984894351763945]
Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of Aspect-Based Sentiment Analysis (ABSA).
Recent models perform the triplet extraction in an end-to-end manner but heavily rely on the interactions between each target word and opinion word.
Our proposed span-level approach explicitly considers the interaction between the whole spans of targets and opinions when predicting their sentiment relation.
arXiv Detail & Related papers (2021-07-26T13:47:31Z)
- Semantic and Syntactic Enhanced Aspect Sentiment Triplet Extraction [18.331779474247323]
Aspect Sentiment Triplet Extraction aims to extract triplets from sentences, where each triplet includes an entity, its associated sentiment, and the opinion span explaining the reason for the sentiment.
We propose a Semantic and Syntactic Enhanced aspect Sentiment triplet Extraction model (S3E2) to fully exploit the syntactic and semantic relationships between the triplet elements and jointly extract them.
arXiv Detail & Related papers (2021-06-07T03:16:51Z)
- Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction [8.208671244754317]
Aspect sentiment triplet extraction (ASTE) is an emerging task in fine-grained opinion mining.
We transform ASTE task into a multi-turn machine reading comprehension (MTMRC) task.
We propose a bidirectional MRC (BMRC) framework to address this challenge.
arXiv Detail & Related papers (2021-03-13T09:30:47Z)
- First Target and Opinion then Polarity: Enhancing Target-opinion Correlation for Aspect Sentiment Triplet Extraction [45.82241446769157]
Aspect Sentiment Triplet Extraction (ASTE) aims to extract triplets from a sentence, including target entities, associated sentiment polarities, and opinion spans which rationalize the polarities.
Existing methods are short on building correlation between target-opinion pairs, and neglect the mutual interference among different sentiment triplets.
We propose a novel two-stage method which enhances the correlation between targets and opinions through sequence tagging.
arXiv Detail & Related papers (2021-02-17T03:28:17Z)
- Position-Aware Tagging for Aspect Sentiment Triplet Extraction [37.76744150888183]
Aspect Sentiment Triplet Extraction (ASTE) is the task of extracting the triplets of target entities, their associated sentiment, and opinion spans explaining the reason for the sentiment.
Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets.
We propose the first end-to-end model with a novel position-aware tagging scheme that is capable of jointly extracting the triplets.
arXiv Detail & Related papers (2020-10-06T10:40:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.