Domain-level Pairwise Semantic Interaction for Aspect-Based Sentiment
Classification
- URL: http://arxiv.org/abs/2202.10032v1
- Date: Mon, 21 Feb 2022 07:59:17 GMT
- Title: Domain-level Pairwise Semantic Interaction for Aspect-Based Sentiment
Classification
- Authors: Zhenxin Wu and Jiazheng Gong and Kecen Guo and Guanye Liang and
Qingliang Che and Bo Liu
- Abstract summary: We propose a plug-and-play Pairwise Semantic Interaction (PSI) module, which takes pairwise sentences as input.
Different gates are generated to effectively highlight the key semantic features of each sentence.
Finally, the adversarial interaction between the vectors is used to make the semantic representation of two sentences more distinguishable.
- Score: 3.1977819149534987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aspect-based sentiment classification (ABSC) is a very challenging subtask of
sentiment analysis (SA) and suffers badly from class imbalance. Existing
methods process sentences independently, without considering the
domain-level relationship between sentences, and fail to provide effective
solutions to the class-imbalance problem. Intuitively, sentences in the same
domain often share high-level semantic connections. The interaction of their
high-level semantic features can force the model to produce better semantic
representations and to better capture the similarities and nuances between
sentences. Driven by this idea, we propose a plug-and-play Pairwise Semantic
Interaction (PSI) module, which takes pairwise sentences as input and obtains
interactive information by learning the semantic vectors of the two sentences.
Subsequently, different gates are generated to effectively highlight the key
semantic features of each sentence. Finally, the adversarial interaction
between the vectors is used to make the semantic representations of the two
sentences more distinguishable. Experimental results on four ABSC datasets
show that, in most cases, PSI is superior to many competitive state-of-the-art
baselines and can significantly alleviate the class-imbalance problem.
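The abstract describes a three-step pipeline (pairwise input, per-sentence gates, an adversarial interaction). The following is a minimal PyTorch sketch of that idea; the layer sizes, the concatenation-based gating, and the cosine "repulsion" loss used in place of the paper's adversarial interaction are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a PSI-style pairwise interaction module.
# All layer names, sizes, and the gating/adversarial formulation are
# assumptions for illustration; the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseSemanticInteraction(nn.Module):
    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        # Each sentence's gate is conditioned on both sentence vectors.
        self.gate_a = nn.Linear(2 * hidden_dim, hidden_dim)
        self.gate_b = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h_a: torch.Tensor, h_b: torch.Tensor):
        """h_a, h_b: (batch, hidden_dim) sentence vectors from a shared encoder."""
        pair = torch.cat([h_a, h_b], dim=-1)
        # Different gates highlight the key semantic features of each sentence.
        g_a = torch.sigmoid(self.gate_a(pair))
        g_b = torch.sigmoid(self.gate_b(pair))
        z_a = g_a * h_a
        z_b = g_b * h_b
        # The adversarial interaction is approximated here by a repulsion loss
        # that pushes the two gated representations apart (cosine similarity
        # toward zero), making them more distinguishable for the classifier.
        repel_loss = F.cosine_similarity(z_a, z_b, dim=-1).pow(2).mean()
        return z_a, z_b, repel_loss

# Usage: pair two sentences from the same domain, encode each (e.g. a BERT
# [CLS] vector), then add the scaled repel_loss to the classification loss.
if __name__ == "__main__":
    psi = PairwiseSemanticInteraction(hidden_dim=768)
    h_a, h_b = torch.randn(4, 768), torch.randn(4, 768)
    z_a, z_b, repel = psi(h_a, h_b)
    print(z_a.shape, z_b.shape, repel.item())
```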
Related papers
- RankCSE: Unsupervised Sentence Representations Learning via Learning to
Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments are conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- FECANet: Boosting Few-Shot Semantic Segmentation with Feature-Enhanced
Context-Aware Network [48.912196729711624]
Few-shot semantic segmentation is the task of learning to locate each pixel of a novel class in a query image with only a few annotated support images.
We propose a Feature-Enhanced Context-Aware Network (FECANet) to suppress the matching noise caused by inter-class local similarity.
In addition, we propose a novel correlation reconstruction module that encodes extra correspondence relations between foreground and background and multi-scale context semantic features.
arXiv Detail & Related papers (2023-01-19T16:31:13Z)
- Semantic-aware Contrastive Learning for More Accurate Semantic Parsing [32.74456368167872]
We propose a semantic-aware contrastive learning algorithm, which can learn to distinguish fine-grained meaning representations.
Experiments on two standard datasets show that our approach achieves significant improvements over MLE baselines.
arXiv Detail & Related papers (2023-01-19T07:04:32Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm to further explore the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Disentangled Representation Learning for Text-Video Retrieval [51.861423831566626]
Cross-modality interaction is a critical component in Text-Video Retrieval (TVR).
We study the interaction paradigm in depth, where we find that its computation can be split into two terms.
We propose a disentangled framework to capture a sequential and hierarchical representation.
arXiv Detail & Related papers (2022-03-14T13:55:33Z)
- Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common sequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show that the resulting measure, NDD, is more sensitive to various semantic differences, especially on highly overlapped paired texts.
arXiv Detail & Related papers (2021-10-04T03:59:15Z)
- Not All Negatives are Equal: Label-Aware Contrastive Loss for
Fine-grained Text Classification [0.0]
We analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks.
We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives.
We find that Label-aware Contrastive Loss outperforms previous contrastive methods.
arXiv Detail & Related papers (2021-09-12T04:19:17Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent
Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Sequential Sentence Matching Network for Multi-turn Response Selection
in Retrieval-based Chatbots [45.920841134523286]
We propose a matching network, called the sequential sentence matching network (S2M), which uses sentence-level semantic information to address the problem.
We find that by using sentence-level semantic information, the network successfully addresses the problem and achieves a significant improvement in matching, resulting in state-of-the-art performance.
arXiv Detail & Related papers (2020-05-16T09:47:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.