Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment
Analysis
- URL: http://arxiv.org/abs/2011.00476v1
- Date: Sun, 1 Nov 2020 11:06:31 GMT
- Title: Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment
Analysis
- Authors: Zhen Wu and Chengcan Ying and Xinyu Dai and Shujian Huang and Jiajun
Chen
- Abstract summary: We propose a novel Transformer-based Multi-aspect Modeling scheme (TMM), which can capture potential relations between multiple aspects and simultaneously detect the sentiment of all aspects in a sentence.
Our method achieves noticeable improvements compared with strong baselines such as BERT and RoBERTa.
- Score: 56.893393134328996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-based sentiment analysis (ABSA) aims at analyzing the sentiment of a
given aspect in a sentence. Recently, neural network-based methods have
achieved promising results in existing ABSA datasets. However, these datasets
tend to degenerate to sentence-level sentiment analysis because most sentences
contain only one aspect or multiple aspects with the same sentiment polarity.
To facilitate the research of ABSA, NLPCC 2020 Shared Task 2 releases a new
large-scale Multi-Aspect Multi-Sentiment (MAMS) dataset. In the MAMS dataset,
each sentence contains at least two different aspects with different sentiment
polarities, which makes ABSA more complex and challenging. To address this
challenging dataset, we re-formalize ABSA as a multi-aspect sentiment
analysis problem and propose a novel Transformer-based Multi-aspect Modeling
scheme (TMM), which can capture potential relations between multiple aspects
and detect the sentiment of all aspects in a sentence simultaneously.
Experimental results on the MAMS dataset show that our method achieves
noticeable improvements over strong baselines such as BERT and RoBERTa, and
it finally ranked 2nd in the NLPCC 2020 Shared Task 2 evaluation.
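Read literally, the abstract suggests a simple joint formulation: encode the sentence once with every aspect marked in-line so that self-attention can relate the aspects to each other, then classify the sentiment of all aspects in a single forward pass. The PyTorch/Transformers sketch below is a hedged reading of that scheme, not the authors' released code; the [ASP] marker token, the TMMSketch class, and the single shared classification head are illustrative assumptions.

```python
# Minimal sketch of Transformer-based Multi-aspect Modeling (TMM), assuming
# the joint formulation described in the abstract. Marker tokens and head
# names are illustrative, not the authors' exact implementation.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class TMMSketch(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_polarities=3):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # One shared head scores every aspect representation.
        self.classifier = nn.Linear(hidden, num_polarities)

    def forward(self, input_ids, attention_mask, aspect_positions):
        # aspect_positions: (batch, num_aspects) indices of aspect marker tokens.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        states = out.last_hidden_state                             # (B, L, H)
        idx = aspect_positions.unsqueeze(-1).expand(-1, -1, states.size(-1))
        aspect_reps = torch.gather(states, 1, idx)                 # (B, A, H)
        return self.classifier(aspect_reps)                        # (B, A, C)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# "[ASP]" is a hypothetical marker token used to flag each aspect in-line.
tokenizer.add_special_tokens({"additional_special_tokens": ["[ASP]"]})

model = TMMSketch()
model.encoder.resize_token_embeddings(len(tokenizer))

sentence = "The [ASP] food [ASP] was great but the [ASP] service [ASP] was slow."
enc = tokenizer(sentence, return_tensors="pt")
asp_id = tokenizer.convert_tokens_to_ids("[ASP]")
marker_positions = (enc["input_ids"][0] == asp_id).nonzero().squeeze(-1)
# Take the opening marker of each aspect as that aspect's anchor token.
aspect_positions = marker_positions[::2].unsqueeze(0)              # (1, 2)

logits = model(enc["input_ids"], enc["attention_mask"], aspect_positions)
print(logits.shape)  # torch.Size([1, 2, 3]): one polarity score set per aspect
```

Because all aspect anchors come from one shared encoding, their marker positions attend to one another, which is one plausible way to realize the inter-aspect relations the abstract describes.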
Related papers
- Towards Robust Multimodal Sentiment Analysis with Incomplete Data [20.75292807497547]
We present an innovative Language-dominated Noise-resistant Learning Network (LNLN) to achieve robust Multimodal Sentiment Analysis (MSA).
LNLN features a dominant modality correction (DMC) module and a dominant modality based multimodal learning (DMML) module, which enhance the model's robustness across various noise scenarios.
arXiv Detail & Related papers (2024-09-30T07:14:31Z)
- A Framework for Fine-Tuning LLMs using Heterogeneous Feedback [69.51729152929413]
We present a framework for fine-tuning large language models (LLMs) using heterogeneous feedback.
First, we combine the heterogeneous feedback data into a single supervision format, compatible with methods like SFT and RLHF.
Next, given this unified feedback dataset, we extract a high-quality and diverse subset to obtain performance increases.
arXiv Detail & Related papers (2024-08-05T23:20:32Z)
- FaiMA: Feature-aware In-context Learning for Multi-domain Aspect-based Sentiment Analysis [1.606149016749251]
Multi-domain aspect-based sentiment analysis (ABSA) seeks to capture fine-grained sentiment across diverse domains.
We propose a novel framework, Feature-aware In-context Learning for Multi-domain ABSA (FaiMA).
Its feature-aware mechanism facilitates adaptive learning in multi-domain ABSA tasks.
arXiv Detail & Related papers (2024-03-02T02:00:51Z)
- A Novel Energy based Model Mechanism for Multi-modal Aspect-Based Sentiment Analysis [85.77557381023617]
We propose a novel framework called DQPSA for multi-modal sentiment analysis.
The PDQ module uses the prompt as both a visual query and a language query to extract prompt-aware visual information.
The EPE module models the boundary pairing of the analysis target from the perspective of an energy-based model.
arXiv Detail & Related papers (2023-12-13T12:00:46Z)
- Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
The multimodal entity linking (MEL) task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z)
- MEMD-ABSA: A Multi-Element Multi-Domain Dataset for Aspect-Based Sentiment Analysis [23.959356414518957]
We propose a large-scale Multi-Element Multi-Domain dataset (MEMD) that covers the four elements across five domains.
We evaluate generative and non-generative baselines on multiple ABSA subtasks under the open domain setting.
arXiv Detail & Related papers (2023-06-29T14:03:49Z)
- Towards Arabic Multimodal Dataset for Sentiment Analysis [0.0]
We design a pipeline that helps build our Arabic multimodal dataset, leveraging both state-of-the-art transformers and feature extraction tools.
We validate our dataset using a state-of-the-art transformer-based model that handles multimodality.
Despite the small size of the resulting dataset, experiments show that Arabic multimodality is very promising.
arXiv Detail & Related papers (2023-06-10T00:13:09Z)
- GAMUS: A Geometry-aware Multi-modal Semantic Segmentation Benchmark for Remote Sensing Data [27.63411386396492]
This paper introduces a new benchmark dataset for multi-modal semantic segmentation based on RGB-Height (RGB-H) data.
The proposed benchmark consists of 1) a large-scale dataset including co-registered RGB and nDSM pairs and pixel-wise semantic labels; 2) a comprehensive evaluation and analysis of existing multi-modal fusion strategies for both convolutional and Transformer-based networks on remote sensing data.
arXiv Detail & Related papers (2023-05-24T09:03:18Z)
- A Simple Information-Based Approach to Unsupervised Domain-Adaptive Aspect-Based Sentiment Analysis [58.124424775536326]
We propose a simple but effective technique based on mutual information to extract aspect terms (a mutual-information sketch follows this list).
Experiment results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1.
arXiv Detail & Related papers (2022-01-29T10:18:07Z)
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input due to the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z)
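For the information-based domain-adaptive ABSA entry above, the following is a minimal sketch of how mutual information can rank candidate aspect terms; the toy corpus, sentence-level co-occurrence estimate, and ranking rule are illustrative assumptions, not the cited paper's exact method.

```python
# Hedged sketch: rank words by mutual information between their occurrence
# and a domain label, treating high-MI words as candidate aspect terms.
# Toy corpus and labels are illustrative assumptions.
import math
from collections import Counter

# Toy corpus: (sentence tokens, 1 if the sentence is in-domain else 0).
corpus = [
    (["the", "pizza", "was", "great"], 1),
    (["friendly", "staff", "and", "good", "pizza"], 1),
    (["the", "battery", "drains", "fast"], 0),
    (["great", "screen", "but", "weak", "battery"], 0),
]

def mutual_information(word, corpus):
    """Estimate I(word; domain) from sentence-level co-occurrence counts."""
    n = len(corpus)
    joint = Counter()
    for tokens, label in corpus:
        joint[(word in tokens, label)] += 1
    mi = 0.0
    for (has_w, dom), count in joint.items():
        p_joint = count / n
        p_w = sum(c for (hw, _), c in joint.items() if hw == has_w) / n
        p_d = sum(c for (_, d), c in joint.items() if d == dom) / n
        mi += p_joint * math.log(p_joint / (p_w * p_d))
    return mi

vocab = {w for tokens, _ in corpus for w in tokens}
# Words whose presence carries the most information about the domain label
# rank highest and serve as candidate aspect terms.
ranked = sorted(vocab, key=lambda w: mutual_information(w, corpus), reverse=True)
print(ranked[:3])  # e.g. domain-indicative nouns such as "pizza", "battery"
```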