Emotion-cause pair extraction method based on multi-granularity information and multi-module interaction
- URL: http://arxiv.org/abs/2404.06812v1
- Date: Wed, 10 Apr 2024 08:00:26 GMT
- Title: Emotion-cause pair extraction method based on multi-granularity information and multi-module interaction
- Authors: Mingrui Fu, Weijiang Li
- Abstract summary: Emotion-cause pair extraction aims to extract pairs of emotion clauses and their corresponding cause clauses from documents.
Existing models do not adequately handle the imbalanced positional distribution of emotion and cause clauses in the data.
We propose MM-ECPE, an end-to-end multi-task model built on shared interaction among GRU, knowledge-graph, and Transformer modules.
- Score: 0.6577148087211809
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion-cause pair extraction (ECPE) aims to extract pairs of emotion clauses and their corresponding cause clauses from a document. Existing methods suffer from three problems. First, they do not fully exploit the relationship between ECPE and its two auxiliary tasks, emotion extraction and cause extraction. Second, two-stage models suffer from error propagation. Third, existing models do not adequately handle the imbalanced positional distribution of emotion and cause clauses in the data. To address these problems, we propose MM-ECPE, an end-to-end multi-task model built on shared interaction among GRU, knowledge-graph, and Transformer modules. Building on MM-ECPE, and to let the encoder better handle the imbalanced distribution of distances between cause clauses and emotion clauses, we further propose MM-ECPE(BERT), whose encoding layer combines BERT, a sentiment lexicon, and a position-aware interaction module. The model first fully captures the interaction among the tasks through a multi-level sharing module, mining the information shared between emotion-cause pair extraction and the emotion- and cause-extraction subtasks. Second, to mitigate the imbalanced distribution of emotion and cause clauses, suitable labels are screened out according to knowledge-graph path length and task-specific features are constructed, so that the model focuses on extracting pairs with genuine emotion-cause relationships. Experimental results on the ECPE benchmark dataset show that the proposed model performs well, especially on position-imbalanced samples.
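To make the pairing step concrete, here is a minimal sketch of position-aware pair scoring in the spirit the abstract describes: clause vectors pass through a BiGRU, every candidate (emotion, cause) clause pair is scored, and an embedding of the clipped relative distance between the two clauses is injected into the pair representation. All module names, dimensions, and the clipping scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, hidden: int = 256, max_rel_pos: int = 10):
        super().__init__()
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        # Embedding of the clause distance, clipped to [-max_rel_pos, max_rel_pos].
        self.rel_pos = nn.Embedding(2 * max_rel_pos + 1, hidden)
        self.max_rel_pos = max_rel_pos
        self.classifier = nn.Sequential(
            nn.Linear(5 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, clause_reprs):
        # clause_reprs: (B, N, hidden), e.g. pooled BERT vectors per clause.
        h, _ = self.gru(clause_reprs)                     # (B, N, 2*hidden)
        B, N, D = h.shape
        emo = h.unsqueeze(2).expand(B, N, N, D)           # candidate emotion clause i
        cau = h.unsqueeze(1).expand(B, N, N, D)           # candidate cause clause j
        idx = torch.arange(N)
        rel = (idx.view(N, 1) - idx.view(1, N)).clamp(
            -self.max_rel_pos, self.max_rel_pos) + self.max_rel_pos
        pos = self.rel_pos(rel).unsqueeze(0).expand(B, N, N, -1)
        logits = self.classifier(torch.cat([emo, cau, pos], dim=-1))
        return logits.squeeze(-1)                         # (B, N, N) pair scores

print(PairScorer()(torch.randn(2, 12, 256)).shape)  # torch.Size([2, 12, 12])
```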
Related papers
- A Novel Energy based Model Mechanism for Multi-modal Aspect-Based Sentiment Analysis [85.77557381023617]
We propose a novel framework called DQPSA for multi-modal sentiment analysis.
The PDQ module uses the prompt as both a visual query and a language query to extract prompt-aware visual information.
The EPE module models the boundary pairing of the analysis target from the perspective of an energy-based model.
arXiv Detail & Related papers (2023-12-13T12:00:46Z)
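As a rough illustration of the PDQ idea above, the sketch below uses a prompt embedding as the query in cross-attention over image patch features; the shapes and dimensions are assumptions, not DQPSA's actual code.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
prompt = torch.randn(2, 5, 256)    # (batch, prompt tokens, dim) language prompt
patches = torch.randn(2, 49, 256)  # (batch, image patches, dim) visual features

# Prompt tokens act as queries; patch features supply keys and values, so the
# output is prompt-aware visual information, one vector per prompt token.
prompt_aware_visual, _ = attn(query=prompt, key=patches, value=patches)
print(prompt_aware_visual.shape)   # torch.Size([2, 5, 256])
```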
- Single-Stage Visual Relationship Learning using Conditional Queries [60.90880759475021]
TraCQ is a new formulation for scene graph generation that avoids the multi-task learning problem and the combinatorial entity pair distribution.
We employ a DETR-based encoder-decoder design with conditional queries to significantly reduce the entity label space.
Experimental results show that TraCQ not only outperforms existing single-stage scene graph generation methods, it also beats many state-of-the-art two-stage methods on the Visual Genome dataset.
arXiv Detail & Related papers (2023-06-09T06:02:01Z)
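A minimal sketch of the DETR-style decoding this summary refers to: a set of learned queries cross-attends to encoder features, and lightweight heads read predictions off each query. The head names and sizes are placeholders, not TraCQ's real configuration.

```python
import torch
import torch.nn as nn

d, n_queries = 256, 100
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=3)
queries = nn.Parameter(torch.randn(n_queries, d))   # learned relationship queries

memory = torch.randn(2, 49, d)                      # encoder image features
out = decoder(queries.unsqueeze(0).expand(2, -1, -1), memory)

predicate_head = nn.Linear(d, 51)   # e.g. 50 predicate classes + no-relation
subj_box_head = nn.Linear(d, 4)     # subject box regressed per query
print(predicate_head(out).shape, subj_box_head(out).shape)
```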
- Object Segmentation by Mining Cross-Modal Semantics [68.88086621181628]
We propose a novel approach that mines cross-modal semantics to guide the fusion and decoding of multimodal features.
Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision.
arXiv Detail & Related papers (2023-05-17T14:30:11Z)
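The summary does not spell out the AF module, but a common form of attentive fusion is a learned per-pixel gate that blends two modality feature maps; the sketch below shows that generic pattern under assumed shapes, not XMSNet's actual design.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict a per-pixel gate from the concatenated modalities.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid())

    def forward(self, rgb, depth):
        g = self.gate(torch.cat([rgb, depth], dim=1))  # (B, C, H, W) in [0, 1]
        return g * rgb + (1 - g) * depth               # convex per-pixel blend

fused = AttentiveFusion()(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```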
- Pair-Based Joint Encoding with Relational Graph Convolutional Networks for Emotion-Cause Pair Extraction [25.101027960035147]
Existing methods encode features sequentially in a specified order: they first encode the emotion and cause features for clause extraction and then combine them for pair extraction.
This leads to an imbalance in inter-task feature interaction, where features extracted later have no direct contact with the earlier ones.
We propose a novel Pair-Based Joint Network (PBN), which generates pairs and clauses simultaneously in a joint-feature manner to model the causal clauses.
Experiments show PBN achieves state-of-the-art performance on the Chinese benchmark corpus.
arXiv Detail & Related papers (2022-12-04T15:24:14Z)
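To contrast with the sequential encoding criticized above, here is a minimal sketch of joint encoding: clause-level emotion and cause logits and pair scores are all read off the same shared features in one pass, so no task waits on another's output. The heads and shapes are illustrative, not PBN's actual architecture.

```python
import torch
import torch.nn as nn

hidden = 256
shared = torch.randn(2, 10, hidden)          # shared clause features (B, N, H)

emotion_head = nn.Linear(hidden, 2)          # per-clause emotion logits
cause_head = nn.Linear(hidden, 2)            # per-clause cause logits
pair_head = nn.Bilinear(hidden, hidden, 1)   # pair score for clause i vs clause j

B, N, H = shared.shape
ei = shared.unsqueeze(2).expand(B, N, N, H).reshape(-1, H)
cj = shared.unsqueeze(1).expand(B, N, N, H).reshape(-1, H)
pair_logits = pair_head(ei, cj).view(B, N, N)

# All three outputs come from the same features, so no task depends on
# another task's earlier predictions.
print(emotion_head(shared).shape, cause_head(shared).shape, pair_logits.shape)
```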
- Joint Alignment of Multi-Task Feature and Label Spaces for Emotion Cause Pair Extraction [36.123715709125015]
Emotion cause pair extraction (ECPE) is one of the derived subtasks of emotion cause analysis (ECA).
ECPE shares rich inter-related features with emotion extraction (EE) and cause extraction (CE).
arXiv Detail & Related papers (2022-09-09T04:06:27Z)
- Multi-Granularity Semantic Aware Graph Model for Reducing Position Bias in Emotion-Cause Pair Extraction [23.93696773727978]
The Emotion-Cause Pair Extraction (ECPE) task aims to extract emotions and causes as pairs from documents.
Existing methods set a fixed-size window to capture relations between neighboring clauses.
We propose a novel Multi-Granularity Semantic Aware Graph model (MGSAG) that jointly incorporates fine-grained and coarse-grained semantic features.
arXiv Detail & Related papers (2022-05-04T15:39:46Z)
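One plausible reading of combining fine-grained and coarse-grained semantics is sketched below: coarse clause vectors attend over fine-grained token vectors, and the two views are fused. This is an assumption for illustration, not the MGSAG architecture.

```python
import torch
import torch.nn as nn

d = 128
tokens = torch.randn(2, 40, d)     # fine-grained token features
clause = torch.randn(2, 6, d)      # coarse-grained clause features

attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
fine_context, _ = attn(query=clause, key=tokens, value=tokens)  # (B, 6, d)

fuse = nn.Linear(2 * d, d)
multi_granularity = fuse(torch.cat([clause, fine_context], dim=-1))
print(multi_granularity.shape)     # torch.Size([2, 6, 128])
```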
- Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis [96.46952672172021]
Bi-Bimodal Fusion Network (BBFN) is a novel end-to-end network that performs fusion on pairwise modality representations.
The model takes two bimodal pairs as input because of the known information imbalance among modalities.
arXiv Detail & Related papers (2021-07-28T23:33:42Z)
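A minimal sketch of the two-bimodal-pair input: text is paired once with audio and once with vision, each pair is fused, and a head reads sentiment from both pair summaries. The fusion operators here are simplified stand-ins for BBFN's modules.

```python
import torch
import torch.nn as nn

d = 128
text, audio, vision = (torch.randn(2, d) for _ in range(3))

fuse_ta = nn.Linear(2 * d, d)   # text-acoustic pair
fuse_tv = nn.Linear(2 * d, d)   # text-visual pair
head = nn.Linear(2 * d, 1)      # sentiment score from both pair summaries

ta = torch.relu(fuse_ta(torch.cat([text, audio], dim=-1)))
tv = torch.relu(fuse_tv(torch.cat([text, vision], dim=-1)))
sentiment = head(torch.cat([ta, tv], dim=-1))
print(sentiment.shape)  # torch.Size([2, 1])
```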
- A Dual-Questioning Attention Network for Emotion-Cause Pair Extraction with Context Awareness [3.5630018935736576]
We propose a Dual-Questioning Attention Network for emotion-cause pair extraction.
Specifically, we query candidate emotions and causes against the context independently through attention networks to obtain contextual and semantic answers.
Empirical results show that our method performs better than baselines in terms of multiple evaluation metrics.
arXiv Detail & Related papers (2021-04-15T03:47:04Z)
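The dual-questioning step might look like the following sketch, where a candidate emotion clause and a candidate cause clause each query the document context through their own attention network; the shapes and heads are assumed for illustration.

```python
import torch
import torch.nn as nn

d = 256
context = torch.randn(2, 12, d)        # all clause representations in the document
cand_emotion = torch.randn(2, 1, d)    # one candidate emotion clause
cand_cause = torch.randn(2, 1, d)      # one candidate cause clause

# Each candidate "questions" the context through its own attention network.
ask_emotion = nn.MultiheadAttention(d, 4, batch_first=True)
ask_cause = nn.MultiheadAttention(d, 4, batch_first=True)
emo_answer, _ = ask_emotion(cand_emotion, context, context)
cau_answer, _ = ask_cause(cand_cause, context, context)

pair_clf = nn.Linear(2 * d, 2)         # classify whether the pair holds
logits = pair_clf(torch.cat([emo_answer, cau_answer], dim=-1).squeeze(1))
print(logits.shape)  # torch.Size([2, 2])
```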
- A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously.
arXiv Detail & Related papers (2020-10-08T10:16:52Z)
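A minimal sketch of the cross-impact idea: intent and slot representations attend to each other in both directions within the same layer. This simplifies away the paper's full Transformer block and uses assumed shapes.

```python
import torch
import torch.nn as nn

d = 256
slot_h = torch.randn(2, 20, d)    # token-level features for slot filling
intent_h = torch.randn(2, 1, d)   # utterance-level feature for intent detection

slot_to_intent = nn.MultiheadAttention(d, 4, batch_first=True)
intent_to_slot = nn.MultiheadAttention(d, 4, batch_first=True)

new_intent, _ = slot_to_intent(intent_h, slot_h, slot_h)  # intent informed by slots
new_slot, _ = intent_to_slot(slot_h, intent_h, intent_h)  # slots informed by intent
print(new_intent.shape, new_slot.shape)
```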
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
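Emotion embeddings enable zero-shot recognition roughly as sketched below: an utterance vector is scored against embeddings of emotion words, so an unseen emotion only requires a new word vector. The toy random vectors stand in for the paper's trained embeddings.

```python
import torch
import torch.nn.functional as F

d = 300
utterance = torch.randn(2, d)                  # fused multimodal utterance vector
emotion_words = {"happy": torch.randn(d), "angry": torch.randn(d),
                 "surprised": torch.randn(d)}  # "surprised" could be unseen in training

names = list(emotion_words)
E = torch.stack([emotion_words[n] for n in names])  # (num_emotions, d)
scores = F.cosine_similarity(utterance.unsqueeze(1), E.unsqueeze(0), dim=-1)
print({n: scores[0, i].item() for i, n in enumerate(names)})
```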
- End-to-end Emotion-Cause Pair Extraction via Learning to Link [18.741585103275334]
Emotion-cause pair extraction (ECPE) aims at jointly investigating emotions and their underlying causes in documents.
Existing approaches to ECPE generally adopt a two-stage method, i.e., (1) emotion and cause detection, and then (2) pairing the detected emotions and causes.
We propose a multi-task learning model that can extract emotions, causes and emotion-cause pairs simultaneously in an end-to-end manner.
arXiv Detail & Related papers (2020-02-25T07:49:12Z)
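A generic reading of "learning to link" is sketched below: one pass over the encoded clauses yields emotion logits, cause logits, and a link score for every (emotion, cause) clause pair, with no separate pairing stage. The dot-product link function is an assumption, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

d = 256
clauses = torch.randn(2, 8, d)     # encoded clauses (B, N, d)

emotion_head = nn.Linear(d, 2)
cause_head = nn.Linear(d, 2)
link_q, link_k = nn.Linear(d, d), nn.Linear(d, d)

# Dot-product link scores: entry (i, j) scores clause i's emotion linking
# to clause j as its cause.
link_scores = link_q(clauses) @ link_k(clauses).transpose(1, 2)   # (B, N, N)
print(emotion_head(clauses).shape, cause_head(clauses).shape, link_scores.shape)
```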