Co-evolving Graph Reasoning Network for Emotion-Cause Pair Extraction
- URL: http://arxiv.org/abs/2306.04340v1
- Date: Wed, 7 Jun 2023 11:11:12 GMT
- Title: Co-evolving Graph Reasoning Network for Emotion-Cause Pair Extraction
- Authors: Bowen Xing and Ivor W. Tsang
- Abstract summary: We propose a new MTL framework based on Co-evolving Reasoning.
We show that our model achieves new state-of-the-art performance.
- Score: 39.76268402567324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion-Cause Pair Extraction (ECPE) aims to extract all emotion clauses and
their corresponding cause clauses from a document. Existing approaches tackle
this task through a multi-task learning (MTL) framework in which the two subtasks
provide indicative clues for ECPE. However, the previous MTL framework
considers only one round of multi-task reasoning and ignores the reverse
feedback from ECPE to the subtasks. Moreover, its multi-task reasoning relies
only on semantics-level interactions, which cannot capture explicit
dependencies, and neither encoder sharing nor the concatenation of multi-task
hidden states can capture the causalities. To solve these issues, we
first put forward a new MTL framework based on Co-evolving Reasoning. It (1)
models the bidirectional feedback between ECPE and its subtasks; (2) allows
the three tasks to evolve together and prompt each other recurrently; (3)
integrates prediction-level interactions to capture explicit dependencies. Then
we propose a novel multi-task relational graph (MRG) to sufficiently exploit
the causal relations. Finally, we propose a Co-evolving Graph Reasoning Network
(CGR-Net) that implements our MTL framework and conducts Co-evolving Reasoning
on MRG. Experimental results show that our model achieves new state-of-the-art
performance, and further analysis confirms the advantages of our method.
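To make the Co-evolving Reasoning idea more concrete, the following is a minimal PyTorch sketch of one reasoning round, written from the abstract alone: the three tasks exchange prediction-level messages over a clause-level relational graph and update their states recurrently. All names (CoEvolvingRound, label_emb, etc.) are hypothetical, the random adjacency matrix stands in for the paper's MRG, and the pair task is kept clause-level for brevity, so this illustrates the framework's shape rather than the authors' implementation.

```python
# A minimal, hypothetical sketch of one Co-evolving Reasoning round.
# Assumptions (not from the paper's code): three tasks -- emotion
# extraction ("emo"), cause extraction ("cau"), and pair extraction
# ("pair") -- each keep per-clause hidden states and label distributions,
# and every round lets each task read the other two tasks' predictions
# propagated over a clause-level relational graph (a stand-in for MRG).
import torch
import torch.nn as nn

TASKS = ("emo", "cau", "pair")


class CoEvolvingRound(nn.Module):
    """One reasoning round with prediction-level interactions."""

    def __init__(self, hidden: int, n_labels: int = 2):
        super().__init__()
        # Embed the other tasks' soft label distributions.
        self.label_emb = nn.ModuleDict(
            {t: nn.Linear(n_labels, hidden, bias=False) for t in TASKS})
        # Recurrent update so the tasks can evolve over multiple rounds.
        self.update = nn.ModuleDict(
            {t: nn.GRUCell(hidden, hidden) for t in TASKS})
        self.classify = nn.ModuleDict(
            {t: nn.Linear(hidden, n_labels) for t in TASKS})

    def forward(self, states, probs, adj):
        # states[t]: (n_clauses, hidden); probs[t]: (n_clauses, n_labels)
        # adj: (n_clauses, n_clauses) row-normalized relational graph.
        new_states, new_probs = {}, {}
        for t in TASKS:
            # Prediction-level message from the other two tasks,
            # propagated along the clause graph (graph reasoning).
            msg = sum(self.label_emb[o](probs[o]) for o in TASKS if o != t)
            msg = adj @ msg
            new_states[t] = self.update[t](msg, states[t])
            new_probs[t] = self.classify[t](new_states[t]).softmax(-1)
        return new_states, new_probs


# Toy usage: 5 clauses, 3 co-evolving rounds.
n, h = 5, 64
layer = CoEvolvingRound(h)
states = {t: torch.randn(n, h) for t in TASKS}
probs = {t: torch.full((n, 2), 0.5) for t in TASKS}
adj = torch.softmax(torch.randn(n, n), dim=-1)  # stand-in for MRG edges
for _ in range(3):
    states, probs = layer(states, probs, adj)
print(probs["pair"].shape)  # torch.Size([5, 2])
```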
Related papers
- Iteration of Thought: Leveraging Inner Dialogue for Autonomous Large Language Model Reasoning [0.0]
Iterative human engagement is a common and effective means of leveraging the advanced language processing power of large language models (LLMs).
We propose the Iteration of Thought (IoT) framework for enhancing LLM responses by generating "thought"-provoking prompts.
Unlike static or semi-static approaches, IoT adapts its reasoning path dynamically, based on evolving context.
arXiv Detail & Related papers (2024-09-19T09:44:17Z)
- Cantor: Inspiring Multimodal Chain-of-Thought of MLLM [83.6663322930814]
We argue that converging visual context acquisition and logical reasoning is pivotal for tackling visual reasoning tasks.
We propose an innovative multimodal CoT framework, termed Cantor, characterized by a perception-decision architecture.
Our experiments demonstrate the efficacy of the proposed framework, showing significant improvements in multimodal CoT performance.
arXiv Detail & Related papers (2024-04-24T17:59:48Z)
- A Novel Energy based Model Mechanism for Multi-modal Aspect-Based Sentiment Analysis [85.77557381023617]
We propose a novel framework called DQPSA for multi-modal sentiment analysis.
PDQ module uses the prompt as both a visual query and a language query to extract prompt-aware visual information.
EPE module models the boundaries pairing of the analysis target from the perspective of an Energy-based Model.
arXiv Detail & Related papers (2023-12-13T12:00:46Z)
- Co-guiding for Multi-intent Spoken Language Understanding [53.30511968323911]
We propose a novel model termed Co-guiding Net, which implements a two-stage framework achieving the mutual guidances between the two tasks.
For the first stage, we propose single-task supervised contrastive learning, and for the second stage, we propose co-guiding supervised contrastive learning.
Experimental results on multi-intent SLU show that our model outperforms existing models by a large margin.
arXiv Detail & Related papers (2023-11-22T08:06:22Z)
- Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
The multimodal entity linking (MEL) task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z)
- Joint Alignment of Multi-Task Feature and Label Spaces for Emotion Cause Pair Extraction [36.123715709125015]
Emotion cause pair extraction (ECPE) is one of the derived subtasks of emotion cause analysis (ECA).
ECPE shares rich inter-related features with emotion extraction (EE) and cause extraction (CE).
arXiv Detail & Related papers (2022-09-09T04:06:27Z)
- Modeling Task Interactions in Document-Level Joint Entity and Relation Extraction [20.548299226366193]
Graph Compatibility (GC) is designed to leverage task characteristics, bridging decisions of two tasks for direct task interference.
GC achieves the best performance, with up to 2.3/5.1 F1 improvement over the baseline.
arXiv Detail & Related papers (2022-05-04T06:18:28Z)
- Bidirectional Machine Reading Comprehension for Aspect Sentiment Triplet Extraction [8.208671244754317]
Aspect sentiment triplet extraction (ASTE) is an emerging task in fine-grained opinion mining.
We transform ASTE task into a multi-turn machine reading comprehension (MTMRC) task.
We propose a bidirectional MRC (BMRC) framework to address this challenge.
arXiv Detail & Related papers (2021-03-13T09:30:47Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.