GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition
- URL: http://arxiv.org/abs/2207.12261v4
- Date: Wed, 22 Nov 2023 16:59:55 GMT
- Title: GraphCFC: A Directed Graph Based Cross-Modal Feature Complementation Approach for Multimodal Conversational Emotion Recognition
- Authors: Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng
- Abstract summary: Emotion Recognition in Conversation (ERC) plays a significant part in Human-Computer Interaction (HCI) systems since it can provide empathetic services.
In multimodal ERC, Graph Neural Networks (GNNs) are capable of extracting both long-distance contextual information and inter-modal interactive information.
We present a directed Graph based Cross-modal Feature Complementation (GraphCFC) module that can efficiently model contextual and interactive information.
- Score: 37.12407597998884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion Recognition in Conversation (ERC) plays a significant part in
Human-Computer Interaction (HCI) systems since it can provide empathetic
services. Multimodal ERC can mitigate the drawbacks of uni-modal approaches.
Recently, Graph Neural Networks (GNNs) have been widely used in a variety of
fields due to their superior performance in relation modeling. In multimodal
ERC, GNNs are capable of extracting both long-distance contextual information
and inter-modal interactive information. Unfortunately, since existing methods
such as MMGCN directly fuse multiple modalities, redundant information may be
generated and diverse information may be lost. In this work, we present a
directed Graph based Cross-modal Feature Complementation (GraphCFC) module that
can efficiently model contextual and interactive information. GraphCFC
alleviates the heterogeneity gap in multimodal fusion by utilizing multiple
subspace extractors and a Pair-wise Cross-modal Complementary (PairCC)
strategy. We extract various types of edges from the constructed graph for
encoding, thus enabling GNNs to extract crucial contextual and interactive
information more accurately when performing message passing. Furthermore, we
design a GNN structure called GAT-MLP, which can provide a new unified network
framework for multimodal learning. The experimental results on two benchmark
datasets show that our GraphCFC outperforms the state-of-the-art (SOTA)
approaches.
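
The abstract introduces the GAT-MLP structure but gives no equations, so the following is a minimal sketch, assuming a Transformer-style layout in which masked graph attention is followed by a feed-forward MLP, each with a residual connection and LayerNorm. The class name, dimensions, and layer ordering are illustrative assumptions, not the authors' specification.

```python
import torch
import torch.nn as nn

class GATMLPBlock(nn.Module):
    """Hypothetical GAT-MLP block: graph attention, then a feed-forward
    MLP, each wrapped with a residual connection and LayerNorm."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) 0/1 mask, adj[i, j] = 1
        # if node i may attend to node j along a directed edge.
        scores = self.q(x) @ self.k(x).T / x.size(-1) ** 0.5   # (N, N)
        scores = scores.masked_fill(adj == 0, float("-inf"))   # edges only
        attn = torch.softmax(scores, dim=-1)                   # row-normalise
        x = self.norm1(x + attn @ self.v(x))                   # attention + residual
        return self.norm2(x + self.mlp(x))                     # MLP + residual

# Toy usage: 5 utterance nodes, fully connected graph with self-loops.
x = torch.randn(5, 64)
adj = torch.ones(5, 5)
print(GATMLPBlock(64)(x, adj).shape)  # torch.Size([5, 64])
```

Restricting the attention mask per edge type would be the natural place to plug in the contextual and cross-modal edges that GraphCFC extracts from the directed conversation graph.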
Related papers
- Masked Graph Learning with Recurrent Alignment for Multimodal Emotion Recognition in Conversation [12.455034591553506]
Multimodal Emotion Recognition in Conversation (MERC) can be applied to public opinion monitoring, intelligent dialogue robots, and other fields.
Previous work overlooked the inter-modal alignment process and intra-modal noise before multimodal fusion.
We develop a novel approach called Masked Graph Learning with Recurrent Alignment (MGLRA) to tackle this problem.
arXiv Detail & Related papers (2024-07-23T02:23:51Z)
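
The MGLRA entry above combines feature masking with a recurrent alignment step before fusion. This toy PyTorch sketch shows one plausible reading, in which a GRU cell iteratively refines one modality using cross-modal attention context, with random feature masking standing in for noise suppression; all names and the masking scheme are assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class RecurrentAligner(nn.Module):
    """Toy recurrent alignment: modality a is refined for a few steps by a
    GRU cell that reads cross-modal attention context from modality b."""

    def __init__(self, dim: int, steps: int = 3, mask_p: float = 0.1):
        super().__init__()
        self.gru = nn.GRUCell(dim, dim)
        self.steps = steps
        self.mask_p = mask_p

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a, b: (N, dim) per-utterance features from two modalities.
        h = a
        for _ in range(self.steps):
            if self.training:  # randomly mask features to suppress noise
                b = b * (torch.rand_like(b) > self.mask_p)
            attn = torch.softmax(h @ b.T / h.size(-1) ** 0.5, dim=-1)
            h = self.gru(attn @ b, h)  # update h with cross-modal context
        return h

a, b = torch.randn(6, 32), torch.randn(6, 32)
print(RecurrentAligner(32)(a, b).shape)  # torch.Size([6, 32])
```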
- Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum [13.81570624162769]
We propose a Graph-Spectrum-based Multimodal Consistency and Complementary collaborative learning framework, GS-MCC.
First, GS-MCC uses a sliding window to construct a multimodal interaction graph to model conversational relationships.
Then, GS-MCC uses contrastive learning to construct self-supervised signals that reflect complementarity and consistent semantic collaboration.
arXiv Detail & Related papers (2024-04-27T10:47:07Z)
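
GS-MCC is said to build its multimodal interaction graph with a sliding window. The helper below sketches one plausible construction, connecting every modality node of an utterance to all modality nodes of the utterances within the window; the node indexing and window semantics are assumptions.

```python
import torch

def sliding_window_edges(num_utt: int, num_mod: int, window: int = 2):
    """Toy sliding-window multimodal interaction graph: node (u, m) is
    utterance u in modality m, flattened to index u * num_mod + m.
    Each node connects to every modality node of the utterances within
    `window` positions of it (including itself)."""
    src, dst = [], []
    for u in range(num_utt):
        lo, hi = max(0, u - window), min(num_utt - 1, u + window)
        for m in range(num_mod):
            for v in range(lo, hi + 1):
                for n in range(num_mod):
                    src.append(u * num_mod + m)
                    dst.append(v * num_mod + n)
    return torch.tensor([src, dst])  # (2, E) edge index

edges = sliding_window_edges(num_utt=4, num_mod=3, window=1)
print(edges.shape)  # torch.Size([2, 90])
```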
- Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling [96.75821232222201]
Existing research on multimodal relation extraction (MRE) faces two co-existing challenges: internal-information over-utilization and external-information under-exploitation.
We propose a novel framework that simultaneously implements the idea of internal-information screening and external-information exploiting.
arXiv Detail & Related papers (2023-05-19T14:56:57Z)
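
The screening idea above (attenuating over-used internal information) could be realized in many ways. One hedged toy version is a learned sigmoid gate that scales each visual region feature by its estimated relevance to the text before fusion; this is purely illustrative, not the paper's framework.

```python
import torch
import torch.nn as nn

class ScreeningGate(nn.Module):
    """Toy internal-information screening: a learned sigmoid gate scales
    each visual feature by its estimated relevance to the text, so weakly
    relevant (noisy) features are attenuated before fusion."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, text: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        # text: (dim,) sentence vector; vis: (R, dim) region features.
        paired = torch.cat(
            [vis, text.unsqueeze(0).expand(vis.size(0), -1)], dim=-1)
        gate = torch.sigmoid(self.score(paired))  # (R, 1) relevance in (0, 1)
        return vis * gate                         # screened region features

print(ScreeningGate(16)(torch.randn(16), torch.randn(5, 16)).shape)  # (5, 16)
```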
- Multi-view Graph Convolutional Networks with Differentiable Node Selection [29.575611350389444]
We propose a framework dubbed Multi-view Graph Convolutional Network with Differentiable Node Selection (MGCN-DNS).
MGCN-DNS accepts multi-channel graph-structural data as inputs and aims to learn more robust graph fusion through a differentiable neural network.
The effectiveness of the proposed method is verified by rigorous comparisons with a range of state-of-the-art approaches.
arXiv Detail & Related papers (2022-12-09T21:48:36Z)
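
The abstract does not specify how node selection is made differentiable. A common trick, shown here as a hedged sketch, is Top-K-style pooling: nodes are scored by a learnable projection and the surviving features are gated by tanh of that score, so gradients flow through the selection.

```python
import torch
import torch.nn as nn

class SoftNodeSelection(nn.Module):
    """Toy differentiable node selection in the style of Top-K graph
    pooling: score each node, keep the k best, and gate their features
    by tanh(score) so the selection itself receives gradients."""

    def __init__(self, dim: int, k: int):
        super().__init__()
        self.p = nn.Parameter(torch.randn(dim))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features -> (k, dim) selected, gated features.
        score = x @ self.p / self.p.norm()        # (N,) projection score
        topk = torch.topk(score, self.k).indices
        return x[topk] * torch.tanh(score[topk]).unsqueeze(-1)

print(SoftNodeSelection(8, k=3)(torch.randn(10, 8)).shape)  # (3, 8)
```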
- Learnable Graph Convolutional Network and Feature Fusion for Multi-view Learning [30.74535386745822]
This paper proposes a joint deep learning framework called Learnable Graph Convolutional Network and Feature Fusion (LGCN-FF).
It consists of two stages: a feature fusion network and a learnable graph convolutional network.
The proposed LGCN-FF is shown to outperform various state-of-the-art methods in multi-view semi-supervised classification.
arXiv Detail & Related papers (2022-11-16T19:07:12Z)
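
A minimal sketch of the two-stage idea, assuming the first stage fuses concatenated views with an MLP and the second runs a GCN layer over a learnable convex combination of per-view adjacency matrices; the exact fusion form is an assumption.

```python
import torch
import torch.nn as nn

class LearnableGCNFusion(nn.Module):
    """Toy two-stage pipeline in the spirit of LGCN-FF: fuse concatenated
    views with an MLP, then apply one GCN layer over a learnable convex
    combination of the per-view adjacency matrices."""

    def __init__(self, dims, out_dim: int, num_views: int):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(sum(dims), out_dim), nn.ReLU())
        self.gcn = nn.Linear(out_dim, out_dim)
        self.view_logits = nn.Parameter(torch.zeros(num_views))

    def forward(self, views, adjs):
        # views: list of (N, d_v); adjs: (V, N, N) normalised adjacencies.
        x = self.fuse(torch.cat(views, dim=-1))        # stage 1: fused features
        w = torch.softmax(self.view_logits, dim=0)     # learned view weights
        adj = (w.view(-1, 1, 1) * adjs).sum(0)         # fused graph
        return torch.relu(self.gcn(adj @ x))           # stage 2: one GCN layer

views = [torch.randn(7, 5), torch.randn(7, 3)]
adjs = torch.rand(2, 7, 7)
print(LearnableGCNFusion([5, 3], 16, 2)(views, adjs).shape)  # (7, 16)
```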
- Multi-agent Communication with Graph Information Bottleneck under Limited Bandwidth (a position paper) [92.11330289225981]
In many real-world scenarios, communication can be expensive and the bandwidth of the multi-agent system is subject to certain constraints.
Redundant messages that occupy communication resources can block the transmission of informative messages and thus degrade performance.
We propose a novel multi-agent communication module, CommGIB, which effectively compresses the structure information and node information in the communication graph to deal with bandwidth-constrained settings.
arXiv Detail & Related papers (2021-12-20T07:53:44Z)
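
The graph-information-bottleneck idea can be illustrated with a generic variational bottleneck on each agent's outgoing message: sample the message from a learned Gaussian and penalize its KL divergence to a standard normal, which bounds the message's information content. This is a standard IB sketch, not CommGIB itself.

```python
import torch
import torch.nn as nn

class BottleneckedMessage(nn.Module):
    """Toy information-bottleneck message head: each agent's message is
    sampled from a learned Gaussian, and a KL term to N(0, I) penalises
    bits, encouraging compressed messages under tight bandwidth."""

    def __init__(self, obs_dim: int, msg_dim: int):
        super().__init__()
        self.mu = nn.Linear(obs_dim, msg_dim)
        self.logvar = nn.Linear(obs_dim, msg_dim)

    def forward(self, obs: torch.Tensor):
        mu, logvar = self.mu(obs), self.logvar(obs)
        msg = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return msg, kl  # add beta * kl to the loss to limit message bits

msg, kl = BottleneckedMessage(12, 4)(torch.randn(3, 12))
print(msg.shape, float(kl) >= 0)  # torch.Size([3, 4]) True
```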
- Soft Hierarchical Graph Recurrent Networks for Many-Agent Partially Observable Environments [9.067091068256747]
We propose a novel network structure called the hierarchical graph recurrent network (HGRN) for multi-agent cooperation under partial observability.
Based on these techniques, we propose a value-based MADRL algorithm called Soft-HGRN and its actor-critic variant named SAC-HGRN.
arXiv Detail & Related papers (2021-09-05T09:51:25Z)
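
A hedged sketch of a graph recurrent step under partial observability: each agent maintains a GRU memory updated from its own observation plus an aggregate of its neighbours' previous hidden states. The hierarchy and attention details of HGRN are omitted.

```python
import torch
import torch.nn as nn

class GraphRecurrentCell(nn.Module):
    """Toy graph recurrent step: each agent keeps a GRU memory and updates
    it from its own observation plus the mean of its neighbours' previous
    hidden states."""

    def __init__(self, obs_dim: int, hid_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim + hid_dim, hid_dim)

    def forward(self, obs, h, adj):
        # obs: (A, obs_dim); h: (A, hid_dim); adj: (A, A) row-normalised.
        neigh = adj @ h                                   # neighbour memory
        return self.cell(torch.cat([obs, neigh], dim=-1), h)

A = 4
adj = torch.ones(A, A) / A                                # toy topology
h = torch.zeros(A, 16)
h = GraphRecurrentCell(8, 16)(torch.randn(A, 8), h, adj)
print(h.shape)  # torch.Size([4, 16])
```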
- Encoder Fusion Network with Co-Attention Embedding for Referring Image Segmentation [87.01669173673288]
We propose an encoder fusion network (EFN), which transforms the visual encoder into a multi-modal feature learning network.
A co-attention mechanism is embedded in the EFN to realize the parallel update of multi-modal features.
The experimental results on four benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance without any post-processing.
arXiv Detail & Related papers (2021-05-05T02:27:25Z)
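
Co-attention with a parallel update of both modalities can be shown in a few lines: a single affinity matrix lets the visual tokens read the language tokens and vice versa in the same step. The learned projections that a real encoder fusion network would include are omitted for brevity.

```python
import torch

def co_attention(a: torch.Tensor, b: torch.Tensor):
    """Toy co-attention: one affinity matrix drives a parallel update of
    both modalities, each attending over the other."""
    # a: (Na, d) e.g. visual tokens; b: (Nb, d) e.g. language tokens.
    affinity = a @ b.T / a.size(-1) ** 0.5        # (Na, Nb)
    a_new = torch.softmax(affinity, dim=1) @ b    # visual reads language
    b_new = torch.softmax(affinity.T, dim=1) @ a  # language reads visual
    return a_new, b_new

v, t = torch.randn(10, 32), torch.randn(6, 32)
v2, t2 = co_attention(v, t)
print(v2.shape, t2.shape)  # torch.Size([10, 32]) torch.Size([6, 32])
```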
- Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with an Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing an attention mechanism to characterize the importance among spatially neighboring regions, the most relevant information can be adaptively incorporated to make decisions.
Our MGCN-AGL encodes long-range dependencies among image regions based on the expressive representations produced at the local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z)
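
The "automatic graph learning" ingredient can be sketched as an attention scorer that produces edge weights between spatially neighbouring regions on the fly rather than fixing the adjacency in advance; the scoring form below is an assumption.

```python
import torch
import torch.nn as nn

class AutoGraphLayer(nn.Module):
    """Toy automatic graph learning: edge weights between spatially
    neighbouring regions come from a small attention scorer, then drive
    one round of feature aggregation."""

    def __init__(self, dim: int):
        super().__init__()
        self.att = nn.Linear(2 * dim, 1)

    def forward(self, x: torch.Tensor, neigh_mask: torch.Tensor):
        # x: (R, dim) region features; neigh_mask: (R, R) spatial adjacency.
        R = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(R, R, -1),
                           x.unsqueeze(0).expand(R, R, -1)], dim=-1)
        logits = self.att(pairs).squeeze(-1)                # (R, R) scores
        logits = logits.masked_fill(neigh_mask == 0, float("-inf"))
        return torch.softmax(logits, dim=-1) @ x            # learned aggregation

x = torch.randn(6, 8)
mask = torch.eye(6) + torch.diag(torch.ones(5), 1) + torch.diag(torch.ones(5), -1)
print(AutoGraphLayer(8)(x, mask).shape)  # torch.Size([6, 8])
```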
- Jointly Cross- and Self-Modal Graph Attention Network for Query-Based Moment Localization [77.21951145754065]
We propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as a process of iterative message passing over a joint graph.
Our CSMGAN is able to effectively capture high-order interactions between the two modalities, thus enabling more precise localization.
arXiv Detail & Related papers (2020-08-04T08:25:24Z)
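
A toy rendering of iterative cross- and self-modal message passing over a joint graph: each round first exchanges messages across the two modalities, then within each one. The unprojected attention and the number of rounds are simplifications, not CSMGAN's exact scheme.

```python
import torch

def attn(q: torch.Tensor, kv: torch.Tensor) -> torch.Tensor:
    """Plain scaled dot-product attention (no learned projections)."""
    return torch.softmax(q @ kv.T / q.size(-1) ** 0.5, dim=-1) @ kv

def csm_passing(video: torch.Tensor, query: torch.Tensor, rounds: int = 2):
    """Toy jointly cross- and self-modal message passing: each round first
    exchanges messages across modalities, then within each modality."""
    for _ in range(rounds):
        video, query = video + attn(video, query), query + attn(query, video)  # cross
        video, query = video + attn(video, video), query + attn(query, query)  # self
    return video, query

v, q = torch.randn(20, 64), torch.randn(8, 64)
v2, q2 = csm_passing(v, q)
print(v2.shape, q2.shape)  # torch.Size([20, 64]) torch.Size([8, 64])
```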