RBA-GCN: Relational Bilevel Aggregation Graph Convolutional Network for
Emotion Recognition
- URL: http://arxiv.org/abs/2308.11029v2
- Date: Thu, 31 Aug 2023 04:36:30 GMT
- Title: RBA-GCN: Relational Bilevel Aggregation Graph Convolutional Network for
Emotion Recognition
- Authors: Lin Yuan, Guoheng Huang, Fenghuan Li, Xiaochen Yuan, Chi-Man Pun, Guo
Zhong
- Abstract summary: We present the relational bilevel aggregation graph convolutional network (RBA-GCN).
It consists of three modules: the graph generation module (GGM), the similarity-based cluster building module (SCBM), and the bilevel aggregation module (BiAM).
On both the IEMOCAP and MELD datasets, the weighted average F1 score of RBA-GCN shows a 2.17$\sim$5.21\% improvement over that of the most advanced method.
- Score: 38.87080348908327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion recognition in conversation (ERC) has received increasing attention
from researchers due to its wide range of applications. As conversation has a
natural graph structure, numerous approaches that model ERC with graph
convolutional networks (GCNs) have yielded significant results. However, the
aggregation approach of traditional GCNs suffers from the node information
redundancy problem, leading to a loss of node discriminant information.
Additionally, single-layer GCNs lack the capacity to capture long-range
contextual information from the graph. Furthermore, the majority of approaches
are based on the textual modality alone or simply stitch different modalities
together, resulting in a weak ability to capture interactions between
modalities. To address these problems, we present the relational bilevel
aggregation graph convolutional network (RBA-GCN), which consists of three
modules: the graph generation module (GGM), the similarity-based cluster
building module (SCBM) and the bilevel aggregation module (BiAM). First, GGM
constructs a novel graph to reduce the redundancy of target node information.
Then, SCBM calculates the similarity between the target node and the nodes in
its structural neighborhood, and noisy information with low similarity is
filtered out to preserve the discriminant information of the node. Meanwhile,
BiAM is a novel aggregation method that preserves node information during the
aggregation process; it constructs interactions between different modalities
and captures long-range contextual information based on similarity clusters.
On both the IEMOCAP and MELD datasets, the weighted average F1 score of
RBA-GCN shows a 2.17$\sim$5.21\% improvement over that of the most advanced
method. Our code is available at https://github.com/luftmenscher/RBA-GCN, and
an article with the same name has been published in IEEE/ACM Transactions on
Audio, Speech, and Language Processing, vol. 31, 2023.
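To make the pipeline described in the abstract concrete, the following is a minimal, illustrative sketch of the two ideas it emphasises: similarity-based cluster building (keep only structural neighbours that are sufficiently similar to the target node) and bilevel aggregation (summarise each modality's similarity cluster first, then fuse the modalities). Everything here is an assumption made for illustration: the cosine-similarity measure, the threshold, the mean pooling, and the names build_similarity_clusters and BilevelAggregation are ours, not the authors'; their actual implementation is in the linked repository.

```python
# Illustrative sketch only -- names, similarity measure, and pooling choices
# are assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_similarity_clusters(x, adj, threshold=0.5):
    """SCBM-style filtering (as we read it): keep only structural neighbours
    whose cosine similarity to the target node exceeds a threshold."""
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # (N, N)
    return ((sim > threshold) & (adj > 0)).float()                     # drop noisy neighbours


class BilevelAggregation(nn.Module):
    """BiAM-style two-stage aggregation (sketch): level 1 pools each node's
    similarity cluster per modality; level 2 fuses the per-modality summaries,
    which is where cross-modal interaction enters."""

    def __init__(self, dim, n_modalities=3):
        super().__init__()
        self.intra = nn.Linear(dim, dim)                 # level 1: within-cluster
        self.inter = nn.Linear(n_modalities * dim, dim)  # level 2: across modalities

    def forward(self, xs, adj, threshold=0.5):
        # xs: list of (N, dim) node-feature matrices, one per modality
        summaries = []
        for x in xs:
            clusters = build_similarity_clusters(x, adj, threshold)
            deg = clusters.sum(dim=1, keepdim=True).clamp(min=1.0)
            summaries.append(F.relu(self.intra(clusters @ x / deg)))   # mean-pool cluster
        return self.inter(torch.cat(summaries, dim=-1))                # (N, dim)


if __name__ == "__main__":
    n, dim = 6, 16
    adj = (torch.rand(n, n) > 0.5).float()               # toy conversation graph
    feats = [torch.randn(n, dim) for _ in range(3)]      # e.g. text / audio / visual
    print(BilevelAggregation(dim)(feats, adj).shape)     # torch.Size([6, 16])
```

The point this sketch tries to surface is the one the abstract makes: low-similarity neighbours are filtered out before any aggregation happens, so the target node's discriminant information is not averaged away, and the modalities only interact at the second aggregation level.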
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z) - Dual Contrastive Attributed Graph Clustering Network [6.796682703663566]
We propose a generic framework called the Dual Contrastive Attributed Graph Clustering Network (DCAGC).
In DCAGC, the Neighborhood Contrast Module maximizes the similarity of neighboring nodes and improves the quality of the node representations.
All the modules of DCAGC are trained and optimized in a unified framework, so the learned node representation contains clustering-oriented messages.
arXiv Detail & Related papers (2022-06-16T03:17:01Z) - RU-Net: Regularized Unrolling Network for Scene Graph Generation [92.95032610978511]
Scene graph generation (SGG) aims to detect objects and predict the relationships between each pair of objects.
Existing SGG methods usually suffer from several issues, including 1) ambiguous object representations, and 2) low diversity in relationship predictions.
We propose a regularized unrolling network (RU-Net) to address both problems.
arXiv Detail & Related papers (2022-05-03T04:21:15Z) - Graph Ordering Attention Networks [22.468776559433614]
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data.
We introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood.
The GOAT layer demonstrates improved performance in modeling graph metrics that capture complex information.
arXiv Detail & Related papers (2022-04-11T18:13:19Z) - Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose a novel unsupervised graph representation model that contrasts cluster assignments, called GRCCA.
It is motivated by making good use of local and global information by combining clustering algorithms and contrastive learning.
GRCCA is strongly competitive in most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z) - Node Embedding using Mutual Information and Self-Supervision based
Bi-level Aggregation [2.7088996845250897]
Graph Neural Networks learn low-dimensional representations of nodes by aggregating information from their neighborhood in graphs.
We exploit mutual information (MI) to define two types of neighborhood: 1) the Local Neighborhood, where nodes are densely connected within a community and each node shares higher MI with its neighbors, and 2) the Non-Local Neighborhood, where MI-based node clustering is introduced to assemble informative but graphically distant nodes in the same cluster.
We show that our model significantly outperforms state-of-the-art methods on a wide range of assortative and disassortative graphs.
arXiv Detail & Related papers (2021-04-27T07:32:57Z) - CaEGCN: Cross-Attention Fusion based Enhanced Graph Convolutional
Network for Clustering [51.62959830761789]
We propose a cross-attention based deep clustering framework, named the Cross-Attention Fusion based Enhanced Graph Convolutional Network (CaEGCN).
CaEGCN contains four main modules: cross-attention fusion, Content Auto-encoder, Graph Convolutional Auto-encoder and self-supervised model.
Experimental results on different types of datasets prove the superiority and robustness of the proposed CaEGCN.
arXiv Detail & Related papers (2021-01-18T05:21:59Z) - Jointly Cross- and Self-Modal Graph Attention Network for Query-Based
Moment Localization [77.21951145754065]
We propose a novel Cross- and Self-Modal Graph Attention Network (CSMGAN) that recasts this task as a process of iterative message passing over a joint graph.
Our CSMGAN is able to effectively capture high-order interactions between two modalities, thus enabling a further precise localization.
arXiv Detail & Related papers (2020-08-04T08:25:24Z) - Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models; a rough sketch of this unified GCN-LPA idea follows this list.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
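For the last related paper above (unifying GCN and label propagation), here is a similarly hedged, minimal sketch of the idea as we understand it: a single-layer GCN with learnable edge weights, regularized by propagating the training labels through the same learned weights. All names (EdgeWeightedGCN, lpa_loss), the 0.5 loss weighting, and the toy data are our own illustrative choices, not the authors' model or code.

```python
# Rough sketch, not the paper's implementation: learnable edge weights shared
# by a GCN layer and a label-propagation regularizer.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeWeightedGCN(nn.Module):
    def __init__(self, n_nodes, in_dim, n_classes):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.lin = nn.Linear(in_dim, n_classes)

    def norm_adj(self, adj):
        # Softmax over each node's existing neighbours -> learned edge weights.
        logits = self.edge_logits.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(logits, dim=1)

    def forward(self, x, adj):
        return self.norm_adj(adj) @ self.lin(x)          # one weighted GCN layer

    def lpa_loss(self, y_onehot, adj, train_mask):
        # Propagate the (masked) training labels with the same edge weights and
        # ask the propagated labels to agree with the true training labels.
        propagated = self.norm_adj(adj) @ (y_onehot * train_mask.unsqueeze(1).float())
        return F.cross_entropy(propagated[train_mask], y_onehot[train_mask].argmax(1))


if __name__ == "__main__":
    n, d, c = 8, 5, 3
    adj = torch.clamp((torch.rand(n, n) > 0.5).float() + torch.eye(n), max=1.0)  # self-loops
    x, y = torch.randn(n, d), torch.randint(0, c, (n,))
    train_mask = torch.zeros(n, dtype=torch.bool)
    train_mask[:4] = True

    model = EdgeWeightedGCN(n, d, c)
    loss = (F.cross_entropy(model(x, adj)[train_mask], y[train_mask])
            + 0.5 * model.lpa_loss(F.one_hot(y, c).float(), adj, train_mask))
    loss.backward()
```

The design choice mirrored here is the one the snippet above states: the edge weights that help label propagation reproduce known labels double as attention weights for the GCN, which is what makes the attention label-driven rather than purely feature-driven.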