Directed Acyclic Graph Network for Conversational Emotion Recognition
- URL: http://arxiv.org/abs/2105.12907v1
- Date: Thu, 27 May 2021 01:51:37 GMT
- Title: Directed Acyclic Graph Network for Conversational Emotion Recognition
- Authors: Weizhou Shen, Siyue Wu, Yunyi Yang and Xiaojun Quan
- Abstract summary: We propose a novel idea of encoding utterances with a directed acyclic graph (DAG) to better model the intrinsic structure within a conversation.
DAG-ERC provides a more intuitive way to model the information flow between long-distance conversation background and nearby context.
Experiments are conducted on four ERC benchmarks with state-of-the-art models employed as baselines for comparison.
- Score: 12.191046814462853
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The modeling of conversational context plays a vital role in emotion
recognition from conversation (ERC). In this paper, we put forward a novel idea
of encoding the utterances with a directed acyclic graph (DAG) to better model
the intrinsic structure within a conversation, and design a directed acyclic
neural network, namely DAG-ERC, to implement this idea. In an attempt to
combine the strengths of conventional graph-based neural models and
recurrence-based neural models, DAG-ERC provides a more intuitive way to model
the information flow between long-distance conversation background and nearby
context. Extensive experiments are conducted on four ERC benchmarks with
state-of-the-art models employed as baselines for comparison. The empirical
results demonstrate the superiority of this new model and confirm the
motivation of the directed acyclic graph architecture for ERC.
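To make the DAG idea concrete, the sketch below builds directed edges over a conversation so that each utterance receives information from its recent predecessors, looking back until a previous utterance by the same speaker is found. This is a minimal illustrative construction, not the paper's exact algorithm; the function name, the `omega` window parameter, and the edge policy are assumptions for illustration. Because every edge points forward in time, the resulting graph is acyclic by construction.

```python
def build_dag_edges(speakers, omega=1):
    """Build directed edges (j -> i) over a conversation.

    Each utterance i receives edges from the preceding utterances,
    scanning backwards until `omega` earlier utterances by the same
    speaker have been included. All edges point from an earlier
    utterance to a later one, so the graph is a DAG.
    """
    edges = []
    for i, spk in enumerate(speakers):
        seen_same = 0
        j = i - 1
        while j >= 0:
            edges.append((j, i))
            if speakers[j] == spk:
                seen_same += 1
                if seen_same >= omega:
                    break  # stop at the omega-th same-speaker utterance
            j -= 1
    return edges

# Two speakers alternating turns: A, B, A, B
print(build_dag_edges(["A", "B", "A", "B"]))
# → [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
```

A graph neural network layered on top of such edges can then aggregate each utterance's representation from its predecessors, combining long-distance background (via the same-speaker link) with the nearby context in between.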
Related papers
- Graph Collaborative Attention Network for Link Prediction in Knowledge Graphs [0.0]
We focus on KBGAT, a graph neural network model that leverages multi-head attention to jointly encode both entity and relation features within local neighborhood structures.
We introduce GCAT (Graph Collaborative Attention Network), a refined model that enhances context aggregation and interaction between heterogeneous nodes.
Our findings highlight the advantages of attention-based architectures in capturing complex relational patterns for knowledge graph completion tasks.
arXiv Detail & Related papers (2025-07-05T08:13:09Z) - Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA) which provides analytic conditional posteriors to improve the inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
arXiv Detail & Related papers (2024-10-13T02:22:14Z) - Mapping EEG Signals to Visual Stimuli: A Deep Learning Approach to Match
vs. Mismatch Classification [28.186129896907694]
We propose a "match-vs-mismatch" deep learning model to classify whether a video clip induces excitatory responses in recorded EEG signals.
We demonstrate that the proposed model is able to achieve the highest accuracy on unseen subjects.
These results have the potential to facilitate the development of neural recording-based video reconstruction.
arXiv Detail & Related papers (2023-09-08T06:37:25Z) - Deep Emotion Recognition in Textual Conversations: A Survey [0.8602553195689513]
New applications and implementation scenarios present novel challenges and opportunities.
These range from leveraging the conversational context, speaker, and emotion dynamics modelling, to interpreting common sense expressions.
This survey emphasizes the advantage of leveraging techniques to address unbalanced data.
arXiv Detail & Related papers (2022-11-16T19:42:31Z) - Entity-Conditioned Question Generation for Robust Attention Distribution
in Neural Information Retrieval [51.53892300802014]
We show that supervised neural information retrieval models are prone to learning sparse attention patterns over passage tokens.
Using a novel targeted synthetic data generation method, we teach neural IR to attend more uniformly and robustly to all entities in a given passage.
arXiv Detail & Related papers (2022-04-24T22:36:48Z) - Reinforcement Learning based Path Exploration for Sequential Explainable
Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL)
TMER-RL utilizes reinforced item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolutions on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z) - Correlation based Multi-phasal models for improved imagined speech EEG
recognition [22.196642357767338]
This work aims to profit from the parallel information contained in multi-phasal EEG data recorded while speaking, imagining and performing articulatory movements corresponding to specific speech units.
A bi-phase common representation learning module using neural networks is designed to model the correlation between an analysis phase and a support phase.
The proposed approach further handles the non-availability of multi-phasal data during decoding.
arXiv Detail & Related papers (2020-11-04T09:39:53Z) - Keyphrase Extraction with Dynamic Graph Convolutional Networks and
Diversified Inference [50.768682650658384]
Keyphrase extraction (KE) aims to summarize a set of phrases that accurately express a concept or a topic covered in a given document.
The recent Sequence-to-Sequence (Seq2Seq) based generative framework is widely used in the KE task, and it has obtained competitive performance on various benchmarks.
In this paper, we propose to adopt the Dynamic Graph Convolutional Networks (DGCN) to solve the above two problems simultaneously.
arXiv Detail & Related papers (2020-10-24T08:11:23Z) - Compact Graph Architecture for Speech Emotion Recognition [0.0]
A compact, efficient and scalable way to represent data is in the form of graphs.
We construct a Graph Convolution Network (GCN)-based architecture that can perform an accurate graph convolution.
Our model achieves comparable performance to the state-of-the-art with significantly fewer learnable parameters.
arXiv Detail & Related papers (2020-08-05T12:09:09Z) - Energy-based View of Retrosynthesis [70.66156081030766]
We propose a framework that unifies sequence- and graph-based methods as energy-based models.
We present a novel dual variant within the framework that performs consistent training over Bayesian forward- and backward-prediction.
This model improves state-of-the-art performance by 9.6% for template-free approaches where the reaction type is unknown.
arXiv Detail & Related papers (2020-07-14T18:51:06Z) - Utterance-level Sequential Modeling For Deep Gaussian Process Based
Speech Synthesis Using Simple Recurrent Unit [41.85906379846473]
We show that DGP can be applied to utterance-level modeling using recurrent architecture models.
We adopt a simple recurrent unit (SRU) for the proposed model to achieve a recurrent architecture.
The proposed SRU-DGP-based speech synthesis outperforms not only feed-forward DGP but also automatically tuned SRU- and long short-term memory (LSTM)-based neural networks.
arXiv Detail & Related papers (2020-04-22T19:51:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.