A Hierarchical Reasoning Graph Neural Network for The Automatic Scoring
of Answer Transcriptions in Video Job Interviews
- URL: http://arxiv.org/abs/2012.11960v1
- Date: Tue, 22 Dec 2020 12:27:45 GMT
- Title: A Hierarchical Reasoning Graph Neural Network for The Automatic Scoring
of Answer Transcriptions in Video Job Interviews
- Authors: Kai Chen, Meng Niu, Qingcai Chen
- Abstract summary: We propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs.
We employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session.
Finally, we propose a gated recurrent unit encoder to represent the temporal sequence of question-answer pairs for the final prediction.
- Score: 14.091472037847499
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the task of automatically scoring candidate competency from
textual features, namely the automatic speech recognition (ASR) transcriptions
produced in asynchronous video job interviews (AVIs). The key challenge is to
construct the dependency relations between questions and answers and to model
the semantic-level interaction within each question-answer (QA) pair. Most
recent AVI studies, however, focus on representing questions and answers better
while ignoring the dependency information and interaction between them, which
is critical for QA evaluation. In this work, we propose a Hierarchical
Reasoning Graph Neural Network (HRGNN) for the automatic assessment of
question-answer pairs. Specifically, we construct a sentence-level relational
graph neural network to capture the dependency information of sentences within
and between the question and the answer. On top of these graphs, we employ a
semantic-level reasoning graph attention network to model the interaction
states of the current QA session. Finally, a gated recurrent unit (GRU) encoder
represents the temporal sequence of question-answer pairs for the final
prediction. Empirical results on CHNAT, a real-world dataset, show that our
proposed model significantly outperforms text-matching-based benchmark models.
Ablation studies and experiments with 10 random seeds further demonstrate the
effectiveness and stability of our model.
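To make the three-stage hierarchy concrete, below is a minimal PyTorch-style sketch of the pipeline described in the abstract: a sentence-level relational GNN over a QA pair, a semantic-level graph attention step, and a GRU over the sequence of QA sessions. All module names, dimensions, the simplified mean-aggregation message passing, and the standard multi-head self-attention used as a stand-in for the reasoning graph attention network are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the HRGNN pipeline, assuming PyTorch.
# Shapes, module names, and aggregation rules are assumptions for illustration.
import torch
import torch.nn as nn

class SentenceRelationalGNN(nn.Module):
    """Propagates information over sentence nodes of one QA pair; the adjacency
    is assumed to encode in-question, in-answer, and cross dependencies."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, sent_feats, adj):
        # sent_feats: (num_sentences, dim); adj: (num_sentences, num_sentences)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        msg = adj @ self.linear(sent_feats) / deg   # mean aggregation over neighbors
        return torch.relu(sent_feats + msg)

class SemanticGraphAttention(nn.Module):
    """Self-attention over sentence states, standing in for the
    semantic-level reasoning graph attention network."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, states):
        out, _ = self.attn(states, states, states)  # (1, num_sentences, dim)
        return out

class HRGNNSketch(nn.Module):
    def __init__(self, dim=128, num_classes=2):
        super().__init__()
        self.sent_gnn = SentenceRelationalGNN(dim)
        self.semantic_gat = SemanticGraphAttention(dim)
        self.session_gru = nn.GRU(dim, dim, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, qa_sessions):
        # qa_sessions: list of (sent_feats, adj) tuples, one per QA pair
        session_states = []
        for sent_feats, adj in qa_sessions:
            h = self.sent_gnn(sent_feats, adj)        # sentence-level dependency reasoning
            h = self.semantic_gat(h.unsqueeze(0))     # semantic-level interaction states
            session_states.append(h.mean(dim=1))      # pool the QA session to one vector
        seq = torch.stack(session_states, dim=1)      # (1, num_qa_pairs, dim)
        _, last = self.session_gru(seq)               # temporal encoder over QA pairs
        return self.classifier(last[-1])              # competency prediction

# Usage with random features for 3 QA pairs of 5 sentences each (shape check only):
model = HRGNNSketch()
sessions = [(torch.randn(5, 128), torch.ones(5, 5)) for _ in range(3)]
print(model(sessions).shape)  # torch.Size([1, 2])
```

In the actual model, the adjacency would presumably encode typed dependency relations rather than a dense graph, and the pooled session vectors would be fed through the GRU across all questions of an interview; the random tensors above only verify tensor shapes.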
Related papers
- QAGCF: Graph Collaborative Filtering for Q&A Recommendation [58.21387109664593]
Question and answer (Q&A) platforms usually recommend question-answer pairs to meet users' knowledge acquisition needs.
This makes user behaviors more complex, and presents two challenges for Q&A recommendation.
We introduce Question & Answer Graph Collaborative Filtering (QAGCF), a graph neural network model that creates separate graphs for collaborative and semantic views.
arXiv Detail & Related papers (2024-06-07T10:52:37Z)
- Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering [27.193336817953142]
We introduce an interpretable approach for graph-based Visual Question Answering (VQA).
Our model is designed to intrinsically produce a subgraph during the question-answering process as its explanation.
We compare these generated subgraphs against established post-hoc explainability methods for graph neural networks, and perform a human evaluation.
arXiv Detail & Related papers (2024-03-26T12:29:18Z)
- Learning Situation Hyper-Graphs for Video Question Answering [95.18071873415556]
We propose an architecture for Video Question Answering (VQA) that enables answering questions related to video content by predicting situation hyper-graphs.
We train a situation hyper-graph decoder to implicitly identify graph representations with actions and object/human-object relationships from the input video clip.
Our results show that learning the underlying situation hyper-graphs helps the system to significantly improve its performance for novel challenges of video question-answering tasks.
arXiv Detail & Related papers (2023-04-18T01:23:11Z)
- Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the AS2 task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z)
- Bilateral Cross-Modality Graph Matching Attention for Feature Fusion in Visual Question Answering [71.6781118080461]
We propose a Graph Matching Attention (GMA) network for Visual Question Answering (VQA) task.
First, it builds a graph not only for the image but also for the question, using both syntactic and embedding information.
Next, we explore the intra-modality relationships by a dual-stage graph encoder and then present a bilateral cross-modality graph matching attention to infer the relationships between the image and the question.
Experiments demonstrate that our network achieves state-of-the-art performance on the GQA dataset and the VQA 2.0 dataset.
arXiv Detail & Related papers (2021-12-14T10:01:26Z)
- Classification-Regression for Chart Comprehension [16.311371103939205]
Chart question answering (CQA) is a task used for assessing chart comprehension.
We propose a new model that jointly learns classification and regression.
Our model's advantage is particularly pronounced on questions with out-of-vocabulary answers.
arXiv Detail & Related papers (2021-11-29T18:46:06Z)
- Question Answering Infused Pre-training of General-Purpose Contextualized Representations [70.62967781515127]
We propose a pre-training objective based on question answering (QA) for learning general-purpose contextual representations.
We accomplish this goal by training a bi-encoder QA model, which independently encodes passages and questions, to match the predictions of a more accurate cross-encoder model.
We show large improvements over both RoBERTa-large and previous state-of-the-art results on zero-shot and few-shot paraphrase detection.
arXiv Detail & Related papers (2021-06-15T14:45:15Z)
- DAGN: Discourse-Aware Graph Network for Logical Reasoning [83.8041050565304]
We propose a discourse-aware graph network (DAGN) that reasons relying on the discourse structure of the texts.
The model encodes discourse information as a graph with elementary discourse units (EDUs) and discourse relations, and learns the discourse-aware features via a graph network for downstream QA tasks.
arXiv Detail & Related papers (2021-03-26T09:41:56Z)
- A Graph Reasoning Network for Multi-turn Response Selection via Customized Pre-training [11.532734330690584]
We propose a graph-reasoning network (GRN) to address the problem.
GRN first conducts pre-training based on ALBERT.
We then fine-tune the model on an integrated network with sequence reasoning and graph reasoning structures.
arXiv Detail & Related papers (2020-12-21T03:38:29Z)
- Self-supervised pre-training and contrastive representation learning for multiple-choice video QA [39.78914328623504]
Video Question Answering (Video QA) requires fine-grained understanding of both video and language modalities to answer the given questions.
We propose novel training schemes for multiple-choice video question answering, combining a self-supervised pre-training stage with supervised contrastive learning as an auxiliary task in the main training stage.
We evaluate our proposed model on highly competitive benchmark datasets related to multiple-choice video QA: TVQA, TVQA+, and DramaQA.
arXiv Detail & Related papers (2020-09-17T03:37:37Z)