Exploring Emerging Technologies for Requirements Elicitation Interview
Training: Empirical Assessment of Robotic and Virtual Tutors
- URL: http://arxiv.org/abs/2305.00077v3
- Date: Wed, 30 Aug 2023 14:39:22 GMT
- Title: Exploring Emerging Technologies for Requirements Elicitation Interview
Training: Empirical Assessment of Robotic and Virtual Tutors
- Authors: Binnur Görer and Fatma Başak Aydemir
- Abstract summary: We propose an architecture for requirements elicitation interview training systems based on emerging educational technologies.
We demonstrate the applicability of REIT through two implementations: RoREIT with a physical robotic agent and VoREIT with a virtual voice-only agent.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Requirements elicitation interviews are a widely adopted technique, where the
interview success heavily depends on the interviewer's preparedness and
communication skills. Students can enhance these skills through practice
interviews. However, organizing practice interviews for many students presents
scalability challenges, given the time and effort required to involve
stakeholders in each session. To address this, we propose REIT, an extensible
architecture for requirements elicitation interview training systems based on
emerging educational technologies. REIT has components to support both the
interview phase, wherein students act as interviewers while the system assumes
the role of an interviewee, and the feedback phase, during which the system
assesses students' performance and offers contextual and behavioral feedback to
enhance their interviewing skills. We demonstrate the applicability of REIT
through two implementations: RoREIT with a physical robotic agent and VoREIT
with a virtual voice-only agent. We empirically evaluated both instances with a
group of graduate students. The participants appreciated both systems. They
demonstrated higher learning gain when trained with RoREIT, but they found
VoREIT more engaging and easier to use. These findings indicate that each
system has distinct benefits and drawbacks, suggesting that REIT can be
realized for various educational settings based on preferences and available
resources.
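The two-phase structure described in the abstract, an interview phase in which the system plays the interviewee, followed by a feedback phase in which it assesses the student, can be sketched as follows. All class names, methods, and the scoring heuristic are illustrative assumptions, not the authors' actual RoREIT/VoREIT implementation.

```python
# Minimal sketch of a two-phase interview-training loop in the spirit of
# REIT. All names and behaviors here are hypothetical assumptions; the
# real system uses a robotic (RoREIT) or voice-only (VoREIT) agent.

class IntervieweeAgent:
    """Plays the stakeholder; returns canned answers keyed by topic."""
    def __init__(self, knowledge):
        self.knowledge = knowledge  # mapping: topic keyword -> answer

    def respond(self, question):
        for topic, answer in self.knowledge.items():
            if topic in question.lower():
                return answer
        return "I'm not sure; could you rephrase that?"


class FeedbackModule:
    """Scores a transcript and emits simple contextual feedback."""
    def assess(self, transcript):
        asked = len(transcript)
        # Toy behavioral metric: fraction of open-ended questions.
        open_questions = sum(
            1 for q, _ in transcript
            if q.lower().startswith(("how", "why", "what"))
        )
        score = open_questions / asked if asked else 0.0
        feedback = ("Good use of open-ended questions."
                    if score >= 0.5 else
                    "Try asking more open-ended questions.")
        return {"score": score, "feedback": feedback}


def run_session(questions, agent, feedback_module):
    # Interview phase: the student asks, the system answers.
    transcript = [(q, agent.respond(q)) for q in questions]
    # Feedback phase: the system assesses the student's performance.
    return transcript, feedback_module.assess(transcript)
```

A session would then pair a scripted stakeholder persona with a feedback policy, which is what makes the architecture extensible: either component can be swapped (e.g. for a robotic embodiment) without changing the loop.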
Related papers
- InsightPulse: An IoT-based System for User Experience Interview Analysis [1.7533975800877244]
This paper introduces InsightPulse, an Internet of Things (IoT)-based hardware and software system designed to streamline and enhance the UX interview process through speech analysis and Artificial Intelligence.
InsightPulse provides real-time support during user interviews by automatically identifying and highlighting key discussion points, proactively suggesting follow-up questions, and generating thematic summaries.
The system features a robust backend analytics dashboard that simplifies the post-interview review process, thus facilitating the quick extraction of actionable insights and enhancing overall UX research efficiency.
arXiv Detail & Related papers (2024-09-23T21:39:34Z)
- AI Conversational Interviewing: Transforming Surveys with LLMs as Adaptive Interviewers [40.80290002598963]
This study explores the potential of replacing human interviewers with large language models (LLMs) to conduct scalable conversational interviews.
We conducted a small-scale, in-depth study with university students who were randomly assigned to be interviewed by either AI or human interviewers.
Various quantitative and qualitative measures assessed interviewer adherence to guidelines, response quality, participant engagement, and overall interview efficacy.
arXiv Detail & Related papers (2024-09-16T16:03:08Z)
- Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting [51.54907796704785]
Existing methods rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them.
Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates.
We propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol.
arXiv Detail & Related papers (2024-05-28T12:23:16Z)
- K-ESConv: Knowledge Injection for Emotional Support Dialogue Systems via Prompt Learning [83.19215082550163]
We propose K-ESConv, a novel prompt learning based knowledge injection method for emotional support dialogue system.
We evaluate our model on an emotional support dataset ESConv, where the model retrieves and incorporates knowledge from external professional emotional Q&A forum.
arXiv Detail & Related papers (2023-12-16T08:10:10Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- FCC: Fusing Conversation History and Candidate Provenance for Contextual Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z)
- "GAN I hire you?" -- A System for Personalized Virtual Job Interview Training [49.201250723083]
This study develops an interactive job interview training system with a Generative Adversarial Network (GAN)-based approach.
The overall study results indicate that the GAN-based generated behavioral feedback is helpful.
arXiv Detail & Related papers (2022-06-08T13:03:39Z)
- Achieving Human Parity on Visual Question Answering [67.22500027651509]
The Visual Question Answering (VQA) task utilizes both visual image and language analysis to answer a textual question with respect to an image.
This paper describes our recent research on AliceMind-MMU, which obtains similar or even slightly better results than human beings do on VQA.
This is achieved by systematically improving the VQA pipeline including: (1) pre-training with comprehensive visual and textual feature representation; (2) effective cross-modal interaction with learning to attend; and (3) A novel knowledge mining framework with specialized expert modules for the complex VQA task.
arXiv Detail & Related papers (2021-11-17T04:25:11Z)
- Leveraging Multimodal Behavioral Analytics for Automated Job Interview Performance Assessment and Feedback [0.5872014229110213]
Behavioral cues play a significant part in human communication and cognitive perception.
We propose a multimodal analytical framework that analyzes the candidate in an interview scenario.
We use these multimodal data sources to construct a composite representation, which is used for training machine learning classifiers to predict the class labels.
arXiv Detail & Related papers (2020-06-14T14:20:42Z)
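The composite-representation pipeline in the last entry above (per-modality features combined into one vector, then fed to a classifier) can be sketched roughly as follows. The modality names, feature values, and the nearest-centroid classifier are illustrative assumptions standing in for whatever features and classifiers that paper actually uses.

```python
# Hypothetical sketch of a multimodal composite-representation pipeline:
# per-modality feature vectors are concatenated in a fixed key order and
# classified with a simple nearest-centroid rule. All specifics here are
# assumptions for illustration, not the cited paper's method.

def composite(features_by_modality):
    """Concatenate per-modality feature vectors in sorted-key order."""
    return [v for m in sorted(features_by_modality)
            for v in features_by_modality[m]]


class NearestCentroid:
    """Predicts the label whose class centroid is closest (squared L2)."""
    def fit(self, vectors, labels):
        sums, counts = {}, {}
        for vec, lab in zip(vectors, labels):
            acc = sums.setdefault(lab, [0.0] * len(vec))
            for i, v in enumerate(vec):
                acc[i] += v
            counts[lab] = counts.get(lab, 0) + 1
        self.centroids = {lab: [v / counts[lab] for v in acc]
                          for lab, acc in sums.items()}
        return self

    def predict(self, vec):
        def sq_dist(c):
            return sum((a - b) ** 2 for a, b in zip(vec, c))
        return min(self.centroids, key=lambda lab: sq_dist(self.centroids[lab]))
```

In practice each modality (audio prosody, facial expression, lexical features) would contribute a learned embedding rather than raw scalars, but the concatenate-then-classify shape is the same.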
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.