Modular AI-Powered Interviewer with Dynamic Question Generation and Expertise Profiling
- URL: http://arxiv.org/abs/2601.11534v1
- Date: Fri, 21 Nov 2025 18:25:26 GMT
- Title: Modular AI-Powered Interviewer with Dynamic Question Generation and Expertise Profiling
- Authors: Aisvarya Adeseye, Jouni Isoaho, Seppo Virtanen, Mohammad Tahir
- Abstract summary: This study presents an AI-powered interviewer that dynamically generates questions that are contextually appropriate and expertise-aligned. The interviewer is built on a locally hosted large language model (LLM) that generates coherent dialogue while preserving data privacy. The proposed interviewer is a scalable, privacy-conscious solution that advances AI-assisted qualitative data collection.
- Score: 0.7349727826230863
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated interviewers and chatbots are common in research, recruitment, customer service, and education. Many existing systems rely on fixed question lists, rigid rules, and limited personalization, producing repetitive conversations and low engagement. These tools are therefore ill-suited to complex qualitative research, which requires flexibility, context awareness, and ethical sensitivity, so a more adaptive, context-aware interviewing system is needed. To address this, this study presents an AI-powered interviewer that dynamically generates questions that are contextually appropriate and expertise-aligned. The interviewer is built on a locally hosted large language model (LLM) that generates coherent dialogue while preserving data privacy. It profiles each participant's expertise in real time to produce knowledge-appropriate questions, well-articulated responses, and smooth transition messages, yielding conversations that resemble human-led interviews. To implement these functionalities, a modular prompt engineering pipeline was designed to keep the interview conversation scalable, adaptive, and semantically rich. In an evaluation with multiple participants, the AI-powered interviewer achieved high satisfaction (mean 4.45) and engagement (mean 4.33). The proposed interviewer is a scalable, privacy-conscious solution that advances AI-assisted qualitative data collection.
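The abstract describes two coupled components: real-time expertise profiling and a modular prompt pipeline that conditions question generation on the inferred level. The paper does not publish its implementation, so the sketch below is purely illustrative: the class names, the jargon-density heuristic, and the prompt sections are all assumptions, not the authors' method.

```python
# Hypothetical sketch of an expertise-profiling + modular prompt pipeline
# of the kind the abstract describes. The term list, smoothing weights,
# and thresholds are invented for illustration only.
from dataclasses import dataclass

# Assumed: a small lexicon used as a crude proxy for domain expertise.
TECHNICAL_TERMS = {"latency", "throughput", "gradient", "tokenizer", "api"}


@dataclass
class ExpertiseProfile:
    score: float = 0.5  # 0.0 = novice, 1.0 = expert; start neutral

    def update(self, answer: str) -> None:
        """Update the running estimate from one participant answer."""
        words = answer.lower().split()
        if not words:
            return
        hits = sum(w.strip(".,") in TECHNICAL_TERMS for w in words)
        signal = min(hits / 5.0, 1.0)          # jargon-density heuristic
        self.score = 0.7 * self.score + 0.3 * signal  # exponential smoothing

    @property
    def level(self) -> str:
        if self.score > 0.6:
            return "expert"
        return "intermediate" if self.score > 0.3 else "novice"


def build_question_prompt(topic: str, profile: ExpertiseProfile,
                          history: list[str]) -> str:
    """Compose a prompt from modular sections: role, expertise, context, task."""
    context = ("Previous answers: " + " | ".join(history[-3:])
               if history else "No prior answers.")
    return "\n".join([
        "You are a qualitative research interviewer.",
        f"Participant expertise level: {profile.level}.",
        f"Topic: {topic}.",
        context,
        "Ask one open-ended follow-up question matched to the expertise level.",
    ])
```

In a full system, the returned prompt string would be sent to the locally hosted LLM (e.g., via an on-premise inference server) and the participant's reply fed back into `ExpertiseProfile.update`, closing the adaptation loop.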
Related papers
- SparkMe: Adaptive Semi-Structured Interviewing for Qualitative Insight Discovery [55.50580661343875]
We introduce SparkMe, a multi-agent interviewer that performs deliberative planning via simulated conversation rollouts to select questions with high expected utility. We evaluate SparkMe through controlled experiments with LLM-based interviewees, showing that it achieves higher interview utility. We further validate SparkMe in a user study with 70 participants across 7 professions on the impact of AI on their professions.
arXiv Detail & Related papers (2026-02-24T17:33:02Z)
- Mic Drop or Data Flop? Evaluating the Fitness for Purpose of AI Voice Interviewers for Data Collection within Quantitative & Qualitative Research Contexts [7.938565669618949]
Transformer-based Large Language Models (LLMs) have paved the way for "AI interviewers" that can administer voice-based surveys with respondents in real time. We evaluate the capabilities of AI interviewers as well as current Interactive Voice Response (IVR) systems across two dimensions.
arXiv Detail & Related papers (2025-09-01T22:44:57Z)
- Requirements Elicitation Follow-Up Question Generation [0.5120567378386615]
Large language models (LLMs) have exhibited state-of-the-art performance in multiple natural language processing tasks. This study investigates the application of GPT-4o to generate follow-up interview questions during requirements elicitation.
arXiv Detail & Related papers (2025-07-03T17:59:04Z)
- AI-Assisted Conversational Interviewing: Effects on Data Quality and User Experience [0.0]
This study introduces a framework for AI-assisted conversational interviewing. We conducted a web survey experiment in which 1,800 participants were randomly assigned to AI 'chatbots'. Our findings reveal the feasibility of using AI methods such as chatbots to enhance open-ended data collection in web surveys.
arXiv Detail & Related papers (2025-04-09T13:58:07Z)
- Using Large Language Models to Develop Requirements Elicitation Skills [1.1473376666000734]
We propose conditioning a large language model to play the role of the client during a chat-based interview. We find that both approaches provide sufficient information for participants to construct technically sound solutions.
arXiv Detail & Related papers (2025-03-10T19:27:38Z)
- AI Conversational Interviewing: Transforming Surveys with LLMs as Adaptive Interviewers [40.80290002598963]
This study explores the potential of replacing human interviewers with large language models (LLMs) to conduct scalable conversational interviews. We conducted a small-scale, in-depth study with university students who were randomly assigned to a conversational interview by either AI or human interviewers. Various quantitative and qualitative measures assessed interviewer adherence to guidelines, response quality, participant engagement, and overall interview efficacy.
arXiv Detail & Related papers (2024-09-16T16:03:08Z)
- InterviewBot: Real-Time End-to-End Dialogue System to Interview Students for College Admission [18.630848902825406]
InterviewBot integrates conversation history and customized topics into a coherent embedding space.
7,361 audio recordings of human-to-human interviews are automatically transcribed, where 440 are manually corrected for finetuning and evaluation.
InterviewBot is tested both statistically by comparing its responses to the interview data and dynamically by inviting professional interviewers and various students to interact with it in real-time.
arXiv Detail & Related papers (2023-03-27T09:46:56Z)
- EZInterviewer: To Improve Job Interview Performance with Mock Interview Generator [60.2099886983184]
EZInterviewer aims to learn from the online interview data and provides mock interview services to the job seekers.
To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs.
arXiv Detail & Related papers (2023-01-03T07:00:30Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We constructed a new human-annotated dataset of human-written follow-up questions with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- End-to-end Spoken Conversational Question Answering: Task, Dataset and Model [92.18621726802726]
In spoken question answering, the systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling the systems to model complex dialogue flows.
Our main objective is to build a system that handles conversational questions based on audio recordings and to explore the feasibility of providing additional cues from different modalities during information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z)
- Towards Data Distillation for End-to-end Spoken Conversational Question Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering task (SCQA).
SCQA aims at enabling QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
arXiv Detail & Related papers (2020-10-18T05:53:39Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.