AI Conversational Interviewing: Transforming Surveys with LLMs as Adaptive Interviewers
- URL: http://arxiv.org/abs/2410.01824v1
- Date: Mon, 16 Sep 2024 16:03:08 GMT
- Title: AI Conversational Interviewing: Transforming Surveys with LLMs as Adaptive Interviewers
- Authors: Alexander Wuttke, Matthias Aßenmacher, Christopher Klamm, Max M. Lang, Quirin Würschinger, Frauke Kreuter
- Abstract summary: This study explores the potential of replacing human interviewers with large language models (LLMs) to conduct scalable conversational interviews.
We conducted a small-scale, in-depth study with university students who were randomly assigned to be interviewed by either AI or human interviewers.
Various quantitative and qualitative measures assessed interviewer adherence to guidelines, response quality, participant engagement, and overall interview efficacy.
- Score: 40.80290002598963
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Traditional methods for eliciting people's opinions face a trade-off between depth and scale: structured surveys enable large-scale data collection but limit respondents' ability to express unanticipated thoughts in their own words, while conversational interviews provide deeper insights but are resource-intensive. This study explores the potential of replacing human interviewers with large language models (LLMs) to conduct scalable conversational interviews. Our goal is to assess the performance of AI Conversational Interviewing and to identify opportunities for improvement in a controlled environment. We conducted a small-scale, in-depth study with university students who were randomly assigned to be interviewed by either AI or human interviewers, both employing identical questionnaires on political topics. Various quantitative and qualitative measures assessed interviewer adherence to guidelines, response quality, participant engagement, and overall interview efficacy. The findings indicate the viability of AI Conversational Interviewing in producing quality data comparable to traditional methods, with the added benefit of scalability. Based on our experiences, we present specific recommendations for effective implementation.
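As a concrete illustration, a minimal sketch of the interviewer loop such a setup could use is shown below, assuming the OpenAI Python client; the guideline text, questionnaire items, model choice, and single follow-up probe per question are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of an LLM interviewer loop (assumed setup, not the
# authors' implementation): scripted questions plus one adaptive probe each.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUIDELINES = (
    "You are a survey interviewer. Ask the given questions one at a time, "
    "probe unclear answers with short, neutral follow-ups, and never "
    "suggest opinions to the respondent."
)
QUESTIONS = [  # illustrative questionnaire items
    "How interested are you in politics, and why?",
    "Which political issue matters most to you right now?",
]

def run_interview() -> list[dict]:
    messages = [{"role": "system", "content": GUIDELINES}]
    transcript = []
    for question in QUESTIONS:
        messages.append({"role": "assistant", "content": question})
        answer = input(f"Interviewer: {question}\nYou: ")
        messages.append({"role": "user", "content": answer})
        # Let the model formulate one adaptive probe based on the answer.
        probe = client.chat.completions.create(
            model="gpt-4o", messages=messages
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": probe})
        elaboration = input(f"Interviewer: {probe}\nYou: ")
        messages.append({"role": "user", "content": elaboration})
        transcript.append({"question": question,
                           "answers": [answer, elaboration]})
    return transcript
```

Keeping the full message history lets the model tailor each probe to earlier answers, which is what distinguishes this from a fixed-wording structured survey.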
Related papers
- "I Never Said That": A dataset, taxonomy and baselines on response clarity classification [4.16330182801919]
We introduce a novel taxonomy that frames the task of detecting and classifying response clarity.
Our proposed two-level taxonomy addresses the clarity of a response in terms of the information provided for a given question.
We combine ChatGPT and human annotators to collect, validate and annotate discrete QA pairs from political interviews.
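A hedged sketch of how an LLM could pre-label response clarity before human validation follows, assuming the OpenAI Python client; the three labels are simplified stand-ins for the paper's two-level taxonomy.

```python
# Sketch of LLM-assisted clarity annotation for interview QA pairs.
# The label set is a simplified placeholder, not the paper's taxonomy.
from openai import OpenAI

client = OpenAI()
LABELS = ["clear_reply", "partial_reply", "clear_non_reply"]  # assumed labels

def classify_clarity(question: str, answer: str) -> str:
    prompt = (
        f"Question: {question}\nAnswer: {answer}\n"
        "Classify how clearly the answer addresses the question. "
        f"Reply with exactly one label from: {', '.join(LABELS)}."
    )
    raw = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content.strip()
    # Anything off-taxonomy is routed to the human annotators.
    return raw if raw in LABELS else "needs_human_review"
```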
arXiv Detail & Related papers (2024-09-20T20:15:06Z)
- Automated Speaking Assessment of Conversation Tests with Novel Graph-based Modeling on Spoken Response Coherence [11.217656140423207]
Automated speaking assessment in conversation tests (ASAC) aims to evaluate the overall speaking proficiency of a second-language (L2) speaker in a setting where an interlocutor interacts with one or more candidates.
We propose a hierarchical graph model that aptly incorporates both broad inter-response interactions and nuanced semantic information.
Extensive experimental results on the NICT-JLE benchmark dataset suggest that our proposed modeling approach can yield considerable improvements in prediction accuracy.
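A toy sketch of the underlying idea of graph-based coherence modeling follows: treat responses as nodes and let similarity-weighted message passing produce context-aware features. This illustrates only the general mechanism; the paper's hierarchical model is considerably richer.

```python
# Toy illustration of graph-based coherence features: responses are nodes,
# similarity-weighted message passing yields context-aware node features.
import numpy as np

def message_pass(embeddings: np.ndarray) -> np.ndarray:
    """One round of similarity-weighted neighbor aggregation."""
    sims = embeddings @ embeddings.T               # inter-response affinity
    np.fill_diagonal(sims, -np.inf)                # exclude self-loops
    weights = np.exp(sims)
    weights /= weights.sum(axis=1, keepdims=True)  # row-softmax edge weights
    return weights @ embeddings                    # aggregated features

rng = np.random.default_rng(0)
responses = rng.normal(size=(5, 16))  # 5 spoken responses, 16-dim embeddings
features = message_pass(responses)    # a real model feeds these to a scorer
```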
arXiv Detail & Related papers (2024-09-11T07:24:07Z)
- Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training [33.57497419019826]
Action-Based Contrastive Self-Training (ACT) allows for sample-efficient dialogue policy learning in multi-turn conversation.
ACT demonstrates substantial conversation modeling improvements over standard approaches to supervised fine-tuning and DPO.
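A minimal sketch of how such a contrastive action pair could be assembled for a preference-based trainer is shown below; the example dialogue and field names follow common DPO conventions and are illustrative, not ACT's exact recipe.

```python
# Sketch of building a contrastive action pair: for an ambiguous request,
# the clarifying question is "chosen" and the premature guess "rejected".
def make_preference_pair(context: str, clarify: str, guess: str) -> dict:
    return {
        "prompt": context,
        "chosen": clarify,   # action that resolves the ambiguity
        "rejected": guess,   # direct answer despite the ambiguity
    }

pairs = [
    make_preference_pair(
        context="User: Book me the usual meeting room.",
        clarify="Which room do you mean, and for what time?",
        guess="Done, I booked Room A at 9am.",
    )
]
# Such pairs can be fed to an off-the-shelf preference trainer
# (e.g. trl's DPOTrainer) for dialogue policy learning.
```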
arXiv Detail & Related papers (2024-05-31T22:44:48Z)
- Facilitating Multi-Role and Multi-Behavior Collaboration of Large Language Models for Online Job Seeking and Recruiting [51.54907796704785]
Existing methods rely on modeling the latent semantics of resumes and job descriptions and learning a matching function between them.
Inspired by the powerful role-playing capabilities of Large Language Models (LLMs), we propose to introduce a mock interview process between LLM-played interviewers and candidates.
We propose MockLLM, a novel applicable framework that divides the person-job matching process into two modules: mock interview generation and two-sided evaluation in handshake protocol.
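A minimal sketch of the two-agent mock-interview loop follows, assuming the OpenAI Python client; the prompts, model choice, and turn count are assumptions rather than MockLLM's actual design.

```python
# Sketch of a two-agent mock interview: one LLM plays the interviewer,
# another the candidate, alternating turns over a shared transcript.
from openai import OpenAI

client = OpenAI()

def play(role_prompt: str, messages: list[dict]) -> str:
    msgs = [{"role": "system", "content": role_prompt}] + messages
    return client.chat.completions.create(
        model="gpt-4o-mini", messages=msgs
    ).choices[0].message.content

def render(history: list[tuple], me: str) -> list[dict]:
    # Each agent sees its own turns as "assistant" and the other's as "user".
    return [{"role": "assistant" if who == me else "user", "content": text}
            for who, text in history]

def mock_interview(job_desc: str, resume: str, turns: int = 3) -> list[tuple]:
    history: list[tuple] = []
    for _ in range(turns):
        q = play(f"You are an interviewer hiring for: {job_desc}",
                 render(history, "interviewer"))
        history.append(("interviewer", q))
        a = play(f"You are a candidate with this resume: {resume}",
                 render(history, "candidate"))
        history.append(("candidate", a))
    return history  # transcript for the two-sided evaluation step
```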
arXiv Detail & Related papers (2024-05-28T12:23:16Z)
- Context Retrieval via Normalized Contextual Latent Interaction for Conversational Agent [3.9635467316436133]
We present PK-NCLI, a novel method that accurately and efficiently identifies relevant auxiliary information to improve the quality of conversational responses.
Our experimental results indicate that PK-NCLI outperforms the state-of-the-art method, PK-FoCus, in terms of perplexity, knowledge grounding, and training efficiency.
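A toy sketch of the retrieval idea follows: rank candidate persona/knowledge passages by normalized (cosine) interaction with the dialogue-context embedding. PK-NCLI's actual architecture is more involved than this.

```python
# Toy sketch: score auxiliary passages by normalized latent interaction
# (cosine similarity) with the dialogue-context embedding.
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def rank_candidates(context_emb: np.ndarray, cand_embs: np.ndarray) -> np.ndarray:
    scores = normalize(cand_embs) @ normalize(context_emb)  # cosine scores
    return np.argsort(-scores)  # indices, most relevant passage first

rng = np.random.default_rng(1)
context = rng.normal(size=32)           # dialogue-context embedding
candidates = rng.normal(size=(10, 32))  # persona/knowledge snippet embeddings
best = rank_candidates(context, candidates)[0]
```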
arXiv Detail & Related papers (2023-12-01T18:53:51Z)
- ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate [57.71597869337909]
We build a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of generated responses from different models.
Our analysis shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments.
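A hedged sketch of a multi-agent referee panel in this spirit follows, assuming the OpenAI Python client; the personas, single debate round, and score parsing are illustrative simplifications of ChatEval's setup.

```python
# Sketch of a multi-agent referee panel: judges with different personas
# score an answer while seeing earlier verdicts, and scores are averaged.
import re
from statistics import mean
from openai import OpenAI

client = OpenAI()
PERSONAS = ["strict grammarian", "domain expert", "general reader"]  # assumed

def judge(persona: str, question: str, answer: str, debate: str) -> str:
    prompt = (
        f"As a {persona}, rate the answer from 1 to 10 and justify briefly.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        f"Other judges said: {debate or 'nothing yet'}\n"
        "End your reply with 'SCORE: <n>'."
    )
    return client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}]
    ).choices[0].message.content

def panel_score(question: str, answer: str) -> float:
    debate, scores = "", []
    for persona in PERSONAS:
        verdict = judge(persona, question, answer, debate)
        debate += f"\n[{persona}] {verdict}"
        match = re.search(r"SCORE:\s*(\d+)", verdict)  # simplistic parsing
        if match:
            scores.append(int(match.group(1)))
    return mean(scores)
```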
arXiv Detail & Related papers (2023-08-14T15:13:04Z)
- EZInterviewer: To Improve Job Interview Performance with Mock Interview Generator [60.2099886983184]
EZInterviewer aims to learn from online interview data and provide mock interview services to job seekers.
To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs.
arXiv Detail & Related papers (2023-01-03T07:00:30Z)
- Towards Data Distillation for End-to-end Spoken Conversational Question Answering [65.124088336738]
We propose a new Spoken Conversational Question Answering (SCQA) task.
SCQA aims at enabling QA systems to model complex dialogue flows given speech utterances and text corpora.
Our main objective is to build a QA system to deal with conversational questions both in spoken and text forms.
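As a point of reference, the conventional cascaded baseline (off-the-shelf ASR followed by extractive text QA) can be sketched as below using Whisper and Transformers; the paper's end-to-end SCQA setting is precisely what such a cascade does not achieve.

```python
# Cascaded baseline sketch: ASR transcription, then extractive text QA.
# This shows only the pipeline the SCQA task aims to improve upon.
import whisper                     # pip install openai-whisper
from transformers import pipeline  # pip install transformers

asr = whisper.load_model("base")
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def spoken_qa(audio_path: str, question: str, text_context: str) -> str:
    spoken_context = asr.transcribe(audio_path)["text"]  # speech -> text
    context = spoken_context + "\n" + text_context       # fuse both sources
    return qa(question=question, context=context)["answer"]
```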
arXiv Detail & Related papers (2020-10-18T05:53:39Z)
- Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks: next session prediction, utterance restoration, incoherence detection, and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
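A simplified sketch of how training examples for the four auxiliary tasks could be constructed from dialogue sessions follows; the sampling and corruption strategies are assumptions, not the paper's exact procedure.

```python
# Simplified construction of training examples for the four auxiliary
# self-supervised tasks; sampling and corruption choices are assumptions.
import random

def make_ssl_examples(session: list[str], next_session: list[str],
                      random_session: list[str]) -> dict:
    """Assumes speakers alternate turns and `session` has >= 3 utterances."""
    i = random.randrange(1, len(session))
    masked, held_out = session.copy(), session[i]
    masked[i] = "[MASK]"                  # utterance restoration target
    shuffled = session.copy()
    random.shuffle(shuffled)              # destroys local coherence
    return {
        "next_session_prediction": [(session, next_session, 1),
                                    (session, random_session, 0)],
        "utterance_restoration": (masked, held_out),
        "incoherence_detection": [(session, 1), (shuffled, 0)],
        # Alternating speakers => turns 0 and 2 come from the same speaker.
        "consistency_discrimination": (session[0], session[2]),
    }
```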
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
- Leveraging Multimodal Behavioral Analytics for Automated Job Interview Performance Assessment and Feedback [0.5872014229110213]
Behavioral cues play a significant part in human communication and cognitive perception.
We propose a multimodal analytical framework that analyzes the candidate in an interview scenario.
We use these multimodal data sources to construct a composite representation, which is used to train machine learning classifiers that predict interview performance labels.
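A minimal sketch of the composite-representation idea follows, with feature extraction stubbed out by random data; a real system would compute prosodic, facial, and lexical features per candidate.

```python
# Sketch of the composite representation: concatenate per-modality feature
# vectors and train a standard classifier. Features are stubbed with
# random data purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 200
audio = rng.normal(size=(n, 12))    # stand-in for prosodic features
video = rng.normal(size=(n, 20))    # stand-in for facial/gesture features
text = rng.normal(size=(n, 50))     # stand-in for lexical features
X = np.hstack([audio, video, text])  # composite representation
y = rng.integers(0, 2, size=n)       # e.g. high vs. low performance

clf = RandomForestClassifier(random_state=0).fit(X[:150], y[:150])
held_out_accuracy = clf.score(X[150:], y[150:])  # held-out evaluation
```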
arXiv Detail & Related papers (2020-06-14T14:20:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.