GPT-Powered Elicitation Interview Script Generator for Requirements Engineering Training
- URL: http://arxiv.org/abs/2406.11439v1
- Date: Mon, 17 Jun 2024 11:53:55 GMT
- Title: GPT-Powered Elicitation Interview Script Generator for Requirements Engineering Training
- Authors: Binnur Görer, Fatma Başak Aydemir
- Abstract summary: We develop a specialized GPT agent for auto-generating interview scripts.
The GPT agent is equipped with a dedicated knowledge base tailored to the guidelines and best practices of requirements elicitation interview procedures.
We employ a prompt chaining approach to mitigate GPT's output length constraint and generate thorough, detailed interview scripts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Elicitation interviews are the most common requirements elicitation technique, and proficiency in conducting them is crucial for effective requirements elicitation. Traditional training methods, typically limited to textbook learning, may not sufficiently address the practical complexities of interviewing techniques. Practical training with varied interview scenarios is important for understanding how to apply theoretical knowledge in real-world contexts. However, there is a shortage of educational interview material, as creating interview scripts requires both technical expertise and creativity. To address this issue, we develop a specialized GPT agent for auto-generating interview scripts. The GPT agent is equipped with a dedicated knowledge base tailored to the guidelines and best practices of requirements elicitation interview procedures. We employ a prompt chaining approach to mitigate GPT's output length constraint and generate thorough, detailed interview scripts: the interview is divided into sections, and a distinct prompt is crafted for each, allowing complete content to be generated section by section. The generated scripts are assessed through standard natural language generation evaluation metrics and an expert judgment study, confirming their applicability in requirements engineering training.
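The prompt chaining described in the abstract can be pictured with a short Python sketch: the script is produced section by section, with each prompt carrying the scenario plus the text generated so far. The section breakdown, prompt wording, and gpt-4o model below are illustrative assumptions, not the authors' implementation (the paper builds a custom GPT agent with a dedicated knowledge base).

```python
# Minimal sketch of prompt chaining for interview script generation.
# Assumptions (not from the paper): the SECTIONS breakdown, the prompt
# wording, and the gpt-4o model choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SECTIONS = [  # assumed breakdown of an elicitation interview
    "opening and rapport building",
    "context questions about the problem domain",
    "core elicitation questions",
    "closing, summary, and next steps",
]

def generate_section(scenario: str, section: str, script_so_far: str) -> str:
    """Ask the model for one section, conditioned on what was already written."""
    prompt = (
        f"Interview scenario: {scenario}\n\n"
        f"Script written so far:\n{script_so_far or '(none yet)'}\n\n"
        f"Continue the requirements elicitation interview script by writing "
        f"only the '{section}' section, following elicitation best practices."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def build_script(scenario: str) -> str:
    # Chain the prompts: each call sees the accumulated script, which keeps
    # the sections coherent while sidestepping the per-response length cap.
    script = ""
    for section in SECTIONS:
        script += "\n\n" + generate_section(scenario, section, script)
    return script.strip()

print(build_script("A smart-home startup needs requirements for an energy dashboard."))
```

Feeding the accumulated script back into each prompt is what keeps the independently generated sections consistent with one another, which is the point of chaining rather than issuing one oversized prompt.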
Related papers
- Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents (arXiv 2024-10-01)
We present InsCoQA, a novel benchmark for evaluating large language models (LLMs) in the context of conversational question answering (CQA).
Sourced from extensive, encyclopedia-style instructional content, InsCoQA assesses models on their ability to retrieve, interpret, and accurately summarize procedural guidance from multiple documents.
We also propose InsEval, an LLM-assisted evaluator that measures the integrity and accuracy of generated responses and procedural instructions.
- Exploring Prompt Engineering Practices in the Enterprise (arXiv 2024-03-13)
A prompt is a natural language instruction designed to elicit certain behaviour or output from a model.
For complex tasks and tasks with specific requirements, prompt design is not trivial.
We analyze sessions of prompt editing behavior, categorizing the parts of prompts users iterated on and the types of changes they made.
- Exploring Emerging Technologies for Requirements Elicitation Interview Training: Empirical Assessment of Robotic and Virtual Tutors (arXiv 2023-04-28)
We propose an architecture for a Requirements Elicitation Interview Training (REIT) system based on emerging educational technologies.
We demonstrate the applicability of REIT through two implementations: Ro with a physical robotic agent and Vo with a virtual voice-only agent.
- EZInterviewer: To Improve Job Interview Performance with Mock Interview Generator (arXiv 2023-01-03)
EZInterviewer aims to learn from online interview data and provide mock interview services to job seekers.
To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs.
- Prompting Language Models for Linguistic Structure (arXiv 2022-11-15)
We present a structured prompting approach for linguistic structured prediction tasks.
We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking.
We find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels.
- Context-Tuning: Learning Contextualized Prompts for Natural Language Generation (arXiv 2022-01-21)
We propose a novel continuous prompting approach, called Context-Tuning, to fine-tune PLMs for natural language generation.
First, the prompts are derived from the input text, so that they can elicit useful knowledge from PLMs for generation.
Second, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the generation process.
- Deep Learning Interviews: Hundreds of fully solved job interview questions from a wide range of key topics in AI (arXiv 2021-12-30)
Deep Learning Interviews is designed to both rehearse interview or exam specific topics and provide a well-organized overview of the field.
The book's contents form a large inventory of topics relevant to DL job interviews and graduate-level exams.
It is widely accepted that the training of every computer scientist must include the fundamental theorems of ML.
- Dialogue-oriented Pre-training (arXiv 2021-06-01)
We propose three strategies to simulate conversational features on general plain text.
Dialog-PrLM is fine-tuned on three public multi-turn dialogue datasets.
- Inquisitive Question Generation for High Level Text Comprehension (arXiv 2020-10-04)
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.