Improving Ontology Requirements Engineering with OntoChat and Participatory Prompting
- URL: http://arxiv.org/abs/2408.15256v3
- Date: Wed, 18 Sep 2024 16:09:40 GMT
- Title: Improving Ontology Requirements Engineering with OntoChat and Participatory Prompting
- Authors: Yihang Zhao, Bohui Zhang, Xi Hu, Shuyin Ouyang, Jongmo Kim, Nitisha Jain, Jacopo de Berardinis, Albert Meroño-Peñuela, Elena Simperl,
- Abstract summary: ORE has primarily relied on manual methods, such as interviews and collaborative forums, to gather user requirements from domain experts.
Current OntoChat offers a framework for ORE that utilises large language models (LLMs) to streamline the process.
This study produces pre-defined prompt templates based on user queries, focusing on creating and refining personas, goals, scenarios, sample data, and data resources for user stories.
- Score: 3.3241053483599563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Past ontology requirements engineering (ORE) has primarily relied on manual methods, such as interviews and collaborative forums, to gather user requirements from domain experts, especially in large projects. Current OntoChat offers a framework for ORE that utilises large language models (LLMs) to streamline the process through four key functions: user story creation, competency question (CQ) extraction, CQ filtration and analysis, and ontology testing support. In OntoChat, users are expected to prompt the chatbot to generate user stories. However, preliminary evaluations revealed that they struggle to do this effectively. To address this issue, we experimented with a research method called participatory prompting, which involves researcher-mediated interactions to help users without deep knowledge of LLMs use the chatbot more effectively. This participatory prompting user study produces pre-defined prompt templates based on user queries, focusing on creating and refining personas, goals, scenarios, sample data, and data resources for user stories. These refined user stories will subsequently be converted into CQs.
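The abstract's pre-defined prompt templates for the five user-story components can be pictured as a small template library. The sketch below is purely illustrative: the field names follow the abstract, but the template wording and function names are hypothetical, not OntoChat's actual implementation.

```python
# Illustrative sketch of pre-defined prompt templates for user-story
# elicitation. All wording and identifiers are hypothetical, not taken
# from OntoChat's actual templates.

USER_STORY_FIELDS = ["persona", "goal", "scenario", "sample data", "data resources"]

TEMPLATE = (
    "You are helping a domain expert write an ontology user story.\n"
    "Focus on the '{field}' component.\n"
    "Domain expert input: {user_query}\n"
    "Draft or refine the {field} for this user story."
)

def build_prompt(field: str, user_query: str) -> str:
    """Fill the shared template for one user-story component."""
    if field not in USER_STORY_FIELDS:
        raise ValueError(f"unknown user-story component: {field}")
    return TEMPLATE.format(field=field, user_query=user_query)
```

In the participatory-prompting setting, a researcher would select the component and pass along the user's raw query, so the user never has to engineer the prompt themselves.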
Related papers
- An Approach for Auto Generation of Labeling Functions for Software Engineering Chatbots [3.1911318265930944]
We propose an approach to automatically generate labeling functions (LFs) by extracting patterns from labeled user queries.
We evaluate the effectiveness of our approach by applying it to the queries of four diverse SE datasets.
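A labeling function of the pattern-matching kind described above can be sketched as follows; the pattern and label values are invented for illustration, not taken from the paper's datasets.

```python
# Toy pattern-based labeling function of the kind the paper proposes to
# auto-generate from labeled user queries. Pattern and labels are
# illustrative only.
import re

ABSTAIN, BUILD_ISSUE = -1, 0

def lf_build_failure(query: str) -> int:
    """Label a chatbot query as a build issue if it matches a mined pattern,
    otherwise abstain (the Snorkel-style weak-supervision convention)."""
    pattern = re.compile(r"\b(build|compile)\w*\s+(fail|error)", re.IGNORECASE)
    return BUILD_ISSUE if pattern.search(query) else ABSTAIN
```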
arXiv Detail & Related papers (2024-10-09T17:34:14Z)
- Into the Unknown Unknowns: Engaged Human Learning through Participation in Language Model Agent Conversations [8.848859080368799]
Collaborative STORM lets users observe and steer the discourse among several LM agents.
The agents ask questions on the user's behalf, allowing the user to discover unknown unknowns serendipitously.
For automatic evaluation, we construct the WildSeek dataset by collecting real information-seeking records with user goals.
arXiv Detail & Related papers (2024-08-27T17:50:03Z)
- Conversational Prompt Engineering [14.614531952444013]
We propose Conversational Prompt Engineering (CPE), a user-friendly tool that helps users create personalized prompts for their specific tasks.
CPE uses a chat model to briefly interact with users, helping them articulate their output preferences and integrating these into the prompt.
A user study on summarization tasks demonstrates the value of CPE in creating personalized, high-performing prompts.
arXiv Detail & Related papers (2024-08-08T16:18:39Z)
- OntoChat: a Framework for Conversational Ontology Engineering using Language Models [0.3141085922386211]
We introduce OntoChat, a framework for conversational ontology engineering that supports requirement elicitation, analysis, and testing.
We evaluate OntoChat by replicating the engineering of the Music Meta Ontology, and collecting preliminary metrics on the effectiveness of each component from users.
arXiv Detail & Related papers (2024-03-09T14:04:06Z)
- Beyond ChatBots: ExploreLLM for Structured Thoughts and Personalized Model Responses [35.74453152447319]
ExploreLLM allows users to structure their thoughts, explore different options, navigate through choices and recommendations, and more easily steer models to generate more personalized responses.
We conduct a user study and show that users find it helpful to use ExploreLLM for exploratory or planning tasks, because it provides a useful schema-like structure to the task, and guides users in planning.
The study also suggests that users can more easily personalize responses with high-level preferences with ExploreLLM.
arXiv Detail & Related papers (2023-12-01T18:31:28Z)
- Cache & Distil: Optimising API Calls to Large Language Models [82.32065572907125]
Large-scale deployment of generative AI tools often depends on costly API calls to a Large Language Model (LLM) to fulfil user queries.
To curtail the frequency of these calls, one can employ a smaller language model -- a student.
This student gradually gains proficiency in independently handling an increasing number of user requests.
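The student-teacher routing idea above can be sketched in a few lines: answer with the cheap student when it is confident, otherwise fall back to the costly LLM API and use that answer to train the student. The callables and threshold here are stand-ins, not the paper's actual models or policy.

```python
# Sketch of student-teacher routing for LLM API-call reduction.
# `student`, `teacher`, and `train_student` are stand-in callables;
# the confidence threshold is an illustrative choice.

def route(query, student, teacher, train_student, threshold=0.8):
    answer, confidence = student(query)
    if confidence >= threshold:
        return answer, "student"       # served locally, no API cost
    answer = teacher(query)            # costly LLM API call
    train_student(query, answer)       # distil the teacher's answer
    return answer, "teacher"
```

Over time the student handles a growing share of queries, which is the cost saving the paper targets.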
arXiv Detail & Related papers (2023-10-20T15:01:55Z)
- The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by, or differ from, traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z)
- Search-Engine-augmented Dialogue Response Generation with Cheaply Supervised Query Production [98.98161995555485]
We propose a dialogue model that can access the vast and dynamic information from any search engine for response generation.
As the core module, a query producer is used to generate queries from a dialogue context to interact with a search engine.
Experiments show that our query producer can achieve R@1 and R@5 rates of 62.4% and 74.8% for retrieving gold knowledge.
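The R@1 and R@5 figures quoted above are standard recall-at-k metrics: the fraction of queries for which the gold knowledge appears in the top-k retrieved results. A minimal sketch of the computation, with invented data:

```python
# Recall@k over a batch of retrieval results: the fraction of queries
# whose gold item appears among the top-k ranked candidates.

def recall_at_k(ranked_results, gold, k):
    """ranked_results: one ranked candidate list per query;
    gold: the gold item for each query."""
    hits = sum(1 for results, g in zip(ranked_results, gold) if g in results[:k])
    return hits / len(gold)
```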
arXiv Detail & Related papers (2023-02-16T01:58:10Z)
- Disentangling Online Chats with DAG-Structured LSTMs [55.33014148383343]
DAG-LSTMs are a generalization of Tree-LSTMs that can handle directed acyclic dependencies.
We show that our proposed model achieves state-of-the-art performance on the task of recovering reply-to relations.
arXiv Detail & Related papers (2021-06-16T18:00:00Z)
- Mining Implicit Relevance Feedback from User Behavior for Web Question Answering [92.45607094299181]
We make the first study to explore the correlation between user behavior and passage relevance.
Our approach significantly improves the accuracy of passage ranking without extra human labeled data.
In practice, this work has proved effective to substantially reduce the human labeling cost for the QA service in a global commercial search engine.
arXiv Detail & Related papers (2020-06-13T07:02:08Z)
- Conversations with Search Engines: SERP-based Conversational Response Generation [77.1381159789032]
We create a suitable dataset, the Search as a Conversation (SaaC) dataset, for the development of pipelines for conversations with search engines.
We also develop a state-of-the-art pipeline for conversations with search engines, the Conversations with Search Engines (CaSE) using this dataset.
CaSE enhances the state of the art by introducing a supporting token identification module and a prior-aware pointer generator.
arXiv Detail & Related papers (2020-04-29T13:07:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.