Comparing Generative Chatbots Based on Process Requirements
- URL: http://arxiv.org/abs/2312.03741v1
- Date: Tue, 28 Nov 2023 18:25:22 GMT
- Title: Comparing Generative Chatbots Based on Process Requirements
- Authors: Luis Fernando Lins, Nathalia Nascimento, Paulo Alencar, Toacy
Oliveira, Donald Cowan
- Abstract summary: Generative-based chatbots are powered by models with billions of parameters and support conversational intelligence.
This paper compares the performance of prominent generative models, GPT and PaLM, in the context of process execution support.
- Score: 2.645089622684808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Business processes are commonly represented by modelling languages, such as
Event-driven Process Chain (EPC), Yet Another Workflow Language (YAWL), and the
most popular standard notation for modelling business processes, the Business
Process Model and Notation (BPMN). Most recently, chatbots, programs that allow
users to interact with a machine using natural language, have been increasingly
used for business process execution support. A recent category of chatbots
worth mentioning is generative-based chatbots, powered by Large Language Models
(LLMs) such as OpenAI's Generative Pre-Trained Transformer (GPT) model and
Google's Pathways Language Model (PaLM), which have billions of parameters
and support conversational intelligence. However, it is not clear
whether generative-based chatbots are able to understand and meet the
requirements of constructs such as those provided by BPMN for process execution
support. This paper presents a case study to compare the performance of
prominent generative models, GPT and PaLM, in the context of process execution
support. The research sheds light on the challenging problem of using
conversational approaches supported by generative chatbots to understand
process-aware modelling notations and support users in executing their
tasks.
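As a minimal sketch of the idea behind process execution support (not code from the paper), the snippet below parses a toy BPMN fragment and turns its tasks and sequence flows into a natural-language prompt that could be sent to a generative chatbot such as GPT or PaLM. The process, task names, and prompt wording are all invented for illustration; only the BPMN 2.0 XML namespace and element names (`userTask`, `sequenceFlow`) come from the standard.

```python
# Illustrative sketch: deriving a process-execution prompt for a
# generative chatbot from a minimal BPMN fragment (hypothetical example).
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

# A toy BPMN process: two user tasks connected by one sequence flow.
BPMN_XML = """\
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <bpmn:process id="order">
    <bpmn:userTask id="t1" name="Check inventory"/>
    <bpmn:userTask id="t2" name="Confirm order"/>
    <bpmn:sequenceFlow id="f1" sourceRef="t1" targetRef="t2"/>
  </bpmn:process>
</bpmn:definitions>
"""

def build_prompt(bpmn_xml: str) -> str:
    """Extract task names and flows, then phrase them as a chatbot prompt."""
    root = ET.fromstring(bpmn_xml)
    # Map task ids to their human-readable names.
    names = {t.get("id"): t.get("name")
             for t in root.iter(f"{{{BPMN_NS}}}userTask")}
    # Collect the ordered (source, target) pairs from the sequence flows.
    flows = [(f.get("sourceRef"), f.get("targetRef"))
             for f in root.iter(f"{{{BPMN_NS}}}sequenceFlow")]
    steps = "; then ".join(f"{names[s]} -> {names[t]}" for s, t in flows)
    return ("You are a process-execution assistant. The process contains "
            f"the ordered steps: {steps}. What should the user do first?")

print(build_prompt(BPMN_XML))
```

In a study like the one described, the resulting prompt would be submitted to each model and the responses compared against the behaviour the BPMN constructs prescribe.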
Related papers
- Computational Argumentation-based Chatbots: a Survey [0.4024850952459757]
The present survey sifts through the literature to review papers concerning this kind of argumentation-based bot.
It draws conclusions about the drawbacks and benefits of this approach.
It also envisages possible future developments and integration with Transformer-based architectures and state-of-the-art Large Language Models.
arXiv Detail & Related papers (2024-01-07T11:20:42Z) - Interactive Planning Using Large Language Models for Partially
Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z) - Benchmarking Large Language Model Capabilities for Conditional
Generation [15.437176676169997]
We discuss how to adapt existing application-specific generation benchmarks to PLMs.
We show that PLMs differ in their applicability to different data regimes and their generalization to multiple languages.
arXiv Detail & Related papers (2023-06-29T08:59:40Z) - Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions
with Large Language Model [63.66204449776262]
Instruct2Act is a framework that maps multi-modal instructions to sequential actions for robotic manipulation tasks.
Our approach is adjustable and flexible in accommodating various instruction modalities and input types.
Our zero-shot method outperformed many state-of-the-art learning-based policies in several tasks.
arXiv Detail & Related papers (2023-05-18T17:59:49Z) - Prompted LLMs as Chatbot Modules for Long Open-domain Conversation [7.511596831927614]
We propose MPC, a new approach for creating high-quality conversational agents without the need for fine-tuning.
Our method utilizes pre-trained large language models (LLMs) as individual modules for long-term consistency and flexibility.
arXiv Detail & Related papers (2023-05-08T08:09:00Z) - Conversational Process Modeling: Can Generative AI Empower Domain
Experts in Creating and Redesigning Process Models? [0.0]
This work provides a systematic analysis of existing chatbots for support of conversational process modeling.
A literature review on conversational process modeling is performed, resulting in a taxonomy of application scenarios for conversational process modeling.
An evaluation method is applied for the output of AI-driven chatbots with respect to completeness and correctness of the process models.
arXiv Detail & Related papers (2023-04-19T06:54:14Z) - Stabilized In-Context Learning with Pre-trained Language Models for Few
Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
arXiv Detail & Related papers (2023-02-12T15:05:10Z) - An Exploration of Prompt Tuning on Generative Spoken Language Model for
Speech Processing Tasks [112.1942546460814]
We report the first exploration of the prompt tuning paradigm for speech processing tasks based on Generative Spoken Language Model (GSLM).
Experiment results show that the prompt tuning technique achieves competitive performance in speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models.
arXiv Detail & Related papers (2022-03-31T03:26:55Z) - A Conversational Paradigm for Program Synthesis [110.94409515865867]
We propose a conversational program synthesis approach via large language models.
We train a family of large language models, called CodeGen, on natural language and programming language data.
Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm.
arXiv Detail & Related papers (2022-03-25T06:55:15Z) - Prompt Programming for Large Language Models: Beyond the Few-Shot
Paradigm [0.0]
We discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language.
We introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks.
arXiv Detail & Related papers (2021-02-15T05:27:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.