Workflow Discovery from Dialogues in the Low Data Regime
- URL: http://arxiv.org/abs/2205.11690v1
- Date: Tue, 24 May 2022 01:12:03 GMT
- Title: Workflow Discovery from Dialogues in the Low Data Regime
- Authors: Amine El Hattami, Stefania Raimondo, Issam Laradji, David Vazquez, Pau
Rodriguez, Chris Pal
- Abstract summary: We present experiments where we summarize dialogues in the Action-Based Conversations Dataset (ABCD) with workflows.
We propose and evaluate an approach that conditions models on the set of allowable action steps.
Our approach also improves zero-shot and few-shot WD performance when transferring learned models to entirely new domains.
- Score: 13.14503978966984
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Text-based dialogues are now widely used to solve real-world problems. In
cases where solution strategies are already known, they can sometimes be
codified into workflows and used to guide humans or artificial agents through
the task of helping clients. We are interested in the situation where a formal
workflow may not yet exist, but we wish to discover the steps of actions that
have been taken to resolve problems. We examine a novel transformer-based
approach for this situation and we present experiments where we summarize
dialogues in the Action-Based Conversations Dataset (ABCD) with workflows.
Since the ABCD dialogues were generated using known workflows to guide agents
we can evaluate our ability to extract such workflows using ground truth
sequences of action steps, organized as workflows. We propose and evaluate an
approach that conditions models on the set of allowable action steps and we
show that using this strategy we can improve workflow discovery (WD)
performance. Our conditioning approach also improves zero-shot and few-shot WD
performance when transferring learned models to entirely new domains (i.e. the
MultiWOZ setting). Further, a modified variant of our architecture achieves
state-of-the-art performance on the related but different problems of Action
State Tracking (AST) and Cascading Dialogue Success (CDS) on the ABCD.
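The conditioning strategy described in the abstract can be illustrated with a short sketch. The paper does not publish this exact format; the function name, separators, and action labels below are illustrative assumptions about how a dialogue and the set of allowable action steps might be concatenated into a single seq2seq (e.g. T5-style) input so the decoder can draw on the candidate actions:

```python
# Hypothetical sketch of conditioning a seq2seq model on allowable
# action steps for workflow discovery. The input format, field names,
# and action labels are assumptions, not taken from the paper.

def build_wd_input(dialogue_turns, allowable_actions):
    """Format one workflow-discovery training input.

    dialogue_turns: list of (speaker, utterance) pairs.
    allowable_actions: set of action-step names the model may emit.
    """
    dialogue = " ".join(
        f"{speaker}: {utterance}" for speaker, utterance in dialogue_turns
    )
    # Sort for a deterministic prompt; join candidates with a separator.
    actions = "; ".join(sorted(allowable_actions))
    return f"Dialogue: {dialogue} Actions: {actions}"


turns = [
    ("customer", "I want to return my boots."),
    ("agent", "Sure, can I have your order ID?"),
]
candidates = {"pull-up-account", "validate-purchase", "offer-refund"}
print(build_wd_input(turns, candidates))
```

The target sequence would then be the ground-truth workflow (an ordered list of action steps), letting standard fine-tuning recover workflows from dialogues.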
Related papers
- Learning Task Representations from In-Context Learning [73.72066284711462]
Large language models (LLMs) have demonstrated remarkable proficiency in in-context learning.
We introduce an automated formulation for encoding task information in ICL prompts as a function of attention heads.
We show that our method's effectiveness stems from aligning the distribution of the last hidden state with that of an optimally performing in-context-learned model.
arXiv Detail & Related papers (2025-02-08T00:16:44Z) - Flow: A Modular Approach to Automated Agentic Workflow Generation [53.073598156915615]
Multi-agent frameworks powered by large language models (LLMs) have demonstrated great success in automated planning and task execution.
However, the effective adjustment of agentic workflows during execution has not been well studied.
arXiv Detail & Related papers (2025-01-14T04:35:37Z) - Benchmarking Agentic Workflow Generation [80.74757493266057]
We introduce WorFBench, a unified workflow generation benchmark with multi-faceted scenarios and intricate graph workflow structures.
We also present WorFEval, a systemic evaluation protocol utilizing subsequence and subgraph matching algorithms.
We observe that the generated workflows can enhance downstream tasks, enabling agents to achieve superior performance with less time during inference.
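The subsequence-matching evaluation mentioned above can be sketched in a few lines. This is a minimal stand-in, not the actual WorFEval protocol: it scores a predicted workflow by the longest common subsequence it shares with the gold action sequence, which rewards correct steps in the correct order:

```python
# Minimal sketch of subsequence-based workflow matching, in the spirit
# of the evaluation described above. The real WorFEval metric may be
# defined differently; this is an illustrative assumption.

def lcs_len(pred, gold):
    """Dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(gold) + 1) for _ in range(len(pred) + 1)]
    for i, p in enumerate(pred, 1):
        for j, g in enumerate(gold, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if p == g else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(pred)][len(gold)]


def sequence_match_score(pred, gold):
    """Fraction of gold steps recovered in order by the prediction."""
    return lcs_len(pred, gold) / max(len(gold), 1)


pred = ["verify-identity", "offer-refund", "end-call"]
gold = ["verify-identity", "check-order", "offer-refund"]
print(sequence_match_score(pred, gold))  # 2 of 3 gold steps matched, in order
```

Subgraph matching for graph-structured workflows would generalize this idea from sequences to partial orders.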
arXiv Detail & Related papers (2024-10-10T12:41:19Z) - ComfyGen: Prompt-Adaptive Workflows for Text-to-Image Generation [87.39861573270173]
We introduce the novel task of prompt-adaptive workflow generation, where the goal is to automatically tailor a workflow to each user prompt.
We propose two LLM-based approaches to tackle this task: a tuning-based method that learns from user-preference data, and a training-free method that uses the LLM to select existing flows.
Our work shows that prompt-dependent flow prediction offers a new pathway to improving text-to-image generation quality, complementing existing research directions in the field.
arXiv Detail & Related papers (2024-10-02T16:43:24Z) - FlowBench: Revisiting and Benchmarking Workflow-Guided Planning for LLM-based Agents [64.1759086221016]
We present FlowBench, the first benchmark for workflow-guided planning.
FlowBench covers 51 different scenarios from 6 domains, with knowledge presented in diverse formats.
Results indicate that current LLM agents need considerable improvements for satisfactory planning.
arXiv Detail & Related papers (2024-06-21T06:13:00Z) - Workflow-Guided Response Generation for Task-Oriented Dialogue [4.440232673676693]
We propose a novel framework based on reinforcement learning (RL) to generate dialogue responses that are aligned with a given workflow.
Our framework consists of ComplianceScorer, a metric designed to evaluate how well a generated response executes the specified action.
Our findings indicate that our RL-based framework outperforms baselines and is effective at generating responses that comply with the intended workflow while being expressed in a natural and fluent manner.
arXiv Detail & Related papers (2023-11-14T16:44:33Z) - Leveraging Explicit Procedural Instructions for Data-Efficient Action Prediction [5.448684866061922]
Task-oriented dialogues often require agents to enact complex, multi-step procedures in order to meet user requests.
Large language models have found success automating these dialogues in constrained environments, but their widespread deployment is limited by the substantial quantities of task-specific data required for training.
This paper presents a data-efficient solution to constructing dialogue systems, leveraging explicit instructions derived from agent guidelines.
arXiv Detail & Related papers (2023-06-06T18:42:08Z) - Improving Generalization in Task-oriented Dialogues with Workflows and Action Plans [1.0499611180329804]
Task-oriented dialogue is difficult in part because it involves understanding user intent, collecting information from the user, executing API calls, and generating fluent responses.
We show that large pre-trained language models can be fine-tuned end-to-end to create multi-step task-oriented dialogue agents.
Our experiments confirm that this approach alone cannot reliably perform new multi-step tasks that are unseen during training.
arXiv Detail & Related papers (2023-06-02T17:54:36Z) - In-Context Learning for Few-Shot Dialogue State Tracking [55.91832381893181]
We propose an in-context (IC) learning framework for few-shot dialogue state tracking (DST).
A large pre-trained language model (LM) takes a test instance and a few annotated examples as input, and directly decodes the dialogue states without any parameter updates.
This makes the LM more flexible and scalable compared to prior few-shot DST work when adapting to new domains and scenarios.
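The in-context setup described above can be sketched as prompt construction: a frozen LM receives a few annotated examples plus the test dialogue and decodes the state directly, with no parameter updates. The template and slot notation below are illustrative assumptions, not the paper's exact format:

```python
# Hedged sketch of an in-context-learning prompt for few-shot DST.
# The "Dialogue:/State:" template and slot-value notation are
# illustrative assumptions; the paper's retrieval and formatting
# choices may differ.

def build_dst_prompt(examples, test_dialogue):
    """Assemble a few-shot prompt for a frozen language model.

    examples: list of (dialogue, annotated_state) pairs.
    test_dialogue: the dialogue whose state should be decoded.
    """
    parts = [
        f"Dialogue: {dialogue}\nState: {state}"
        for dialogue, state in examples
    ]
    # End with an open "State:" field for the LM to complete.
    parts.append(f"Dialogue: {test_dialogue}\nState:")
    return "\n\n".join(parts)


demos = [
    ("I need a cheap hotel in the north.",
     "hotel-price=cheap; hotel-area=north"),
]
print(build_dst_prompt(demos, "Book a taxi to the station at 5pm."))
```

Because adaptation happens entirely in the prompt, new domains only require new annotated examples, not retraining.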
arXiv Detail & Related papers (2022-03-16T11:58:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.