AutoScrum: Automating Project Planning Using Large Language Models
- URL: http://arxiv.org/abs/2306.03197v1
- Date: Mon, 5 Jun 2023 19:16:37 GMT
- Title: AutoScrum: Automating Project Planning Using Large Language Models
- Authors: Martin Schroder
- Abstract summary: Recent advances in large language models have made it possible to use them for advanced reasoning.
In this paper we leverage this ability for designing complex project plans based only on knowing the current state and the desired state.
Two approaches are demonstrated - a scrum-based approach and a shortcut plan approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advancements in the field of large language models have made it
possible to use language models for advanced reasoning. In this paper we
leverage this ability for designing complex project plans based only on knowing
the current state and the desired state. Two approaches are demonstrated - a
scrum-based approach and a shortcut plan approach. The scrum-based approach
executes an automated process of requirements gathering, user story mapping,
feature identification, and task decomposition, and finally generates questions and
search terms for seeking out domain-specific information to assist with task
completion. The shortcut approach looks at the most recent snapshot of the current
and desired state and generates the next most reasonable task to do in order to
get to the desired state as quickly as possible. In this paper we automate
everything using a novel concept of "Language Programs". These are programs
written in natural language designed to process input data through the language
model. The Guidance language is used for all LLM programs. All demo source code for
this paper is available at https://github.com/autoscrum/autoscrum
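As a concrete illustration of the "Language Program" concept, the sketch below shows how one scrum step (task decomposition) and the shortcut "next task" step might be written as natural-language templates executed through the 2023-era handlebars-style API of the Guidance library. The prompt wording, the model choice, and the program names (decompose_feature, next_task) are illustrative assumptions, not taken from the AutoScrum source code.

```python
# Minimal sketch of two "Language Programs": templates written mostly in
# natural language, with {{...}} slots for input data and {{gen ...}} slots
# that ask the LLM to produce output. Assumes the 2023 handlebars-style
# Guidance API; prompts and model name are illustrative, not the paper's own.
import guidance

# Assumed completion-model backend; any Guidance-supported LLM would do.
guidance.llm = guidance.llms.OpenAI("text-davinci-003")

# Scrum-based approach: one step of the pipeline (task decomposition).
decompose_feature = guidance("""
Project goal (desired state): {{desired_state}}
Current state: {{current_state}}
Feature to implement: {{feature}}

List the concrete development tasks needed to implement this feature,
one task per line, ordered so that each task depends only on earlier ones.
Tasks:
{{gen 'tasks' max_tokens=300 temperature=0}}
""")

# Shortcut approach: look only at the latest snapshots of the current and
# desired state and propose the single next most reasonable task.
next_task = guidance("""
Current state: {{current_state}}
Desired state: {{desired_state}}

What is the single next most reasonable task that moves the project from the
current state toward the desired state as quickly as possible?
Next task: {{gen 'task' max_tokens=80 temperature=0}}
""")

# Running a program fills its template and captures the generated variable.
result = decompose_feature(
    desired_state="A CLI tool that syncs local notes to a remote wiki",
    current_state="Empty repository containing only a README",
    feature="Authentication against the wiki API",
)
print(result["tasks"])
```

Under this reading, the full scrum-based pipeline would chain several such programs (requirements gathering, user story mapping, feature identification, task decomposition, question and search-term generation), feeding each program's output into the next, while the shortcut approach simply re-runs its single template after every completed task.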
Related papers
- Automating Thought of Search: A Journey Towards Soundness and Completeness [20.944440404347908]
Planning remains one of the last standing bastions for large language models (LLMs).
We automate Thought of Search (ToS) completely, taking the human out of the loop of solving planning problems.
We achieve 100% accuracy, with minimal feedback, using LLMs of various sizes on all evaluated domains.
arXiv Detail & Related papers (2024-08-21T04:19:52Z)
- Context-aware Code Summary Generation [11.83787165247987]
Code summary generation is the task of writing natural language descriptions of a section of source code.
Recent advances in Large Language Models (LLMs) and other AI-based technologies have helped make automatic code summarization a reality.
We present an approach for including this context in recent LLM-based code summarization.
arXiv Detail & Related papers (2024-08-16T20:15:34Z)
- Code-Switched Language Identification is Harder Than You Think [69.63439391717691]
Code switching is a common phenomenon in written and spoken communication.
We look at the application of building code-switched (CS) corpora.
We make the task more realistic by scaling it to more languages.
We reformulate the task as a sentence-level multi-label tagging problem to make it more tractable.
arXiv Detail & Related papers (2024-02-02T15:38:47Z)
- Interactive Task Planning with Language Models [97.86399877812923]
An interactive robot framework accomplishes long-horizon task planning and can easily generalize to new goals or distinct tasks, even during execution.
Recent large language model based approaches can allow for more open-ended planning but often require heavy prompt engineering or domain-specific pretrained models.
We propose a simple framework that achieves interactive task planning with language models.
arXiv Detail & Related papers (2023-10-16T17:59:12Z)
- AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers [20.857692296678632]
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks.
Recent advances in large language models have shown promise for translating natural language into robot action sequences.
We show that our approach outperforms several methods using LLMs as planners in complex task domains.
arXiv Detail & Related papers (2023-06-10T21:58:29Z)
- SheetCopilot: Bringing Software Productivity to the Next Level through Large Language Models [60.171444066848856]
We propose SheetCopilot, an agent that takes a natural language task and controls a spreadsheet to fulfill the requirements.
We curate a representative dataset containing 221 spreadsheet control tasks and establish a fully automated evaluation pipeline.
Our SheetCopilot correctly completes 44.3% of tasks for a single generation, outperforming the strong code generation baseline by a wide margin.
arXiv Detail & Related papers (2023-05-30T17:59:30Z)
- XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages [105.54207724678767]
Data scarcity is a crucial issue for the development of highly multilingual NLP systems.
We propose XTREME-UP, a benchmark defined by its focus on the scarce-data scenario rather than zero-shot.
XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies.
arXiv Detail & Related papers (2023-05-19T18:00:03Z)
- Prompting Is Programming: A Query Language for Large Language Models [5.8010446129208155]
We present the novel idea of Language Model Programming (LMP).
LMP generalizes language model prompting from pure text prompts to an intuitive combination of text prompting and scripting.
We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way.
arXiv Detail & Related papers (2022-12-12T18:09:09Z)
- Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents [111.33545170562337]
We investigate the possibility of grounding high-level tasks, expressed in natural language, to a chosen set of actionable steps.
We find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans.
We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions.
arXiv Detail & Related papers (2022-01-18T18:59:45Z)
- Automated Source Code Generation and Auto-completion Using Deep Learning: Comparing and Discussing Current Language-Model-Related Approaches [0.0]
This paper compares different deep learning architectures to create and use language models based on programming code.
We discuss each approach's different strengths and weaknesses and what gaps we find to evaluate the language models or apply them in a real programming context.
arXiv Detail & Related papers (2020-09-16T15:17:04Z)
- Language Models as Few-Shot Learner for Task-Oriented Dialogue Systems [74.8759568242933]
Task-oriented dialogue systems use four connected modules, namely Natural Language Understanding (NLU), Dialogue State Tracking (DST), Dialogue Policy (DP), and Natural Language Generation (NLG).
A research challenge is to learn each module with the least amount of samples, given the high cost of data collection.
We evaluate the priming few-shot ability of language models in the NLU, DP and NLG tasks.
arXiv Detail & Related papers (2020-08-14T08:23:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.