Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains
- URL: http://arxiv.org/abs/2306.12028v2
- Date: Wed, 20 Dec 2023 06:39:37 GMT
- Title: Prompt Sapper: A LLM-Empowered Production Tool for Building AI Chains
- Authors: Yu Cheng, Jieshan Chen, Qing Huang, Zhenchang Xing, Xiwei Xu and
Qinghua Lu
- Abstract summary: We propose the concept of the AI chain and introduce the best principles and practices accumulated in software engineering over decades into AI chain engineering.
We also develop a no-code integrated development environment, Prompt Sapper, which embodies these AI chain engineering principles and patterns naturally in the process of building AI chains.
- Score: 31.080896878139402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence of foundation models, such as the large language model (LLM)
GPT-4 and the text-to-image model DALL-E, has opened up numerous possibilities
across various domains. People can now use natural language (i.e., prompts) to
communicate with AI to perform tasks. While people can use foundation models
through chatbots (e.g., ChatGPT), chat is not a production tool for building
reusable AI services, regardless of the capabilities of the underlying models.
Frameworks like LangChain allow for LLM-based application development but require
substantial programming knowledge, thus posing a barrier. To mitigate this, we
propose the concept of the AI chain and introduce the best principles and practices
that have been accumulated in software engineering for decades into AI chain
engineering, to systematise AI chain engineering methodology. We also develop a
no-code integrated development environment, Prompt Sapper, which embodies these
AI chain engineering principles and patterns naturally in the process of
building AI chains, thereby improving the performance and quality of AI chains.
With Prompt Sapper, AI chain engineers can compose prompt-based AI services on
top of foundation models through chat-based requirement analysis and visual
programming. Our user study evaluated and demonstrated the efficiency and
correctness of Prompt Sapper.
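The abstract describes an AI chain as a composition of prompt-based steps on top of a foundation model, with each step's output feeding the next prompt. Below is a minimal, hypothetical sketch of that idea in plain Python; the step names, prompt templates, and the `llm` callable are illustrative assumptions and do not represent Prompt Sapper's actual no-code workflow or API.

```python
# Minimal sketch of an AI chain: prompt-based steps composed on top of a
# foundation model, each step's output feeding the next step's prompt.
# Step names and prompts are illustrative assumptions, not Prompt Sapper's API.
from typing import Callable, List, Tuple

def run_ai_chain(task: str, llm: Callable[[str], str]) -> str:
    """Run a fixed two-step chain: analyse the requirement, then draft a solution."""
    steps: List[Tuple[str, str]] = [
        ("analyse", "List the key requirements implied by this task:\n{input}"),
        ("draft", "Using these requirements, draft a concise solution:\n{input}"),
    ]
    output = task
    for _name, template in steps:
        prompt = template.format(input=output)
        output = llm(prompt)  # each step's output becomes the next step's input
    return output

if __name__ == "__main__":
    # Stub model so the sketch runs without an API key or network access.
    fake_llm = lambda prompt: f"(model answer to: {prompt.splitlines()[0]})"
    print(run_ai_chain("Summarise user feedback into three themes", fake_llm))
```

Prompt Sapper's contribution is letting AI chain engineers assemble such chains through chat-based requirement analysis and visual programming rather than writing code like this by hand.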
Related papers
- Programming with AI: Evaluating ChatGPT, Gemini, AlphaCode, and GitHub Copilot for Programmers [0.0]
This study presents a thorough evaluation of leading programming assistants, including ChatGPT, Gemini (Bard AI), AlphaCode, and GitHub Copilot.
It emphasizes the need for ethical developmental practices to actualize AI models' full potential.
arXiv Detail & Related papers (2024-11-14T06:40:55Z)
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate the model on its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- tl;dr: Chill, y'all: AI Will Not Devour SE [5.77648992672856]
Social media provide a steady diet of dire warnings that artificial intelligence (AI) will make software engineering (SE) irrelevant or obsolete.
To the contrary, the engineering discipline of software is rich and robust.
Machine learning, large language models (LLMs) and generative AI will offer new opportunities to extend the models and methods of SE.
arXiv Detail & Related papers (2024-09-01T16:16:33Z)
- Prompt Sapper: LLM-Empowered Software Engineering Infrastructure for AI-Native Services [37.05145017386908]
Prompt Sapper is committed to supporting the development of AI-native services through AI chain engineering.
It creates a large language model (LLM) empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence.
This article will introduce the R&D motivation behind Prompt Sapper, along with its corresponding AI chain engineering methodology and technical practices.
arXiv Detail & Related papers (2023-06-04T01:47:42Z)
- OpenAGI: When LLM Meets Domain Experts [51.86179657467822]
Human Intelligence (HI) excels at combining basic skills to solve complex tasks.
This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents.
We introduce OpenAGI, an open-source platform designed for solving multi-step, real-world tasks.
arXiv Detail & Related papers (2023-04-10T03:55:35Z)
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face [85.25054021362232]
Large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning.
LLMs could act as a controller to manage existing AI models to solve complicated AI tasks.
We present HuggingGPT, an LLM-powered agent that connects various AI models in machine learning communities; a hedged sketch of this controller pattern appears after this list.
arXiv Detail & Related papers (2023-03-30T17:48:28Z)
- TaskMatrix.AI: Completing Tasks by Connecting Foundation Models with Millions of APIs [71.7495056818522]
We introduce TaskMatrix.AI as a new AI ecosystem that connects foundation models with millions of APIs for task completion.
We will present our vision of how to build such an ecosystem, explain each key component, and use study cases to illustrate both the feasibility of this vision and the main challenges we need to address next.
arXiv Detail & Related papers (2023-03-29T03:30:38Z)
- AI-in-the-Loop -- The impact of HMI in AI-based Application [0.0]
We introduce the AI-in-the-loop concept, which combines the strengths of AI and humans by enabling human-machine interaction (HMI) while the AI performs inference.
arXiv Detail & Related papers (2023-03-21T00:04:33Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
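HuggingGPT and TaskMatrix.AI, summarised above, share one architectural idea: an LLM acts as a controller that routes each sub-task to a specialist model or API. The sketch below illustrates that controller pattern under stated assumptions; the tool registry and the keyword-based planner are simplified stand-ins for a real LLM controller and real model endpoints.

```python
# Hedged sketch of the "LLM as controller" pattern: a planner picks which
# registered tool (specialist model or API) handles a given sub-task.
# The registry and keyword planner below are stand-ins, not the real systems.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "image_caption": lambda x: f"caption for: {x}",
    "translation": lambda x: f"translation of: {x}",
    "summarisation": lambda x: f"summary of: {x}",
}

def plan(task: str) -> str:
    """Stand-in for an LLM controller mapping a task to a registered tool."""
    if "caption" in task or "image" in task:
        return "image_caption"
    if "translate" in task:
        return "translation"
    return "summarisation"

def execute(task: str) -> str:
    tool_name = plan(task)  # the controller selects the specialist tool
    return f"{tool_name}: {TOOLS[tool_name](task)}"

if __name__ == "__main__":
    print(execute("translate this sentence into French"))
    print(execute("caption the attached image"))
```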