Action Engine: An LLM-based Framework for Automatic FaaS Workflow Generation
- URL: http://arxiv.org/abs/2411.19485v1
- Date: Fri, 29 Nov 2024 05:54:41 GMT
- Title: Action Engine: An LLM-based Framework for Automatic FaaS Workflow Generation
- Authors: Akiharu Esashi, Pawissanutt Lertpongrujikorn, Mohsen Amini Salehi
- Abstract summary: We propose a mechanism called Action Engine that makes use of Tool-Augmented Large Language Models (LLMs) at its kernel to interpret human language queries.
Action Engine automates FaaS workflow generation, thereby reducing the need for specialized expertise and manual design.
Our evaluations show that Action Engine can generate workflows with up to 20% higher correctness without developer involvement.
- Score: 1.5496299906248863
- License:
- Abstract: Function as a Service (FaaS) is poised to become the foundation of the next generation of cloud systems due to its inherent advantages in scalability, cost-efficiency, and ease of use. However, challenges such as the need for specialized knowledge and difficulties in building function workflows persist for cloud-native application developers. To overcome these challenges and mitigate the burden of developing FaaS-based applications, in this paper we propose a mechanism called Action Engine that makes use of Tool-Augmented Large Language Models (LLMs) at its kernel to interpret human language queries and automate FaaS workflow generation, thereby reducing the need for specialized expertise and manual design. Action Engine includes modules to identify relevant functions from the FaaS repository and seamlessly manage the data dependencies between them, ensuring that the developer's query is processed and resolved. Beyond that, Action Engine can execute the generated workflow by feeding in the user-provided parameters. Our evaluations show that Action Engine can generate workflows with up to 20% higher correctness without developer involvement. We note that Action Engine can unlock FaaS workflow generation for non-cloud-savvy developers and expedite the development cycles of cloud-native applications.
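As described above, the pipeline runs in stages: a tool-augmented LLM interprets the developer's query, relevant functions are selected from the FaaS repository, their data dependencies are wired together, and the resulting workflow is executed with user-provided parameters. The paper does not publish a reference implementation here, so the Python sketch below is purely illustrative; every name (FaaSFunction, resolve_dependencies, invoke, and so on) is an assumption about how such a pipeline could be organized, not the authors' code.

```python
# Hypothetical sketch of an Action-Engine-style pipeline; none of these names
# come from the paper, they only illustrate the stages the abstract describes.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class FaaSFunction:
    name: str
    description: str       # natural-language description shown to the LLM
    inputs: list[str]      # parameter names this function expects
    outputs: list[str]     # names of the values it produces


@dataclass
class WorkflowStep:
    function: FaaSFunction
    # each input is bound either to an upstream output ("flow") or to a
    # parameter the developer supplies at execution time ("user")
    bindings: dict[str, str] = field(default_factory=dict)


def select_functions(query: str, repository: list[FaaSFunction],
                     llm_rank: Callable) -> list[FaaSFunction]:
    """Ask a tool-augmented LLM (any callable) which registered FaaS
    functions are relevant to the developer's natural-language query."""
    return llm_rank(query, repository)


def resolve_dependencies(functions: list[FaaSFunction]) -> list[WorkflowStep]:
    """Bind each function input to an earlier output with the same name,
    falling back to a user-provided parameter when nothing upstream matches."""
    produced: set[str] = set()
    steps: list[WorkflowStep] = []
    for fn in functions:
        bindings = {inp: ("flow" if inp in produced else "user")
                    for inp in fn.inputs}
        steps.append(WorkflowStep(fn, bindings))
        produced.update(fn.outputs)
    return steps


def execute(steps: list[WorkflowStep], user_params: dict,
            invoke: Callable) -> dict:
    """Run the workflow: invoke is the platform-specific FaaS call and is
    assumed to return a dict of named outputs for each function."""
    flow_values: dict = {}
    for step in steps:
        args = {inp: (flow_values[inp] if src == "flow" else user_params[inp])
                for inp, src in step.bindings.items()}
        flow_values.update(invoke(step.function.name, args))
    return flow_values
```

The dependency-resolution step is the part the abstract emphasizes: each input is bound to an upstream output when one exists and to a developer-supplied parameter otherwise, so the generated workflow can run end to end from a single query.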
Related papers
- Adaptive Tool Use in Large Language Models with Meta-Cognition Trigger [49.81945268343162]
We propose MeCo, an adaptive decision-making strategy for external tool use.
MeCo captures high-level cognitive signals in the representation space, guiding when to invoke tools.
Our experiments show that MeCo accurately detects LLMs' internal cognitive signals and significantly improves tool-use decision-making.
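The MeCo summary above describes gating tool use on signals read from the model's representation space. As a generic illustration only (not MeCo's actual method), a lightweight probe trained on final-layer hidden states could make that call; all names and the probe choice below are assumptions.

```python
# Generic illustration (not MeCo's actual implementation): a lightweight probe
# over the model's final-layer hidden state decides whether to call a tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

probe = LogisticRegression(max_iter=1000)


def train_probe(hidden_states: np.ndarray, used_tool: np.ndarray) -> None:
    """hidden_states: (n_samples, d_model) activations for past queries;
    used_tool: 1 where the query genuinely required an external tool."""
    probe.fit(hidden_states, used_tool)


def should_call_tool(hidden_state: np.ndarray, threshold: float = 0.5) -> bool:
    """Gate tool invocation on the probe's confidence for a single query."""
    prob = probe.predict_proba(hidden_state.reshape(1, -1))[0, 1]
    return prob >= threshold
```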
arXiv Detail & Related papers (2025-02-18T15:45:01Z) - Histrio: a Serverless Actor System [44.99833362998488]
Histrio is a programming model and execution environment that simplifies the development of stateful applications.
It lifts concerns such as state management, database interaction, and concurrency handling from developers.
It guarantees exactly-once-processing consistency, meaning that the application always behaves as if any interaction with external clients was processed once and only once.
arXiv Detail & Related papers (2024-10-29T06:58:56Z) - Benchmarking Agentic Workflow Generation [80.74757493266057]
We introduce WorFBench, a unified workflow generation benchmark with multi-faceted scenarios and intricate graph workflow structures.
We also present WorFEval, a systemic evaluation protocol utilizing subsequence and subgraph matching algorithms.
We observe that the generated workflows can enhance downstream tasks, enabling them to achieve superior performance with less time during inference.
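WorFEval is described as using subsequence and subgraph matching. Purely as an illustration of the subsequence side (not the paper's actual metric), a longest-common-subsequence score between a predicted and a reference task sequence could be computed as follows.

```python
# Illustrative only: a longest-common-subsequence score between a predicted
# task sequence and a reference sequence, in the spirit of subsequence matching.
def lcs_score(predicted: list[str], reference: list[str]) -> float:
    m, n = len(predicted), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if predicted[i - 1] == reference[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / n if n else 0.0  # fraction of reference steps recovered in order


# Example: three of the four reference steps appear, in order, in the prediction.
# lcs_score(["search", "summarize", "email"],
#           ["search", "filter", "summarize", "email"]) == 0.75
```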
arXiv Detail & Related papers (2024-10-10T12:41:19Z) - Object as a Service: Simplifying Cloud-Native Development through Serverless Object Abstraction [1.7416288134936873]
We propose a new paradigm, known as Object as a Service (OaaS), that encapsulates application data and functions into the cloud object abstraction.
OaaS relieves developers from the resource and data management burden while offering built-in optimization features.
We develop a platform named Oparaca that offers state abstraction for structured and unstructured data with consistency and fault-tolerant guarantees.
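The object abstraction described above bundles application state and the functions that act on it into a single cloud-managed entity. The sketch below is a hypothetical illustration of that idea, not Oparaca's API; the class name and methods are invented for the example.

```python
# Hypothetical sketch of the Object-as-a-Service idea (not Oparaca's API):
# state and the functions that operate on it live behind one cloud-managed
# object, so the developer never wires storage and FaaS functions by hand.
from dataclasses import dataclass, field


@dataclass
class CounterObject:
    # structured state the platform would persist with consistency guarantees
    count: int = 0
    history: list[int] = field(default_factory=list)

    # object "functions" the platform would expose as invocable endpoints
    def increment(self, step: int = 1) -> int:
        self.count += step
        self.history.append(self.count)
        return self.count

    def snapshot(self) -> dict:
        return {"count": self.count, "updates": len(self.history)}
```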
arXiv Detail & Related papers (2024-08-09T06:55:00Z) - AutoFlow: Automated Workflow Generation for Large Language Model Agents [39.72700864347576]
Large Language Models (LLMs) have shown significant progress in understanding complex natural language.
To ensure that LLM agents follow an effective and reliable procedure to solve the given task, manually designed workflows are usually used.
We propose AutoFlow, a framework designed to automatically generate workflows for agents to solve complex tasks.
arXiv Detail & Related papers (2024-07-01T21:05:02Z) - Couler: Unified Machine Learning Workflow Optimization in Cloud [6.769259207650922]
Couler is a system designed for unified ML workflow optimization in the cloud.
We integrate Large Language Models (LLMs) into workflow generation, and provide a unified programming interface for various workflow engines.
Couler has successfully improved CPU/memory utilization by more than 15% and the workflow completion rate by around 17%.
arXiv Detail & Related papers (2024-03-12T12:47:32Z) - TaskWeaver: A Code-First Agent Framework [50.99683051759488]
TaskWeaver is a code-first framework for building LLM-powered autonomous agents.
It converts user requests into executable code and treats user-defined plugins as callable functions.
It provides support for rich data structures, flexible plugin usage, and dynamic plugin selection.
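Treating user-defined plugins as callable functions means the code the LLM generates can invoke them like ordinary library calls. The following sketch illustrates that pattern generically; the decorator, registry, and load_csv plugin are assumptions, not TaskWeaver's actual interface.

```python
# Generic sketch of the plugin-as-callable idea (names assumed, not
# TaskWeaver's actual interface): user-defined plugins are registered as
# ordinary callables, and generated code invokes them like library functions.
import csv
from typing import Callable

PLUGINS: dict[str, Callable] = {}


def plugin(fn: Callable) -> Callable:
    """Register a user-defined function so generated code can call it."""
    PLUGINS[fn.__name__] = fn
    return fn


@plugin
def load_csv(path: str) -> list[dict]:
    """Example plugin: read a CSV file into a list of row dictionaries."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def run_generated_code(code: str) -> None:
    """Execute LLM-generated code with registered plugins in scope.
    A real system would sandbox this; exec() here is sketch-only."""
    exec(code, dict(PLUGINS))


# run_generated_code("rows = load_csv('data.csv')\nprint(len(rows))")
```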
arXiv Detail & Related papers (2023-11-29T11:23:42Z) - CRAFT: Customizing LLMs by Creating and Retrieving from Specialized
Toolsets [75.64181719386497]
We present CRAFT, a tool creation and retrieval framework for large language models (LLMs).
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning.
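Retrieving task-specific tools from a curated toolset is commonly done by embedding similarity. The sketch below shows only that generic pattern; it is not CRAFT's published retrieval component, and all names are illustrative.

```python
# Generic illustration of retrieving tools by embedding similarity; this is
# not CRAFT's published retrieval component, only the common pattern.
import numpy as np


def retrieve_tools(task_embedding: np.ndarray,
                   tool_embeddings: dict[str, np.ndarray],
                   top_k: int = 3) -> list[str]:
    """Return the names of the top_k tools whose description embeddings
    are most cosine-similar to the task embedding."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    ranked = sorted(tool_embeddings.items(),
                    key=lambda item: cosine(task_embedding, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]
```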
arXiv Detail & Related papers (2023-09-29T17:40:26Z) - Object as a Service (OaaS): Enabling Object Abstraction in Serverless
Clouds [2.0575037267955305]
We propose a new abstraction level atop the function abstraction, known as Object as a Service (OaaS) programming.
OaaS encapsulates the application data and functions into the object abstraction and relieves developers from resource and data management burdens.
It also unlocks opportunities for built-in optimization features, such as software reusability, data locality, and caching.
arXiv Detail & Related papers (2022-06-10T21:31:22Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper, we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z) - Learning Discrete Energy-based Models via Auxiliary-variable Local
Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy model guided fuzzer for software testing that achieves comparable performance to well engineered fuzzing engines like libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.