LLM and Infrastructure as a Code use case
- URL: http://arxiv.org/abs/2309.01456v2
- Date: Thu, 2 Nov 2023 09:00:49 GMT
- Title: LLM and Infrastructure as a Code use case
- Authors: Thibault Chanus (ENS Rennes), Michael Aubertin
- Abstract summary: The document presents an inquiry into a solution for generating and managing Ansible YAML roles and playbooks.
Our efforts are focused on identifying plausible directions and outlining the potential applications.
For the purpose of this experiment, we have opted against the use of Ansible Lightspeed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloud computing and the evolution of management methodologies such as Lean
Management or Agile entail a profound transformation in both system
construction and maintenance approaches. These practices are encompassed within
the term "DevOps." This descriptive approach to an information system or
application, alongside the configuration of its constituent components, has
necessitated the development of descriptive languages paired with specialized
engines for automating systems administration tasks. Among these, the tandem of
Ansible (engine) and YAML (descriptive language) stands out as the two most
prevalent tools in the market, facing notable competition mainly from
Terraform. The current document presents an inquiry into a solution for
generating and managing Ansible YAML roles and playbooks, using generative
Large Language Models (LLMs) to translate human descriptions into code. Our
efforts are focused on identifying plausible directions and outlining the
potential industrial applications. Note: for the purpose of this experiment, we
have opted against the use of Ansible Lightspeed, due to its reliance on an IBM
Watson model for which we have not found any publicly available references.
Comprehensive information regarding this remarkable technology can be found
directly on our partner Red Hat's website [1].
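To make the target concrete, the sketch below shows the kind of Ansible playbook such a system might be asked to produce from a short human description. The request, host group, and package names are hypothetical examples introduced here for illustration; they do not come from the paper.

```yaml
# Hypothetical operator request (illustrative, not from the paper):
#   "Install and start the Apache web server on all web hosts,
#    and open HTTP in the firewall."
#
# A plausible playbook a generative LLM could propose from that request.
# Host group and module choices are assumptions made for this sketch.
- name: Configure Apache web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install the Apache package
      ansible.builtin.package:
        name: httpd
        state: present

    - name: Ensure Apache is running and enabled at boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

    - name: Open the HTTP service in firewalld
      ansible.posix.firewalld:
        service: http
        permanent: true
        immediate: true
        state: enabled
```

In the direction explored here, the operator supplies only the natural-language request; the YAML is the artifact the model is expected to generate, which a human then reviews before it is applied.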
Related papers
- Large Action Models: From Inception to Implementation [51.81485642442344]
Large Action Models (LAMs) are designed for action generation and execution within dynamic environments.
LAMs hold the potential to transform AI from passive language understanding to active task completion.
We present a comprehensive framework for developing LAMs, offering a systematic approach to their creation, from inception to deployment.
arXiv Detail & Related papers (2024-12-13T11:19:56Z)
- Specifications: The missing link to making the development of LLM systems an engineering discipline [65.10077876035417]
We discuss the progress the field has made so far, through advances like structured outputs, process supervision, and test-time compute.
We outline several future directions for research to enable the development of modular and reliable LLM-based systems.
arXiv Detail & Related papers (2024-11-25T07:48:31Z)
- The Compressor-Retriever Architecture for Language Model OS [20.56093501980724]
This paper explores the concept of using a language model as the core component of an operating system (OS).
A key challenge in realizing such an LM OS is managing the life-long context and ensuring statefulness across sessions.
We introduce compressor-retriever, a model-agnostic architecture designed for life-long context management.
arXiv Detail & Related papers (2024-09-02T23:28:15Z)
- Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? [73.81908518992161]
We introduce Spider2-V, the first multimodal agent benchmark focusing on professional data science and engineering.
Spider2-V features real-world tasks in authentic computer environments and incorporates 20 enterprise-level professional applications.
These tasks evaluate the ability of a multimodal agent to perform data-related tasks by writing code and managing the GUI in enterprise data software systems.
arXiv Detail & Related papers (2024-07-15T17:54:37Z)
- S3LLM: Large-Scale Scientific Software Understanding with LLMs using Source, Metadata, and Document [8.518000504951404]
Large language models (LLMs) provide novel pathways for understanding complex scientific codes.
S3LLM is a framework designed to enable the examination of source code, code metadata, and summarized information in an interactive, conversational manner.
S3LLM demonstrates the potential of using locally deployed open-source LLMs for the rapid understanding of large-scale scientific computing software.
arXiv Detail & Related papers (2024-03-15T17:04:27Z)
- Insights from the Usage of the Ansible Lightspeed Code Completion Service [2.6401871006820534]
Ansible Lightspeed is a code completion service for Ansible, an IT automation-specific language.
Code for the Lightspeed service and the analysis framework is made available for others to use.
It is the first code completion tool to report N-day user retention figures.
arXiv Detail & Related papers (2024-02-27T11:57:28Z)
- If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z)
- LanguageMPC: Large Language Models as Decision Makers for Autonomous Driving [87.1164964709168]
This work employs Large Language Models (LLMs) as a decision-making component for complex autonomous driving scenarios.
Extensive experiments demonstrate that our proposed method not only consistently surpasses baseline approaches in single-vehicle tasks, but also helps handle complex driving behaviors, including multi-vehicle coordination.
arXiv Detail & Related papers (2023-10-04T17:59:49Z)
- Natural Language based Context Modeling and Reasoning for Ubiquitous Computing with Large Language Models: A Tutorial [35.743576799998564]
Large language models (LLMs) have surged in popularity since 2018, two decades after the introduction of context-aware computing.
In this tutorial, we demonstrate the use of texts, prompts, and autonomous agents (AutoAgents) that enable LLMs to perform context modeling and reasoning.
arXiv Detail & Related papers (2023-09-24T00:15:39Z)
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM [72.1638273937025]
We present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence.
Our library supports a collection of pretrained Code LLM models and popular code benchmarks.
We hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering.
arXiv Detail & Related papers (2023-05-31T05:24:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.