ChatGPT Prompt Patterns for Improving Code Quality, Refactoring,
Requirements Elicitation, and Software Design
- URL: http://arxiv.org/abs/2303.07839v1
- Date: Sat, 11 Mar 2023 14:43:17 GMT
- Title: ChatGPT Prompt Patterns for Improving Code Quality, Refactoring,
Requirements Elicitation, and Software Design
- Authors: Jules White, Sam Hays, Quchen Fu, Jesse Spencer-Smith, Douglas C.
Schmidt
- Abstract summary: This paper presents prompt design techniques for software engineering, in the form of patterns, to solve common problems when using large language models (LLMs).
First, it provides a catalog of patterns for software engineering that classifies patterns according to the types of problems they solve.
Second, it explores several prompt patterns that have been applied to improve requirements elicitation, rapid prototyping, code quality, and system design.
- Score: 1.6332728502735252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents prompt design techniques for software engineering, in the
form of patterns, to solve common problems when using large language models
(LLMs), such as ChatGPT, to automate common software engineering activities,
such as ensuring code is decoupled from third-party libraries and simulating a
web application API before it is implemented. This paper provides two
contributions to research on using LLMs for software engineering. First, it
provides a catalog of patterns for software engineering that classifies
patterns according to the types of problems they solve. Second, it explores
several prompt patterns that have been applied to improve requirements
elicitation, rapid prototyping, code quality, refactoring, and system design.
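As a rough illustration of the kind of prompt pattern the paper catalogs, the sketch below wraps an existing snippet in a decoupling-oriented prompt that asks a chat-style LLM to place an abstraction layer between the core logic and a third-party library. The helper name, the prompt wording, and the sample snippet are illustrative assumptions, not the paper's verbatim pattern text.

```python
# Illustrative sketch only: the helper name, prompt wording, and sample snippet
# are assumptions for illustration, not the pattern text defined in the paper.

def build_decoupling_prompt(code: str, library: str) -> str:
    """Compose a prompt asking an LLM to decouple `code` from `library`."""
    return (
        "Whenever I provide code that calls the third-party library "
        f"'{library}', rewrite it so the core logic depends only on an "
        "interface that you define, and add an adapter class that implements "
        "that interface using the library. Explain the resulting structure.\n\n"
        f"Code:\n{code}"
    )


if __name__ == "__main__":
    sample = (
        "import requests\n"
        "\n"
        "def fetch(url):\n"
        "    return requests.get(url).json()\n"
    )
    # Paste the resulting prompt into ChatGPT, or send it through any
    # chat-completion API available to you.
    print(build_decoupling_prompt(sample, "requests"))
```

Keeping the pattern as a reusable prompt builder makes it straightforward to apply the same instruction to many snippets or to combine it with other prompt patterns.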
Related papers
- Requirements are All You Need: From Requirements to Code with LLMs [0.0]
Large language models (LLMs) can be applied to software engineering tasks.
This paper introduces a tailored LLM for automating the generation of code snippets from well-structured requirements documents.
We demonstrate the LLM's proficiency in comprehending intricate user requirements and producing robust design and code solutions.
arXiv Detail & Related papers (2024-06-14T14:57:35Z)
- Experimenting with Multi-Agent Software Development: Towards a Unified Platform [3.3485481369444674]
Large language models are redefining software engineering by implementing AI-powered techniques throughout the whole software development process.
This study aims to develop a unified platform that uses multiple artificial intelligence agents to automate the transformation of user requirements into well-organized deliverables.
The platform will organize tasks, perform security and compliance checks, and suggest design patterns and improvements for non-functional requirements.
arXiv Detail & Related papers (2024-06-08T07:27:01Z)
- DesignQA: A Multimodal Benchmark for Evaluating Large Language Models' Understanding of Engineering Documentation [3.2169312784098705]
This research introduces DesignQA, a novel benchmark aimed at evaluating the proficiency of multimodal large language models (MLLMs) in comprehending and applying engineering requirements in technical documentation.
DesignQA uniquely combines multimodal data, including textual design requirements, CAD images, and engineering drawings, derived from the Formula SAE student competition.
arXiv Detail & Related papers (2024-04-11T16:59:54Z)
- LLM4EDA: Emerging Progress in Large Language Models for Electronic Design Automation [74.7163199054881]
Large Language Models (LLMs) have demonstrated their capability in context understanding, logic reasoning and answer generation.
We present a systematic study on the application of LLMs in the EDA field.
We highlight the future research direction, focusing on applying LLMs in logic synthesis, physical design, multi-modal feature extraction and alignment of circuits.
arXiv Detail & Related papers (2023-12-28T15:09:14Z)
- Large Language Models for Software Engineering: Survey and Open Problems [35.29302720251483]
This paper provides a survey of the emerging area of Large Language Models (LLMs) for Software Engineering (SE).
Our survey reveals the pivotal role that hybrid techniques (traditional SE plus LLMs) have to play in the development and deployment of reliable, efficient and effective LLM-based SE.
arXiv Detail & Related papers (2023-10-05T13:33:26Z)
- FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios [87.12753459582116]
A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models.
We propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models.
arXiv Detail & Related papers (2023-07-25T14:20:51Z)
- CodeTF: One-stop Transformer Library for State-of-the-art Code LLM [72.1638273937025]
We present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence.
Our library supports a collection of pretrained Code LLM models and popular code benchmarks.
We hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering.
arXiv Detail & Related papers (2023-05-31T05:24:48Z)
- A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT [1.2640882896302839]
This paper provides contributions to research on prompt engineering that apply large language models (LLMs) to automate software development tasks.
It provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains.
It also explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from being combined with other prompt patterns; a minimal composition sketch follows this entry.
arXiv Detail & Related papers (2023-02-21T12:42:44Z)
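Loosely illustrating the pattern-combination idea from the catalog entry above, the sketch below concatenates a persona-style fragment and an output-format fragment ahead of the task text; both fragments and the function name are assumptions made for this example, not wording taken from the catalog.

```python
# Illustrative sketch of building one prompt from multiple patterns.
# The fragment wording below is assumed for illustration; see the catalog
# paper for the actual pattern definitions.

PERSONA = "Act as a senior software architect reviewing code for maintainability."
OUTPUT_FORMAT = "Answer as a numbered list of findings, each with a one-line suggested fix."


def compose_prompt(task: str, *pattern_fragments: str) -> str:
    """Join pattern fragments and the task into a single prompt, one per line."""
    return "\n".join([*pattern_fragments, task])


if __name__ == "__main__":
    prompt = compose_prompt(
        "Review the following function for coupling to third-party libraries: ...",
        PERSONA,
        OUTPUT_FORMAT,
    )
    print(prompt)
```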
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.