Efficacy of Various Large Language Models in Generating Smart Contracts
- URL: http://arxiv.org/abs/2407.11019v2
- Date: Tue, 5 Nov 2024 16:21:55 GMT
- Title: Efficacy of Various Large Language Models in Generating Smart Contracts
- Authors: Siddhartha Chatterjee, Bina Ramamurthy
- Abstract summary: This study analyzes the application of code-generating Large Language Models in the creation of immutable Solidity smart contracts on the Ethereum Blockchain.
We also discovered a novel way of generating smart contracts through new prompting strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study analyzes the application of code-generating Large Language Models in the creation of immutable Solidity smart contracts on the Ethereum Blockchain. Other works have previously analyzed Artificial Intelligence code generation abilities. This paper aims to expand that scope to programs where security and efficiency are of utmost priority, such as smart contracts. The hypothesis going into the study was that LLMs would generally have difficulty rigorously implementing security details in the code; our results confirmed this, yet the models surprisingly succeeded at many common types of contracts. We also discovered a novel way of generating smart contracts through new prompting strategies.
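As a concrete illustration of the kind of pipeline such a study implies, the following is a minimal sketch, assuming an arbitrary code-generating LLM behind a stub function and the standard `solc` compiler on the PATH; the prompt wording, file handling, and the canned fallback contract are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (not the paper's pipeline): prompt a code-generating LLM for
# a Solidity contract, write it to disk, and compile it with solc to catch
# syntax and type errors before any manual security review.
import os
import subprocess
import tempfile

PROMPT = (
    "Write a Solidity 0.8.x token contract named SimpleToken with a fixed "
    "supply minted to the deployer. Return only Solidity code."
)

def generate_with_llm(prompt: str) -> str:
    """Placeholder for any code-generating LLM call. A canned contract is
    returned here so the sketch runs end to end without an API key."""
    return (
        "// SPDX-License-Identifier: MIT\n"
        "pragma solidity ^0.8.0;\n"
        "contract SimpleToken {\n"
        "    mapping(address => uint256) public balanceOf;\n"
        "    constructor() { balanceOf[msg.sender] = 1_000_000; }\n"
        "}\n"
    )

def compiles(solidity_source: str) -> bool:
    """Return True if `solc` accepts the contract without errors."""
    with tempfile.NamedTemporaryFile("w", suffix=".sol", delete=False) as f:
        f.write(solidity_source)
        path = f.name
    try:
        result = subprocess.run(["solc", "--bin", path],
                                capture_output=True, text=True)
        return result.returncode == 0
    finally:
        os.remove(path)

if __name__ == "__main__":
    contract = generate_with_llm(PROMPT)
    print("compiles:", compiles(contract))
```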
Related papers
- Soley: Identification and Automated Detection of Logic Vulnerabilities in Ethereum Smart Contracts Using Large Language Models [1.081463830315253]
We empirically investigate logic vulnerabilities in real-world smart contracts extracted from code changes on GitHub.
We introduce Soley, an automated method for detecting logic vulnerabilities in smart contracts.
We examine mitigation strategies employed by smart contract developers to address these vulnerabilities in real-world scenarios.
arXiv Detail & Related papers (2024-06-24T00:15:18Z) - Vulnerabilities of smart contracts and mitigation schemes: A Comprehensive Survey [0.6554326244334866]
This paper presents a literature review combined with an experimental report that aims to assist developers in developing secure smart contracts.
It provides a list of frequent vulnerabilities and corresponding mitigation solutions.
It evaluates the community's most widely used tools by executing and testing them on sample smart contracts.
arXiv Detail & Related papers (2024-03-28T19:36:53Z) - CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion [117.178835165855]
This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs.
Our studies reveal a new and universal safety vulnerability of these models against code input.
We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization.
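For intuition, here is a minimal sketch of the general idea of recasting a natural-language input as a code-completion task; the template below is an illustrative assumption, not CodeAttack's published prompt format.

```python
# Minimal sketch of the general idea: embed a natural-language query in a
# code-completion template so the model receives it as code input rather
# than plain text. Illustrative template only.

CODE_TEMPLATE = '''\
def task():
    # The query arrives as ordinary-looking program data.
    query = {query!r}
    steps = []
    # Complete the function so that `steps` answers the query in detail.
    return steps
'''

def to_code_input(natural_language_query: str) -> str:
    """Wrap a natural-language input as a code-completion prompt."""
    return CODE_TEMPLATE.format(query=natural_language_query)

if __name__ == "__main__":
    print(to_code_input("Summarize the reentrancy risks in this contract."))
```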
arXiv Detail & Related papers (2024-03-12T17:55:38Z) - Hot or Cold? Adaptive Temperature Sampling for Code Generation with Large Language Models [54.72004797421481]
We conduct the first systematic study to explore a decoding strategy specialized in code generation.
Inspired by the above findings, we propose a simple yet effective method: Adaptive Temperature (AdapT) sampling.
Results show that AdapT sampling significantly outperforms state-of-the-art decoding strategies.
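A minimal sketch of one way adaptive-temperature decoding can work, assuming the per-step temperature is tied to the model's own confidence (low confidence raises the temperature, high confidence keeps it near-greedy); the actual AdapT schedule may differ.

```python
# Minimal sketch of adaptive-temperature decoding: pick the per-step softmax
# temperature from the model's confidence, so uncertain steps sample more
# freely and confident steps stay near-greedy. Illustrative variant only.
import numpy as np

def softmax(logits: np.ndarray, temperature: float) -> np.ndarray:
    scaled = logits / max(temperature, 1e-6)
    scaled = scaled - scaled.max()              # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

def adaptive_temperature_step(logits: np.ndarray,
                              t_low: float = 0.2,
                              t_high: float = 1.0,
                              rng=None) -> int:
    """Sample one token id with a temperature tied to model confidence."""
    if rng is None:
        rng = np.random.default_rng()
    confidence = softmax(logits, 1.0).max()     # top-1 probability at T = 1
    temperature = t_high - (t_high - t_low) * confidence
    return int(rng.choice(len(logits), p=softmax(logits, temperature)))

if __name__ == "__main__":
    toy_logits = np.array([2.5, 1.0, 0.3, -1.0])   # tiny 4-token vocabulary
    print(adaptive_temperature_step(toy_logits))
```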
arXiv Detail & Related papers (2023-09-06T06:27:33Z) - An Empirical Study of AI-based Smart Contract Creation [4.801455786801489]
Large language models (LLMs) like ChatGPT and Google Palm2 seem to be the first well-established instance of an AI pair programmer for smart contract generation.
The main objective of this study is to assess the quality of generated code provided by LLMs for smart contracts.
arXiv Detail & Related papers (2023-08-05T21:38:57Z) - A LLM Assisted Exploitation of AI-Guardian [57.572998144258705]
We evaluate the robustness of AI-Guardian, a recent defense to adversarial examples published at IEEE S&P 2023.
We write none of the code to attack this model, and instead prompt GPT-4 to implement all attack algorithms following our instructions and guidance.
This process was surprisingly effective and efficient, with the language model at times producing code from ambiguous instructions faster than the author of this paper could have done.
arXiv Detail & Related papers (2023-07-20T17:33:25Z) - HyMo: Vulnerability Detection in Smart Contracts using a Novel Multi-Modal Hybrid Model [1.16095700765361]
Existing analysis techniques are capable of identifying a large number of smart contract security flaws, but they rely too much on rigid criteria established by specialists.
We propose HyMo, a multi-modal hybrid deep learning model that intelligently combines various input representations to exploit multimodality.
We show that our hybrid HyMo model has excellent smart contract vulnerability detection performance.
arXiv Detail & Related papers (2023-04-25T19:16:21Z) - CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z) - Smart Contract Vulnerability Detection: From Pure Neural Network to Interpretable Graph Feature and Expert Pattern Fusion [48.744359070088166]
Conventional smart contract vulnerability detection methods heavily rely on fixed expert rules.
Recent deep learning approaches alleviate this issue but fail to encode useful expert knowledge.
We develop automatic tools to extract expert patterns from the source code.
We then cast the code into a semantic graph to extract deep graph features.
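A minimal sketch of the "code as a semantic graph" idea, assuming statement nodes with sequential control-flow edges and definition-to-use data-flow edges; the paper's graph construction and learned graph features are considerably richer, and the toy statements below are invented for illustration.

```python
# Minimal sketch of turning straight-line code into a small semantic graph:
# statement nodes, sequential control-flow edges, and data-flow edges from a
# variable's definition to its later uses. Illustrative only.
import networkx as nx

# (node id, statement text, variables defined, variables used)
STATEMENTS = [
    ("s1", "amount = msg_value", {"amount"}, {"msg_value"}),
    ("s2", "fee = amount / 100", {"fee"}, {"amount"}),
    ("s3", "send(owner, fee)", set(), {"owner", "fee"}),
]

def build_semantic_graph(statements):
    g = nx.DiGraph()
    last_def = {}                                   # variable -> defining node
    prev = None
    for node_id, text, defs, uses in statements:
        g.add_node(node_id, text=text)
        if prev is not None:
            g.add_edge(prev, node_id, kind="control")
        for var in uses:
            if var in last_def:
                g.add_edge(last_def[var], node_id, kind="data", var=var)
        for var in defs:
            last_def[var] = node_id
        prev = node_id
    return g

if __name__ == "__main__":
    graph = build_semantic_graph(STATEMENTS)
    print(list(graph.edges(data=True)))
```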
arXiv Detail & Related papers (2021-06-17T07:12:13Z) - A Bytecode-based Approach for Smart Contract Classification [10.483992071557195]
The number of smart contracts deployed on blockchain platforms is growing exponentially, which makes it difficult for users to find desired services by manual screening.
Current research on smart contract classification focuses on Natural Language Processing (NLP) solutions which are based on contract source code.
This paper proposes a classification model based on features from contract bytecode instead of source code to solve these problems.
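A minimal sketch of bytecode-level feature extraction, assuming the input is a runtime bytecode hex string and using a deliberately tiny opcode table with simple frequency counts; the paper's actual feature design is more elaborate.

```python
# Minimal sketch of bytecode-level features: decode an EVM runtime bytecode
# hex string into opcodes and build a frequency vector usable as classifier
# input. The opcode table below is a small subset for illustration.
from collections import Counter

OPCODES = {
    0x00: "STOP", 0x01: "ADD", 0x02: "MUL", 0x35: "CALLDATALOAD",
    0x54: "SLOAD", 0x55: "SSTORE", 0x56: "JUMP", 0x57: "JUMPI",
    0xF1: "CALL", 0xF3: "RETURN", 0xFD: "REVERT", 0xFF: "SELFDESTRUCT",
}

def disassemble(bytecode_hex: str) -> list[str]:
    """Rough disassembly: map bytes to opcode names, skipping PUSH operands."""
    code = bytes.fromhex(bytecode_hex.removeprefix("0x"))
    ops, i = [], 0
    while i < len(code):
        byte = code[i]
        if 0x60 <= byte <= 0x7F:            # PUSH1..PUSH32 carry operand bytes
            width = byte - 0x5F
            ops.append(f"PUSH{width}")
            i += 1 + width
        else:
            ops.append(OPCODES.get(byte, "UNKNOWN"))
            i += 1
    return ops

def opcode_features(bytecode_hex: str) -> dict[str, int]:
    """Opcode frequency vector for a contract's bytecode."""
    return dict(Counter(disassemble(bytecode_hex)))

if __name__ == "__main__":
    # Toy bytecode: PUSH1 0x02, PUSH1 0x03, ADD, STOP
    print(opcode_features("0x600260030100"))
```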
arXiv Detail & Related papers (2021-05-31T03:00:29Z) - ESCORT: Ethereum Smart COntRacTs Vulnerability Detection using Deep Neural Network and Transfer Learning [80.85273827468063]
Existing machine learning-based vulnerability detection methods are limited and only inspect whether the smart contract is vulnerable.
We propose ESCORT, the first Deep Neural Network (DNN)-based vulnerability detection framework for smart contracts.
We show that ESCORT achieves an average F1-score of 95% on six vulnerability types and the detection time is 0.02 seconds per contract.
arXiv Detail & Related papers (2021-03-23T15:04:44Z)
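As a rough illustration of the shared-encoder, one-branch-per-vulnerability design described for ESCORT above, here is a minimal PyTorch sketch; the layer sizes, input featurization, and vulnerability names are assumptions, not the paper's architecture.

```python
# Minimal sketch of a shared-encoder / multi-branch detector in the spirit of
# ESCORT: one common feature extractor plus a small sigmoid head per
# vulnerability type, so a new type can be supported by adding and fine-tuning
# an extra branch. Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class MultiBranchDetector(nn.Module):
    def __init__(self, input_dim: int, vulnerability_types: list[str]):
        super().__init__()
        self.encoder = nn.Sequential(           # shared feature extractor
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # One independent binary head per vulnerability type.
        self.heads = nn.ModuleDict({
            name: nn.Linear(64, 1) for name in vulnerability_types
        })

    def forward(self, x: torch.Tensor) -> dict:
        features = self.encoder(x)
        return {name: torch.sigmoid(head(features))
                for name, head in self.heads.items()}

if __name__ == "__main__":
    model = MultiBranchDetector(input_dim=256,
                                vulnerability_types=["reentrancy", "overflow"])
    scores = model(torch.randn(4, 256))          # batch of 4 contract vectors
    print({name: tensor.shape for name, tensor in scores.items()})
```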
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.