LLMs in Web Development: Evaluating LLM-Generated PHP Code Unveiling Vulnerabilities and Limitations
- URL: http://arxiv.org/abs/2404.14459v2
- Date: Tue, 21 May 2024 13:10:39 GMT
- Title: LLMs in Web Development: Evaluating LLM-Generated PHP Code Unveiling Vulnerabilities and Limitations
- Authors: Rebeka Tóth, Tamas Bisztray, László Erdodi
- Abstract summary: This study evaluates the security of web application code generated by Large Language Models, analyzing 2,500 GPT-4 generated PHP websites.
Our investigation focuses on identifying Insecure File Upload, SQL Injection, Stored XSS, and Reflected XSS in GPT-4 generated PHP code.
According to Burp's scan, 11.56% of the sites can be outright compromised. Including static scan results, 26% had at least one vulnerability exploitable through web interaction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study evaluates the security of web application code generated by Large Language Models, analyzing 2,500 GPT-4 generated PHP websites. These were deployed in Docker containers and tested for vulnerabilities using a hybrid approach of Burp Suite active scanning, static analysis, and manual review. Our investigation focuses on identifying Insecure File Upload, SQL Injection, Stored XSS, and Reflected XSS in GPT-4 generated PHP code. This analysis highlights potential security risks and the implications of deploying such code in real-world scenarios. Overall, our analysis found 2,440 vulnerable parameters. According to Burp's scan, 11.56% of the sites can be outright compromised. Including static scan results, 26% had at least one vulnerability exploitable through web interaction. Certain coding scenarios, like file upload functionality, are insecure 78% of the time, underscoring significant risks to software safety and security. To support further research, we have made the source codes and a detailed vulnerability record for each sample publicly available. This study emphasizes the crucial need for thorough testing and evaluation if generative AI technologies are used in software development.
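To make the vulnerability classes above concrete, here is a minimal sketch of the kind of naive upload handler behind the 78% insecure file-upload figure, followed by a hardened variant. It is illustrative only and not drawn from the paper's published samples; the 'avatar' form field and uploads/ directory are hypothetical.

```php
<?php
// Naive handler of the kind the study flags: it trusts the client-supplied
// filename, so an attacker can upload e.g. shell.php and then execute it.
move_uploaded_file($_FILES['avatar']['tmp_name'],
                   'uploads/' . $_FILES['avatar']['name']);

// Hardened variant: check the real MIME type server-side, whitelist
// extensions, and generate a random name so no attacker-controlled
// string ever reaches the filesystem path.
$allowed = ['image/png' => 'png', 'image/jpeg' => 'jpg'];
$mime    = mime_content_type($_FILES['avatar']['tmp_name']);
if (isset($allowed[$mime])) {
    $safeName = bin2hex(random_bytes(16)) . '.' . $allowed[$mime];
    move_uploaded_file($_FILES['avatar']['tmp_name'], 'uploads/' . $safeName);
} else {
    http_response_code(400); // reject anything that is not a whitelisted image
}
```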
Related papers
- Yama: Precise Opcode-based Data Flow Analysis for Detecting PHP Applications Vulnerabilities [4.262259005587605]
Yama is a context-sensitive and path-sensitive interprocedural data flow analysis method for PHP.
We have found that the precise semantics and clear control flow of PHP opcodes enable data flow analysis to be more precise and efficient.
We evaluated Yama from three dimensions: basic data flow analysis capabilities, complex semantic analysis capabilities, and the ability to discover vulnerabilities in real-world applications.
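As an illustration (this example is not from the Yama paper), the snippet below shows the kind of interprocedural source-to-sink flow such an analysis must track, where tainted input crosses a function boundary before reaching an SQL sink:

```php
<?php
// Connection parameters are placeholders for illustration only.
$mysqli = new mysqli('localhost', 'user', 'pass', 'app');

function buildQuery(string $name): string {
    // Taint propagates through this helper's return value, so the
    // analysis must be interprocedural to connect source and sink.
    return "SELECT * FROM users WHERE name = '" . $name . "'";
}

$input = $_GET['name'];       // source: attacker-controlled input
$sql   = buildQuery($input);  // taint crosses a call boundary
$mysqli->query($sql);         // sink: SQL injection if $input is unsanitized
```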
arXiv Detail & Related papers (2024-10-16T08:14:37Z) - SecCodePLT: A Unified Platform for Evaluating the Security of Code GenAI [47.11178028457252]
We develop SecCodePLT, a unified and comprehensive evaluation platform for code GenAIs' risks.
For insecure code, we introduce a new methodology for data creation that combines experts with automatic generation.
For cyberattack helpfulness, we construct samples to prompt a model to generate actual attacks, along with dynamic metrics in our environment.
arXiv Detail & Related papers (2024-10-14T21:17:22Z) - RealVul: Can We Detect Vulnerabilities in Web Applications with LLM? [4.467475584754677]
We present RealVul, the first LLM-based framework designed for PHP vulnerability detection.
We can isolate potential vulnerability triggers while streamlining the code and eliminating unnecessary semantic information.
We also address the issue of insufficient PHP vulnerability samples by improving data synthesis methods.
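As a rough illustration of what isolating a trigger while discarding unrelated semantics can look like (a generic slicing sketch, not RealVul's actual preprocessing):

```php
<?php
// Original sample: most statements have no bearing on the potential sink.
$title = "Search";                // irrelevant to the trigger
$limit = 25;                      // irrelevant to the trigger
$q = $_GET['q'];                  // relevant: attacker-controlled source
error_log("$title limit=$limit"); // irrelevant side effect
echo "<h1>" . $q . "</h1>";       // relevant: potential reflected XSS sink

// Streamlined slice handed to the detector: only source and sink remain.
$q = $_GET['q'];
echo "<h1>" . $q . "</h1>";
```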
arXiv Detail & Related papers (2024-10-10T03:16:34Z) - Vulnerability Scanners for Ethereum Smart Contracts: A Large-Scale Study [44.25093111430751]
In 2023 alone, such vulnerabilities led to substantial financial losses exceeding one billion US dollars.
Various tools have been developed to detect and mitigate vulnerabilities in smart contracts.
This study investigates the gap between the effectiveness of existing security scanners and the vulnerabilities that still persist in practice.
arXiv Detail & Related papers (2023-12-27T11:26:26Z) - How well does LLM generate security tests? [8.454827764115631]
Developers often build software on top of third-party libraries (Libs) to improve productivity and software quality.
People refer to such attacks as supply chain attacks, the documented number of which increased by 742% in 2022.
We used ChatGPT-4.0 to generate security tests and to demonstrate how vulnerable library dependencies facilitate supply chain attacks against given Apps.
arXiv Detail & Related papers (2023-10-01T16:00:58Z) - Can Large Language Models Find And Fix Vulnerable Software? [0.0]
GPT-4 identified approximately four times as many vulnerabilities as its counterparts.
It provided viable fixes for each vulnerability, demonstrating a low rate of false positives.
GPT-4's code corrections led to a 90% reduction in vulnerabilities, requiring only an 11% increase in code lines.
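For intuition, a fix in this style can be as small as one changed line. The before/after below is illustrative rather than taken from the paper, consistent with its finding that corrections required only an 11% increase in code lines:

```php
<?php
// Before: attacker-controlled input is echoed verbatim (reflected XSS).
echo "Results for: " . $_GET['q'];

// After: encode HTML metacharacters before output, neutralizing the XSS.
echo "Results for: " . htmlspecialchars($_GET['q'], ENT_QUOTES, 'UTF-8');
```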
arXiv Detail & Related papers (2023-08-20T19:33:12Z) - Vulnerability Detection Using Two-Stage Deep Learning Models [0.0]
Two deep learning models were proposed for vulnerability detection in C/C++ source code.
The first stage is a CNN that detects whether the source code contains any vulnerability.
The second stage is a CNN-LSTM that classifies the vulnerability into one of 50 different vulnerability types.
arXiv Detail & Related papers (2023-05-08T22:12:34Z) - CodeLMSec Benchmark: Systematically Evaluating and Finding Security Vulnerabilities in Black-Box Code Language Models [58.27254444280376]
Large language models (LLMs) for automatic code generation have achieved breakthroughs in several programming tasks.
Training data for these models is usually collected from the Internet (e.g., from open-source repositories) and is likely to contain faults and security vulnerabilities.
This unsanitized training data can cause the language models to learn these vulnerabilities and propagate them during the code generation procedure.
arXiv Detail & Related papers (2023-02-08T11:54:07Z) - VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Autosploit: A Fully Automated Framework for Evaluating the Exploitability of Security Vulnerabilities [47.748732208602355]
Autosploit is an automated framework for evaluating the exploitability of vulnerabilities.
It automatically tests the exploits on different configurations of the environment.
It is able to identify the system properties that affect the ability to exploit a vulnerability in both noiseless and noisy environments.
arXiv Detail & Related papers (2020-06-30T18:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.