ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel
Patching
- URL: http://arxiv.org/abs/2308.13062v1
- Date: Thu, 24 Aug 2023 20:04:36 GMT
- Title: ZeroLeak: Using LLMs for Scalable and Cost Effective Side-Channel
Patching
- Authors: M. Caner Tol and Berk Sunar
- Abstract summary: Security-critical software, e.g., OpenSSL, comes with numerous side-channel leakages left unpatched due to a lack of resources or experts.
We explore the use of Large Language Models (LLMs) in generating patches for vulnerable code with microarchitectural side-channel leakages.
- Score: 6.556868623811133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security-critical software, e.g., OpenSSL, comes with numerous
side-channel leakages left unpatched due to a lack of resources or experts. The situation
will only worsen as the pace of code development accelerates, with developers
relying on Large Language Models (LLMs) to automatically generate code. In this
work, we explore the use of LLMs in generating patches for vulnerable code with
microarchitectural side-channel leakages. For this, we investigate the
generative abilities of powerful LLMs by carefully crafting prompts following a
zero-shot learning approach. All generated code is dynamically analyzed by
leakage detection tools, which can pinpoint information leakage at the
instruction level, whether it stems from secret-dependent memory accesses,
secret-dependent branches, or vulnerable Spectre gadgets. Carefully crafted prompts
are used to generate candidate replacements for vulnerable code, which are then
analyzed for correctness and for leakage resilience. From a cost/performance
perspective, the GPT-4-based configuration costs a mere few cents in API calls
per vulnerability fixed. Our results show that LLM-based patching is far more
cost-effective and thus provides a scalable solution. Finally, the framework we
propose will improve in time, especially as vulnerability detection tools and
LLMs mature.
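As a concrete illustration of the kind of patch such a framework aims to produce, the sketch below rewrites a secret-dependent branch as a branchless, constant-time select. All names are illustrative, not code from the paper, and it is shown in Python purely for readability; real constant-time guarantees require the same idiom at the C/assembly level, where the detection tools can check the compiled output for residual branches.

```python
# Hypothetical sketch of the branch-to-mask transformation a side-channel
# patch typically applies; illustrative only, not code from the paper.

def select_leaky(secret_bit: int, a: int, b: int) -> int:
    # Leaky pattern: control flow depends on the secret bit, so timing or
    # branch-predictor side channels can reveal it.
    if secret_bit:
        return a
    return b

def select_ct(secret_bit: int, a: int, b: int) -> int:
    # Patched pattern: an all-ones or all-zeros mask derived from the secret
    # bit selects the result with no secret-dependent branch or access.
    mask = -(secret_bit & 1) & 0xFFFFFFFF   # 0xFFFFFFFF if bit set, else 0
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)
```

In an actual C patch the same mask-select idiom would be written as, e.g., `mask = 0U - (bit & 1U);` over unsigned 32-bit operands.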
Related papers
- Exploring Automatic Cryptographic API Misuse Detection in the Era of LLMs [60.32717556756674]
This paper introduces a systematic evaluation framework to assess Large Language Models in detecting cryptographic misuses.
Our in-depth analysis of 11,940 LLM-generated reports highlights that the inherent instabilities in LLMs can lead to over half of the reports being false positives.
The optimized approach achieves a remarkable detection rate of nearly 90%, surpassing traditional methods and uncovering previously unknown misuses in established benchmarks.
arXiv Detail & Related papers (2024-07-23T15:31:26Z) - SCoPE: Evaluating LLMs for Software Vulnerability Detection [0.0]
This work explores and refines the CVEFixes dataset, which is commonly used to train models for code-related tasks.
The output generated by SCoPE was used to create a new version of CVEFixes.
The results show that SCoPE successfully helped to identify 905 duplicates within the evaluated subset.
arXiv Detail & Related papers (2024-07-19T15:02:00Z) - Towards Effectively Detecting and Explaining Vulnerabilities Using Large Language Models [17.96542494363619]
Large language models (LLMs) have shown a remarkable capability in the comprehension of complicated context and content generation.
We propose LLMVulExp, a framework that utilizes LLMs for vulnerability detection and explanation.
We find that LLMVulExp can effectively enable the LLMs to perform vulnerability detection (e.g., over 90% F1 score on the SeVC dataset) and explanation.
arXiv Detail & Related papers (2024-06-14T04:01:25Z) - Harnessing Large Language Models for Software Vulnerability Detection: A Comprehensive Benchmarking Study [1.03590082373586]
We propose using large language models (LLMs) to assist in finding vulnerabilities in source code.
The aim is to test multiple state-of-the-art LLMs and identify the best prompting strategies.
We find that LLMs can pinpoint many more issues than traditional static analysis tools, outperforming traditional tools in terms of recall and F1 scores.
arXiv Detail & Related papers (2024-05-24T14:59:19Z) - Investigating the prompt leakage effect and black-box defenses for multi-turn LLM interactions [125.21418304558948]
Prompt leakage in large language models (LLMs) poses a significant security and privacy threat.
Prompt leakage in multi-turn LLM interactions, along with mitigation strategies, has not been studied in a standardized manner.
This paper investigates LLM vulnerabilities against prompt leakage across 4 diverse domains and 10 closed- and open-source LLMs.
arXiv Detail & Related papers (2024-04-24T23:39:58Z) - Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities [12.82645410161464]
Large Language Models (LLMs) have demonstrated remarkable performance on code-related tasks.
We evaluate whether pre-trained LLMs can detect security vulnerabilities and address the limitations of existing tools.
arXiv Detail & Related papers (2023-11-16T13:17:20Z) - A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily [51.63085197162279]
Large Language Models (LLMs) are designed to provide useful and safe responses.
Adversarial prompts known as 'jailbreaks' can circumvent these safeguards.
We propose ReNeLLM, an automatic framework that leverages LLMs themselves to generate effective jailbreak prompts.
arXiv Detail & Related papers (2023-11-14T16:02:16Z) - Can LLMs Patch Security Issues? [1.3299507495084417]
Large Language Models (LLMs) have shown impressive proficiency in code generation.
LLMs share a weakness with their human counterparts: producing code that inadvertently has security vulnerabilities.
We propose Feedback-Driven Security Patching (FDSP), where LLMs automatically refine generated, vulnerable code.
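The feedback loop described here can be sketched roughly as follows; `generate` and `analyze` are hypothetical stand-ins for the LLM call and the security analyzer, not APIs from the paper, and the round limit is an assumed parameter:

```python
# Rough sketch of a feedback-driven patching loop in the spirit of FDSP;
# `generate` and `analyze` are assumed placeholders, not the paper's API.

def patch_with_feedback(generate, analyze, task, max_rounds=3):
    """Generate code, then repeatedly feed analyzer findings back to the LLM."""
    code = generate(task, feedback=None)
    for _ in range(max_rounds):
        report = analyze(code)          # e.g. static/dynamic security findings
        if not report:                  # no remaining vulnerabilities
            break
        code = generate(task, feedback=report)
    return code
```

The same generate-analyze-refine shape underlies ZeroLeak's own pipeline, with the analyzer replaced by microarchitectural leakage detection tools.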
arXiv Detail & Related papers (2023-11-13T08:54:37Z) - SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks [99.23352758320945]
We propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks on large language models (LLMs).
Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense first randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs.
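This perturb-and-aggregate defense can be sketched as below; `classify` stands in for whatever jailbreak judgment the deployed LLM pipeline produces, and the copy count and perturbation rate are illustrative defaults, not the paper's tuned values:

```python
import random
import string

# Illustrative sketch of a SmoothLLM-style defense: perturb several copies of
# the prompt at the character level, then aggregate the resulting judgments.
# `classify` is an assumed stand-in for the LLM-based jailbreak check.

def smoothed_judgment(prompt, classify, n_copies=10, perturb_rate=0.1, seed=0):
    rng = random.Random(seed)
    votes = []
    for _ in range(n_copies):
        chars = list(prompt)
        for i in range(len(chars)):
            if rng.random() < perturb_rate:
                chars[i] = rng.choice(string.ascii_letters)
        votes.append(classify("".join(chars)))
    # Majority vote: adversarially-generated suffixes are brittle, so most
    # perturbed copies of a jailbreak prompt lose their effect.
    return max(set(votes), key=votes.count)
```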
arXiv Detail & Related papers (2023-10-05T17:01:53Z) - Red Teaming Language Model Detectors with Language Models [114.36392560711022]
Large language models (LLMs) present significant safety and ethical risks if exploited by malicious users.
Recent works have proposed algorithms to detect LLM-generated text and protect LLMs.
We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation.
arXiv Detail & Related papers (2023-05-31T10:08:37Z) - LLMDet: A Third Party Large Language Models Generated Text Detection Tool [119.0952092533317]
Text generated by large language models (LLMs) is remarkably close to high-quality human-authored text.
Existing detection tools can only differentiate between machine-generated and human-authored text.
We propose LLMDet, a model-specific, secure, efficient, and extendable detection tool.
arXiv Detail & Related papers (2023-05-24T10:45:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.