The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
- URL: http://arxiv.org/abs/2308.00245v3
- Date: Wed, 15 Nov 2023 21:02:47 GMT
- Title: The Hitchhiker's Guide to Program Analysis: A Journey with Large Language Models
- Authors: Haonan Li, Yu Hao, Yizhuo Zhai, Zhiyun Qian
- Abstract summary: Large Language Models (LLMs) offer a promising alternative to static analysis.
In this paper, we take a deep dive into the open space of LLM-assisted static analysis.
We develop LLift, a fully automated framework that interfaces with both a static analysis tool and an LLM.
- Score: 18.026567399243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Static analysis is a widely used technique in software engineering for
identifying and mitigating bugs. However, a significant hurdle lies in
achieving a delicate balance between precision and scalability. Large Language
Models (LLMs) offer a promising alternative, as recent advances demonstrate
remarkable capabilities in comprehending, generating, and even debugging code.
Yet, the logic of bugs can be complex and require sophisticated reasoning and a
large analysis scope spanning multiple functions. Therefore, at this point,
LLMs are better used in an assistive role to complement static analysis. In
this paper, we take a deep dive into the open space of LLM-assisted static
analysis, using use-before-initialization (UBI) bugs as a case study. To this
end, we develop LLift, a fully automated framework that interfaces with both a
static analysis tool and an LLM. By carefully designing the framework and the
prompts, we are able to overcome a number of challenges, including bug-specific
modeling, the large problem scope, the non-deterministic nature of LLMs, etc.
Tested in a real-world scenario analyzing nearly a thousand potential UBI bugs
produced by static analysis, LLift demonstrates a potent capability, achieving
a reasonable precision (50%) while appearing to miss no bugs. It even
identified 13 previously unknown UBI bugs in the Linux kernel. This research
paves the way for new opportunities and methodologies in using LLMs for bug
discovery in extensive, real-world datasets.
Related papers
- Are Large Language Models Memorizing Bug Benchmarks? [6.640077652362016]
Large Language Models (LLMs) have become integral to various software engineering tasks, including code generation, bug detection, and repair.
A growing concern within the software engineering community is that benchmarks may not reliably reflect true LLM performance due to the risk of data leakage.
We systematically evaluate popular LLMs to assess their susceptibility to data leakage from widely used bug benchmarks.
arXiv Detail & Related papers (2024-11-20T13:46:04Z)
- What's Wrong with Your Code Generated by Large Language Models? An Extensive Study [80.18342600996601]
Large language models (LLMs) produce code that is shorter but more complicated than canonical solutions.
We develop a taxonomy of bugs for incorrect codes that includes three categories and 12 sub-categories, and analyze the root cause for common bug types.
We propose a novel training-free iterative method that introduces self-critique, enabling LLMs to critique and correct their generated code based on bug types and compiler feedback.
arXiv Detail & Related papers (2024-07-08T17:27:17Z)
- Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Model (LLM) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
arXiv Detail & Related papers (2024-06-05T20:19:09Z)
- WitheredLeaf: Finding Entity-Inconsistency Bugs with LLMs [22.22945885085009]
Entity-Inconsistency Bugs (EIBs) originate from semantic bugs.
EIBs are subtle and can remain undetected for years.
We introduce a novel, cascaded EIB detection system named WitheredLeaf.
arXiv Detail & Related papers (2024-05-02T18:44:34Z)
- The Emergence of Large Language Models in Static Analysis: A First Look through Micro-Benchmarks [3.848607479075651]
We investigate the role that current Large Language Models (LLMs) can play in improving callgraph analysis and type inference for Python programs.
Our study reveals that LLMs show promising results in type inference, demonstrating higher accuracy than traditional methods, yet they exhibit limitations in callgraph analysis.
arXiv Detail & Related papers (2024-02-27T16:53:53Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
arXiv Detail & Related papers (2024-02-26T07:33:05Z) - LLMDFA: Analyzing Dataflow in Code with Large Language Models [8.92611389987991]
This paper presents LLMDFA, a compilation-free and customizable dataflow analysis framework.
We decompose the problem into several subtasks and introduce a series of novel strategies.
On average, LLMDFA achieves 87.10% precision and 80.77% recall, surpassing existing techniques with F1 score improvements of up to 0.35.
arXiv Detail & Related papers (2024-02-16T15:21:35Z)
- E&V: Prompting Large Language Models to Perform Static Analysis by Pseudo-code Execution and Verification [7.745665775992235]
Large Language Models (LLMs) offer new capabilities for software engineering tasks.
LLMs simulate the execution of pseudo-code, effectively conducting static analysis encoded in the pseudo-code with minimal human effort.
E&V includes a verification process for pseudo-code execution without needing an external oracle.
arXiv Detail & Related papers (2023-12-13T19:31:00Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of our ever-growing and most widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z)
- D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis [55.15995704119158]
We propose D2A, a differential analysis based approach to label issues reported by static analysis tools.
We use D2A to generate a large labeled dataset to train models for vulnerability identification.
arXiv Detail & Related papers (2021-02-16T07:46:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.