Your Fix Is My Exploit: Enabling Comprehensive DL Library API Fuzzing with Large Language Models
- URL: http://arxiv.org/abs/2501.04312v1
- Date: Wed, 08 Jan 2025 07:07:22 GMT
- Title: Your Fix Is My Exploit: Enabling Comprehensive DL Library API Fuzzing with Large Language Models
- Authors: Kunpeng Zhang, Shuai Wang, Jitao Han, Xiaogang Zhu, Xian Li, Shaohua Wang, Sheng Wen,
- Abstract summary: Deep learning (DL) libraries, widely used in AI applications, often contain vulnerabilities like buffer overflows and use-after-free errors.
Traditional fuzzing struggles with the complexity and API diversity of DL libraries.
We propose DFUZZ, an LLM-driven fuzzing approach for DL libraries.
- Score: 49.214291813478695
- Abstract: Deep learning (DL) libraries, widely used in AI applications, often contain vulnerabilities like buffer overflows and use-after-free errors. Traditional fuzzing struggles with the complexity and API diversity of DL libraries such as TensorFlow and PyTorch, which feature over 1,000 APIs. Testing all these APIs is challenging due to complex inputs and varied usage patterns. While large language models (LLMs) show promise in code understanding and generation, existing LLM-based fuzzers lack deep knowledge of API edge cases and struggle with test input generation. To address this, we propose DFUZZ, an LLM-driven fuzzing approach for DL libraries. DFUZZ leverages two insights: (1) LLMs can reason about error-triggering edge cases from API code and apply this knowledge to untested APIs, and (2) LLMs can accurately synthesize test programs to automate API testing. By providing LLMs with a "white-box view" of APIs, DFUZZ enhances reasoning and generation for comprehensive fuzzing. Experimental results show that DFUZZ outperforms state-of-the-art fuzzers in API coverage for TensorFlow and PyTorch, uncovering 37 bugs, with 8 fixed and 19 under developer investigation.
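The abstract's two insights map onto a simple generate-and-execute loop. Below is a minimal sketch of that loop, not DFUZZ's actual implementation: `llm_complete` is a hypothetical placeholder for any chat-completion client, and the prompt wording is invented for illustration. The synthesized test program runs in a separate interpreter so hard crashes (segfaults, aborts) are distinguishable from ordinary Python exceptions.

```python
import subprocess
import sys
import tempfile


def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion client; the abstract
    does not specify DFUZZ's actual model interface or prompts."""
    raise NotImplementedError


def synthesize_test(api_name: str, api_source: str) -> str:
    # Insight (1): give the LLM a "white-box view" of the API so it can
    # reason about error-triggering edge cases from the implementation.
    prompt = (
        f"Below is the implementation of `{api_name}`.\n"
        "Write a short, self-contained Python program that calls it with\n"
        "edge-case inputs likely to trigger crashes (zero-sized dimensions,\n"
        "negative values, dtype mismatches, extreme magnitudes).\n\n"
        f"{api_source}\n"
    )
    return llm_complete(prompt)


def run_in_sandbox(program: str, timeout: int = 30) -> str:
    # Insight (2): execute the synthesized test program in a fresh
    # interpreter, so a memory-safety bug surfaces as a signal-terminated
    # process rather than as an ordinary Python traceback.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "HANG"
    if proc.returncode < 0:   # killed by a signal: likely overflow/use-after-free
        return f"CRASH (signal {-proc.returncode})"
    if proc.returncode != 0:  # ordinary exception: usually benign input rejection
        return "EXCEPTION"
    return "OK"
```

The return-code check is the cheap oracle here: a signal-terminated process on inputs such as a zero-sized tensor is exactly the class of memory-safety bug (buffer overflow, use-after-free) the abstract targets.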
Related papers
- LlamaRestTest: Effective REST API Testing with Small Language Models [50.058600784556816]
We present LlamaRestTest, a novel approach that employs two custom LLMs to generate realistic test inputs.
LlamaRestTest surpasses state-of-the-art tools in code coverage and error detection, even with RESTGPT-enhanced specifications.
arXiv Detail & Related papers (2025-01-15T05:51:20Z)
- LLM Based Input Space Partitioning Testing for Library APIs [13.070272424794744]
We present an LLM-based input space partitioning testing approach, LISP, for library API testing.
We evaluate LISP on more than 2,205 library API methods taken from 10 popular open-source Java libraries.
On average, LISP achieves 67.82% branch coverage, surpassing EvoSuite by a factor of 1.21.
arXiv Detail & Related papers (2024-12-15T17:50:50Z)
- Subgraph-Oriented Testing for Deep Learning Libraries [9.78188667672054]
We propose SORT (Subgraph-Oriented Realistic Testing) to test Deep Learning (DL) libraries on different hardware platforms.
SORT takes popular API interaction patterns, represented as frequent subgraphs of model graphs, as test subjects.
SORT achieves a 100% valid input generation rate, detects more precision bugs than existing methods, and reveals interaction-related bugs missed by single-API testing.
arXiv Detail & Related papers (2024-12-09T12:10:48Z)
- ExploraCoder: Advancing code generation for multiple unseen APIs via planning and chained exploration [70.26807758443675]
ExploraCoder is a training-free framework that empowers large language models to invoke unseen APIs in code solutions.
We show that ExploraCoder significantly improves performance for models lacking prior API knowledge, achieving an absolute increase in pass@10 of 11.24% over naive RAG approaches and 14.07% over pretraining methods.
arXiv Detail & Related papers (2024-12-06T19:00:15Z)
- Model Equality Testing: Which Model Is This API Serving? [59.005869726179455]
We formalize detecting such distortions as Model Equality Testing, a two-sample testing problem.
A test built on a simple string kernel achieves a median of 77.4% power against a range of distortions.
We then apply this test to commercial inference APIs for four Llama models, finding that 11 out of 31 endpoints serve different distributions than reference weights released by Meta.
arXiv Detail & Related papers (2024-10-26T18:34:53Z)
- DLLens: Testing Deep Learning Libraries via LLM-aided Synthesis [8.779035160734523]
Testing is a major approach to ensuring the quality of deep learning (DL) libraries.
Existing testing techniques commonly adopt differential testing to relieve the need for test oracle construction.
This paper introduces DLLens, a novel differential testing technique for DL library testing (a minimal sketch of differential testing appears after this list).
arXiv Detail & Related papers (2024-06-12T07:06:38Z)
- A Solution-based LLM API-using Methodology for Academic Information Seeking [49.096714812902576]
SoAy is a solution-based LLM API-using methodology for academic information seeking.
It uses code with a solution as the reasoning method, where a solution is a pre-constructed API calling sequence.
Results show a 34.58-75.99% performance improvement compared to state-of-the-art LLM API-based baselines.
arXiv Detail & Related papers (2024-05-24T02:44:14Z)
- HOPPER: Interpretative Fuzzing for Libraries [6.36596812288503]
HOPPER can fuzz libraries without requiring any domain knowledge.
It transforms the problem of library fuzzing into the problem of interpreter fuzzing.
arXiv Detail & Related papers (2023-09-07T06:11:18Z)
- Private-Library-Oriented Code Generation with Large Language Models [52.73999698194344]
This paper focuses on utilizing large language models (LLMs) for code generation in private libraries.
We propose a novel framework that emulates the process of programmers writing private code.
We create four private library benchmarks, including TorchDataEval, TorchDataComplexEval, MonkeyEval, and BeatNumEval.
arXiv Detail & Related papers (2023-07-28T07:43:13Z)
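The DLLens entry above notes that differential testing is the standard way to avoid hand-written test oracles. Below is a minimal sketch of the idea, assuming PyTorch and NumPy as the two independent implementations; the pairing and the `log1p` example are illustrative, not DLLens's actual setup.

```python
import numpy as np
import torch


def differential_check(x: np.ndarray, rtol: float = 1e-5, atol: float = 1e-6) -> bool:
    """Run the same computation through NumPy (reference) and PyTorch
    (under test); a disagreement beyond floating-point tolerance flags a
    potential bug in one of the two libraries."""
    expected = np.log1p(np.abs(x))
    actual = torch.log1p(torch.from_numpy(x).abs()).numpy()
    return np.allclose(actual, expected, rtol=rtol, atol=atol)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for _ in range(1000):
        # Random ranks, zero-sized dimensions, and extreme magnitudes
        # probe numerical and shape-handling edge cases.
        shape = tuple(rng.integers(0, 5, size=rng.integers(1, 4)))
        x = rng.standard_normal(shape) * 10.0 ** rng.integers(-8, 9)
        if not differential_check(x):
            print("Mismatch on input of shape", x.shape)
```

Any disagreement beyond tolerance implicates one of the two libraries without the tester ever specifying the expected output, which is what "relieves the need for test oracle construction" in practice.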
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.