XAMT: Cross-Framework API Matching for Testing Deep Learning Libraries
- URL: http://arxiv.org/abs/2508.12546v1
- Date: Mon, 18 Aug 2025 00:48:50 GMT
- Title: XAMT: Cross-Framework API Matching for Testing Deep Learning Libraries
- Authors: Bin Duan, Ruican Dong, Naipeng Dong, Dan Dongseong Kim, Guowei Yang
- Abstract summary: We propose XAMT, a cross-framework fuzzing method for testing deep learning libraries. XAMT matches APIs using similarity-based rules based on names, descriptions, and parameter structures. It then aligns inputs and applies variance-guided differential testing to detect bugs.
- Score: 6.00258665435786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning powers critical applications such as autonomous driving, healthcare, and finance, where the correctness of underlying libraries is essential. Bugs in widely used deep learning APIs can propagate to downstream systems, causing serious consequences. While existing fuzzing techniques detect bugs through intra-framework testing across hardware backends (CPU vs. GPU), they may miss bugs that manifest identically across backends and thus escape detection under these strategies. To address this problem, we propose XAMT, a cross-framework fuzzing method that tests deep learning libraries by matching and comparing functionally equivalent APIs across different frameworks. XAMT matches APIs using similarity-based rules based on names, descriptions, and parameter structures. It then aligns inputs and applies variance-guided differential testing to detect bugs. We evaluated XAMT on five popular frameworks, including PyTorch, TensorFlow, Keras, Chainer, and JAX. XAMT matched 839 APIs, identified 238 matched API groups, and detected 17 bugs, 12 of which have been confirmed. Our results show that XAMT uncovers bugs undetectable by intra-framework testing, especially those that manifest consistently across backends. XAMT offers a complementary approach to existing methods and a new perspective on the testing of deep learning libraries.
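The abstract's first step, matching functionally equivalent APIs across frameworks by name and parameter-structure similarity, can be sketched roughly as follows. This is a minimal illustrative toy, not XAMT's actual matcher: the scoring weights, threshold, and the API inventories below are all assumptions, and XAMT additionally uses documentation descriptions.

```python
# Hypothetical sketch of similarity-based cross-framework API matching.
# The inventories and scoring weights below are illustrative stand-ins;
# XAMT's real rules also compare API documentation text.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity of normalized API names (case/underscore-insensitive)."""
    norm = lambda s: s.lower().replace("_", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def param_similarity(pa, pb) -> float:
    """Jaccard overlap of parameter-name sets."""
    sa, sb = set(pa), set(pb)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def match_apis(apis_a, apis_b, threshold=0.5):
    """Pair each API in framework A with its best-scoring counterpart in B."""
    matches = []
    for name_a, params_a in apis_a.items():
        best = max(
            ((name_b, 0.5 * name_similarity(name_a, name_b)
                      + 0.5 * param_similarity(params_a, params_b))
             for name_b, params_b in apis_b.items()),
            key=lambda t: t[1],
        )
        if best[1] >= threshold:
            matches.append((name_a, best[0], round(best[1], 2)))
    return matches

# Toy inventories (illustrative only, not the frameworks' real signatures):
torch_apis = {"relu": ["input"], "conv2d": ["input", "weight", "stride"]}
tf_apis = {"relu": ["features"], "conv2d": ["input", "filters", "strides"]}
print(match_apis(torch_apis, tf_apis))
```

Matched API groups like these would then be fed the same aligned inputs, with output discrepancies across frameworks flagged for inspection.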
Related papers
- Test Amplification for REST APIs via Single and Multi-Agent LLM Systems [1.6499388997661122]
We investigate the use of large language model (LLM) systems, both single-agent and multi-agent setups, for amplifying existing REST API test suites. We present a comparative evaluation of the two approaches across several dimensions, including test coverage, bug detection effectiveness, and practical considerations such as computational cost and energy usage.
arXiv Detail & Related papers (2025-04-10T20:19:50Z)
- LlamaRestTest: Effective REST API Testing with Small Language Models [50.058600784556816]
We present LlamaRestTest, a novel approach that employs two custom Large Language Models (LLMs) to generate realistic test inputs. We evaluate it against several state-of-the-art REST API testing tools, including RESTGPT, a GPT-powered specification-enhancement tool. Our study shows that small language models can perform as well as, or better than, large language models in REST API testing.
arXiv Detail & Related papers (2025-01-15T05:51:20Z)
- Your Fix Is My Exploit: Enabling Comprehensive DL Library API Fuzzing with Large Language Models [49.214291813478695]
Deep learning (DL) libraries, widely used in AI applications, often contain vulnerabilities like overflows and use-after-free errors. Traditional fuzzing struggles with the complexity and API diversity of DL libraries. We propose DFUZZ, an LLM-driven fuzzing approach for DL libraries.
arXiv Detail & Related papers (2025-01-08T07:07:22Z)
- Subgraph-Oriented Testing for Deep Learning Libraries [9.78188667672054]
We propose SORT (Subgraph-Oriented Realistic Testing) to test Deep Learning (DL) libraries on different hardware platforms. SORT takes popular API interaction patterns, represented as frequent subgraphs of model graphs, as test subjects. SORT achieves a 100% valid input generation rate, detects more precision bugs than existing methods, and reveals interaction-related bugs missed by single-API testing.
arXiv Detail & Related papers (2024-12-09T12:10:48Z)
- ExploraCoder: Advancing code generation for multiple unseen APIs via planning and chained exploration [70.26807758443675]
ExploraCoder is a training-free framework that empowers large language models to invoke unseen APIs in code solutions. Experimental results demonstrate that ExploraCoder significantly improves performance for models lacking prior API knowledge.
arXiv Detail & Related papers (2024-12-06T19:00:15Z)
- A Multi-Agent Approach for REST API Testing with Semantic Graphs and LLM-Driven Inputs [46.65963514391019]
We present AutoRestTest, the first black-box tool to adopt a dependency-embedded multi-agent approach for REST API testing. Our approach treats REST API testing as a separable problem, where four agents collaborate to optimize API exploration. Our evaluation of AutoRestTest on 12 real-world REST services shows that it outperforms the four leading black-box REST API testing tools.
arXiv Detail & Related papers (2024-11-11T16:20:27Z)
- Model Equality Testing: Which Model Is This API Serving? [59.005869726179455]
API providers may quantize, watermark, or finetune the underlying model, changing the output distribution. We formalize detecting such distortions by Model Equality Testing, a two-sample testing problem. A test built on a simple string kernel achieves a median of 77.4% power against a range of distortions.
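The two-sample test over string outputs described above can be sketched with a toy character n-gram kernel and a permutation test. This is only a minimal illustration of the general recipe, assuming a cosine n-gram kernel and a biased MMD estimate; it is not the paper's exact kernel or test construction.

```python
# Illustrative two-sample test on samples of model-output strings,
# in the spirit of Model Equality Testing. The kernel choice and
# permutation test here are assumptions, not the paper's construction.
import random
from collections import Counter

def ngram_kernel(s: str, t: str, n: int = 3) -> float:
    """Cosine similarity between character n-gram count vectors."""
    cs = Counter(s[i:i + n] for i in range(len(s) - n + 1))
    ct = Counter(t[i:i + n] for i in range(len(t) - n + 1))
    dot = sum(cs[g] * ct[g] for g in cs)
    na = sum(v * v for v in cs.values()) ** 0.5
    nb = sum(v * v for v in ct.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def mmd(xs, ys):
    """Biased MMD^2 estimate between two samples of strings."""
    kxx = sum(ngram_kernel(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(ngram_kernel(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(ngram_kernel(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

def permutation_pvalue(xs, ys, trials=200, seed=0):
    """P(permuted MMD >= observed MMD) under the null of equal distributions."""
    rng = random.Random(seed)
    observed = mmd(xs, ys)
    pooled = list(xs) + list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        if mmd(pooled[:len(xs)], pooled[len(xs):]) >= observed:
            hits += 1
    return hits / trials

# Toy samples: outputs of a "reference" model vs a "distorted" one.
ref = ["the cat sat on the mat"] * 5
alt = ["teh cat satt on the matt"] * 5
print(permutation_pvalue(ref, alt))  # small p-value: distributions differ
```

A low p-value suggests the API is not serving the reference model's output distribution.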
arXiv Detail & Related papers (2024-10-26T18:34:53Z)
- Reinforcement Learning-Based REST API Testing with Multi-Coverage [4.127886193201882]
MUCOREST is a novel Reinforcement Learning (RL)-based API testing approach that leverages Q-learning to maximize code coverage and output coverage.
MUCOREST significantly outperforms state-of-the-art API testing approaches by 11.6-261.1% in the number of discovered API bugs.
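The Q-learning loop behind a coverage-maximizing tester like MUCOREST can be sketched in its simplest tabular form. Everything below is a stand-in: the single abstract state, the operation names, and the fixed reward table are assumptions, whereas the real approach derives rewards from live code and output coverage.

```python
# Toy sketch of a tabular Q-learning loop for choosing API operations,
# loosely in the spirit of MUCOREST. Operations and rewards are stand-ins;
# the real reward combines code coverage and output coverage of live calls.
import random

def q_learning_episode(q, operations, reward_fn, alpha=0.5, gamma=0.9,
                       epsilon=0.2, steps=20, rng=random):
    """Run one episode, updating Q-values for a single abstract state."""
    for _ in range(steps):
        # epsilon-greedy selection over the API operations
        if rng.random() < epsilon:
            op = rng.choice(operations)
        else:
            op = max(operations, key=lambda o: q[o])
        r = reward_fn(op)                      # e.g. newly covered branches
        best_next = max(q[o] for o in operations)
        q[op] += alpha * (r + gamma * best_next - q[op])
    return q

operations = ["GET /users", "POST /users", "DELETE /users/{id}"]
q = {op: 0.0 for op in operations}
# Stand-in reward table: pretend POST exercises the most new code.
reward = {"GET /users": 0.1, "POST /users": 1.0, "DELETE /users/{id}": 0.3}
q = q_learning_episode(q, operations, reward.get, rng=random.Random(1))
print(q)
```

Over repeated episodes the learner shifts its call budget toward operations whose rewards, here a proxy for newly covered code, are highest.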
arXiv Detail & Related papers (2024-10-20T14:20:23Z)
- DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning [5.756036843502232]
This paper introduces DeepREST, a novel black-box approach for automatically testing REST APIs.
It leverages deep reinforcement learning to uncover implicit API constraints, that is, constraints hidden from API documentation.
Our empirical validation suggests that the proposed approach is very effective in achieving high test coverage and fault detection.
arXiv Detail & Related papers (2024-08-16T08:03:55Z)
- Enhancing Differential Testing With LLMs For Testing Deep Learning Libraries [8.779035160734523]
This paper introduces an LLM-enhanced differential testing technique for DL libraries. It addresses the challenges of finding alternative implementations for a given API and generating diverse test inputs. It synthesizes counterparts for 1.84 times as many APIs as those found by state-of-the-art techniques.
arXiv Detail & Related papers (2024-06-12T07:06:38Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.