Understanding API Usage and Testing: An Empirical Study of C Libraries
- URL: http://arxiv.org/abs/2506.11598v1
- Date: Fri, 13 Jun 2025 09:07:16 GMT
- Title: Understanding API Usage and Testing: An Empirical Study of C Libraries
- Authors: Ahmed Zaki, Cristian Cadar
- Abstract summary: This study is the first to compare API usage and API testing at scale for the C/C++ ecosystem. For our empirical study, we have developed LibProbe, a framework that can be used to analyse a large corpus of clients for a given library.
- Score: 0.2532202013576546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For library developers, understanding how their Application Programming Interfaces (APIs) are used in the field can be invaluable. Knowing how clients are using their APIs allows for data-driven decisions on prioritising bug reports, feature requests, and testing activities. For example, the priority of a bug report concerning an API can be partly determined by how widely that API is used. In this paper, we present an empirical study in which we analyse API usage across 21 popular open-source C libraries, such as OpenSSL and SQLite, with a combined total of 3,061 C/C++ clients. We compare API usage by clients with how well library test suites exercise the APIs to offer actionable insights for library developers. To our knowledge, this is the first study that compares API usage and API testing at scale for the C/C++ ecosystem. Our study shows that library developers do not prioritise their effort based on how clients use their API, with popular APIs often poorly tested. For example, in LMDB, a popular key-value store, 45% of the APIs are used by clients but not tested by the library test suite. We further show that client test suites can be leveraged to improve library testing, e.g., improving coverage in LMDB by 14.7%, with the important advantage that those tests are representative of how the APIs are used in the field. For our empirical study, we have developed LibProbe, a framework that can be used to analyse a large corpus of clients for a given library and produce various metrics useful to library developers.
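To make the comparison concrete, below is a minimal sketch of the kind of usage-vs-testing analysis the abstract describes, not the authors' LibProbe implementation: it assumes the library's public API names have already been extracted (e.g., from its headers), finds call sites with a simple regex rather than real static analysis, and flags APIs that clients call but the library's own test suite never exercises.

```python
# Minimal sketch (NOT LibProbe): compare client API usage with library test usage.
import re
from collections import Counter
from pathlib import Path

def api_calls(source_dirs, api_names):
    """Count call sites of each API name across the given C/C++ source trees."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, api_names)) + r")\s*\(")
    counts = Counter()
    for root in source_dirs:
        for path in Path(root).rglob("*"):
            if path.suffix in {".c", ".cc", ".cpp", ".h"}:
                text = path.read_text(errors="ignore")
                counts.update(m.group(1) for m in pattern.finditer(text))
    return counts

def used_but_untested(api_names, client_dirs, test_dirs):
    """Return APIs that appear in client code but never in the library's own tests."""
    used = api_calls(client_dirs, api_names)
    tested = api_calls(test_dirs, api_names)
    return sorted(name for name in used if name not in tested)

# Hypothetical usage: rank LMDB APIs by client call count and flag testing gaps.
# apis = ["mdb_env_open", "mdb_txn_begin", "mdb_put", "mdb_get"]  # from lmdb.h
# print(used_but_untested(apis, ["clients/"], ["lmdb/libraries/liblmdb/"]))
```

A real tool would resolve calls through the compiler or an AST rather than a regex, to avoid matching identically named local functions or commented-out code.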
Related papers
- Your Fix Is My Exploit: Enabling Comprehensive DL Library API Fuzzing with Large Language Models [49.214291813478695]
Deep learning (DL) libraries, widely used in AI applications, often contain vulnerabilities like buffer overflows and use-after-free errors. Traditional fuzzing struggles with the complexity and API diversity of DL libraries. We propose DFUZZ, an LLM-driven fuzzing approach for DL libraries.
arXiv Detail & Related papers (2025-01-08T07:07:22Z) - ExploraCoder: Advancing code generation for multiple unseen APIs via planning and chained exploration [70.26807758443675]
ExploraCoder is a training-free framework that empowers large language models to invoke unseen APIs in code solutions. We show that ExploraCoder significantly improves performance for models lacking prior API knowledge, achieving an absolute increase of 11.24% over naive RAG approaches and 14.07% over pretraining methods in pass@10.
arXiv Detail & Related papers (2024-12-06T19:00:15Z) - A Systematic Evaluation of Large Code Models in API Suggestion: When, Which, and How [53.65636914757381]
API suggestion is a critical task in modern software development.
Recent advancements in large code models (LCMs) have shown promise in the API suggestion task.
arXiv Detail & Related papers (2024-09-20T03:12:35Z) - An Empirical Study of API Misuses of Data-Centric Libraries [9.667988837321943]
This paper contributes an empirical study of API misuses of five data-centric libraries that cover areas such as data processing, numerical computation, machine learning, and visualization.
We identify misuses of these libraries by analyzing data from both Stack Overflow and GitHub.
arXiv Detail & Related papers (2024-08-28T15:15:52Z) - A Solution-based LLM API-using Methodology for Academic Information Seeking [49.096714812902576]
SoAy is a solution-based LLM API-using methodology for academic information seeking.
It uses code with a solution as the reasoning method, where a solution is a pre-constructed API calling sequence.
Results show a 34.58-75.99% performance improvement compared to state-of-the-art LLM API-based baselines.
arXiv Detail & Related papers (2024-05-24T02:44:14Z) - Lightweight Syntactic API Usage Analysis with UCov [0.0]
We present a novel conceptual framework designed to assist library maintainers in understanding the interactions allowed by their APIs.
These customizable models enable library maintainers to improve their design ahead of release, reducing friction during evolution.
We implement these models for Java libraries in a new tool UCov and demonstrate its capabilities on three libraries exhibiting diverse styles of interaction.
arXiv Detail & Related papers (2024-02-19T10:33:41Z) - Private-Library-Oriented Code Generation with Large Language Models [52.73999698194344]
This paper focuses on utilizing large language models (LLMs) for code generation in private libraries.
We propose a novel framework that emulates the process of programmers writing private code.
We create four private library benchmarks, including TorchDataEval, TorchDataComplexEval, MonkeyEval, and BeatNumEval.
arXiv Detail & Related papers (2023-07-28T07:43:13Z) - Automatic Unit Test Generation for Deep Learning Frameworks based on API Knowledge [11.523398693942413]
We propose MUTester to generate unit test cases for APIs of deep learning frameworks.
We first propose a set of 18 rules for mining API constraints from the API documents.
We then use the frequent itemset mining technique to mine the API usage patterns from a large corpus of machine learning API related code fragments.
arXiv Detail & Related papers (2023-07-01T18:34:56Z) - Carving UI Tests to Generate API Tests and API Specification [8.743426215048451]
API-level testing can play an important role in between unit-level testing and UI-level (or end-to-end) testing.
Existing API testing tools require API specifications, which often may not be available or, when available, be inconsistent with the API implementation.
We present an approach that leverages UI testing to enable API-level testing for web applications.
arXiv Detail & Related papers (2023-05-24T03:53:34Z) - Evaluating Embedding APIs for Information Retrieval [51.24236853841468]
We evaluate the capabilities of existing semantic embedding APIs on domain generalization and multilingual retrieval.
We find that re-ranking BM25 results using the APIs is a budget-friendly approach and is most effective in English.
For non-English retrieval, re-ranking still improves the results, but a hybrid model with BM25 works best, albeit at a higher cost.
arXiv Detail & Related papers (2023-05-10T16:40:52Z)
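As an aside on the last entry above, the re-ranking recipe it evaluates is straightforward to sketch. The snippet below is a generic illustration, not code from that paper: embed is a hypothetical placeholder that returns fake vectors so the example runs stand-alone, whereas a real setup would call the commercial embedding API under evaluation on the query and the top BM25 hits.

```python
# Generic sketch of "re-rank BM25 results with an embedding API" (illustrative only).
import numpy as np

def embed(texts):
    # Hypothetical placeholder: a real setup would call the embedding API being
    # evaluated; random vectors are used only so the sketch runs stand-alone.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

def rerank(query, bm25_candidates, top_k=10):
    """Re-order BM25 candidate passages by cosine similarity to the query."""
    vecs = embed([query] + bm25_candidates)
    q, docs = vecs[0], vecs[1:]
    sims = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:top_k]
    return [bm25_candidates[i] for i in order]

# Usage: pass the top-100 BM25 hits, keep the 10 most similar by embedding.
# reranked = rerank("how do I open an LMDB environment?", bm25_top100, top_k=10)
```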
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.