Exploring the Impact of Source Code Linearity on the Programmers Comprehension of API Code Examples
- URL: http://arxiv.org/abs/2404.02377v1
- Date: Wed, 3 Apr 2024 00:40:38 GMT
- Title: Exploring the Impact of Source Code Linearity on the Programmers Comprehension of API Code Examples
- Authors: Seham Alharbi, Dimitris Kolovos
- Abstract summary: We investigated whether the (a) linearity and (b) length of the source code in API code examples affect users' performance in terms of correctness and time spent.
We conducted an online controlled code comprehension experiment with 61 Java developers.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Context: Application Programming Interface (API) code examples are an essential knowledge resource for learning APIs. However, only a few user studies have explored how the structural characteristics of the source code in code examples impact their comprehensibility and reusability. Objectives: We investigated whether the (a) linearity and (b) length of the source code in API code examples affect users' performance in terms of correctness and time spent. We also collected subjective ratings. Methods: We conducted an online controlled code comprehension experiment with 61 Java developers. As a case study, we used the API code examples from the Joda-Time Java library. We had participants perform code comprehension and reuse tasks on variants of these examples with different lengths and degrees of linearity. Findings: Participants demonstrated faster reaction times when exposed to linear code examples. However, no substantial differences in correctness or subjective ratings were observed. Implications: Our findings suggest that the linear presentation of source code may enhance initial example understanding and reusability. This, in turn, may provide API developers with some insights into the effective structuring of their API code examples. However, we highlight the need for further investigation.
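To make the manipulated factor concrete: the abstract does not reproduce the experimental materials, so the snippet below is a hypothetical illustration of a "linear" versus a "less linear" variant of a Joda-Time code example. The scenario (formatting a deadline 14 days ahead) is invented for this sketch; only the Joda-Time classes and methods (DateTime, Days, DateTimeFormat) are real API.

```java
// Illustrative sketch only: not the paper's stimulus code. The two variants behave
// identically; they differ only in how linearly the source can be read.
import org.joda.time.DateTime;
import org.joda.time.Days;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

public class LinearityDemo {

    // Linear variant: statements read strictly top to bottom.
    static String linearVariant() {
        DateTime start = new DateTime(2024, 4, 1, 9, 0);
        DateTime deadline = start.plusDays(14);
        int daysLeft = Days.daysBetween(start, deadline).getDays();
        DateTimeFormatter fmt = DateTimeFormat.forPattern("yyyy-MM-dd");
        return daysLeft + " days until " + fmt.print(deadline);
    }

    // Less linear variant: same behaviour, but the reader must jump between
    // nested calls and helper methods to follow the flow.
    static String nonLinearVariant() {
        return remaining(new DateTime(2024, 4, 1, 9, 0));
    }

    private static String remaining(DateTime start) {
        return Days.daysBetween(start, start.plusDays(14)).getDays()
                + " days until " + format(start.plusDays(14));
    }

    private static String format(DateTime date) {
        return DateTimeFormat.forPattern("yyyy-MM-dd").print(date);
    }

    public static void main(String[] args) {
        System.out.println(linearVariant());     // "14 days until 2024-04-15"
        System.out.println(nonLinearVariant());  // same output, less linear source
    }
}
```

Both variants print the same result; the difference lies purely in whether the code can be followed top to bottom or requires jumping through helpers, which is the kind of structural difference the study manipulates.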
Related papers
- A Comprehensive Framework for Evaluating API-oriented Code Generation in Large Language Models [14.665460257371164]
Large language models (LLMs) like GitHub Copilot and ChatGPT have emerged as powerful tools for code generation.
We propose AutoAPIEval, a framework designed to evaluate the capabilities of LLMs in API-oriented code generation.
arXiv Detail & Related papers (2024-09-23T17:22:09Z)
- Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate based on the strong baselines.
The robustness of the logical comment decoding strategy is notably higher than that of Chain-of-Thought prompting.
arXiv Detail & Related papers (2024-04-11T08:30:46Z)
- CodeScholar: Growing Idiomatic Code Examples [26.298684667238994]
We present CodeScholar, a tool that generates idiomatic code examples demonstrating the common usage of API methods.
It includes a novel neural-guided search technique over graphs that grows the query APIs into idiomatic code examples.
We show that CodeScholar helps not only developers but also LLM-powered programming assistants generate correct code in a program synthesis setting.
arXiv Detail & Related papers (2023-12-23T04:06:15Z)
- Exploring Behaviours of RESTful APIs in an Industrial Setting [0.43012765978447565]
We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Our approach can generate examples that practitioners deem relevant both for understanding the system and as a source of test cases.
arXiv Detail & Related papers (2023-10-26T11:33:11Z)
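The summary above does not show what such a behavioural property looks like in practice. The sketch below is a generic, hypothetical example (not drawn from the paper or its tool) of one property many REST APIs are expected to satisfy, written against Java's standard java.net.http client; the endpoint URL and payload are invented.

```java
// Hypothetical illustration of one common REST behavioural property:
// "a resource created with POST can immediately be retrieved with GET".
// Endpoint, payload, and field names are invented for this sketch.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostThenGetProperty {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://api.example.com/orders";   // hypothetical API

        // Create a resource.
        HttpRequest post = HttpRequest.newBuilder(URI.create(base))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"book\"}"))
                .build();
        HttpResponse<String> created = client.send(post, HttpResponse.BodyHandlers.ofString());

        // The property: a successful POST exposes a Location that answers GET with 200.
        String location = created.headers().firstValue("Location").orElseThrow();
        HttpRequest get = HttpRequest.newBuilder(URI.create(location)).GET().build();
        HttpResponse<String> fetched = client.send(get, HttpResponse.BodyHandlers.ofString());

        boolean holds = created.statusCode() == 201 && fetched.statusCode() == 200;
        System.out.println("POST-then-GET property holds: " + holds);
    }
}
```

A generated example of this kind documents observed behaviour and doubles as an executable test case, which is the dual use the entry describes.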
- Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names? [28.86399157983769]
Recent pre-trained code models, such as CodeBERT and Codex, have shown superior performance in various downstream tasks.
Recent studies reveal that even state-of-the-art pre-trained code models struggle with suggesting the correct APIs during code generation.
arXiv Detail & Related papers (2023-09-14T15:46:41Z)
- Exploring API Behaviours Through Generated Examples [0.768721532845575]
We present an approach to automatically generate relevant examples of behaviours of an API.
Our method can produce small and relevant examples that can help engineers to understand the system under exploration.
arXiv Detail & Related papers (2023-08-29T11:05:52Z)
- Private-Library-Oriented Code Generation with Large Language Models [52.73999698194344]
This paper focuses on utilizing large language models (LLMs) for code generation in private libraries.
We propose a novel framework that emulates the process of programmers writing private code.
We create four private library benchmarks, including TorchDataEval, TorchDataComplexEval, MonkeyEval, and BeatNumEval.
arXiv Detail & Related papers (2023-07-28T07:43:13Z)
- Structured Prompting: Scaling In-Context Learning to 1,000 Examples [78.41281805608081]
We introduce structured prompting that breaks the length limit and scales in-context learning to thousands of examples.
Specifically, demonstration examples are separately encoded with well-designed position embeddings, and then they are jointly attended by the test example using a rescaled attention mechanism.
arXiv Detail & Related papers (2022-12-13T16:31:21Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
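ReACC's actual retriever is a hybrid of lexical and semantic matching feeding a neural completion model; the self-contained sketch below is only a toy illustration of the general retrieve-then-augment idea, using plain Jaccard similarity over tokens and a hard-coded snippet "database". None of it is the authors' code.

```java
// Toy illustration of retrieval-augmented code completion: retrieve the most
// similar stored snippet and prepend it to the completion prompt. This is NOT
// the ReACC implementation; retrieval here is simple Jaccard token overlap.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RetrieveThenComplete {

    static double jaccard(String a, String b) {
        Set<String> ta = new HashSet<>(Arrays.asList(a.split("\\s+")));
        Set<String> tb = new HashSet<>(Arrays.asList(b.split("\\s+")));
        Set<String> inter = new HashSet<>(ta);
        inter.retainAll(tb);
        Set<String> union = new HashSet<>(ta);
        union.addAll(tb);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        // Unfinished code whose completion we want.
        String context = "BufferedReader reader = new BufferedReader ( new FileReader ( path ) ) ;";

        // Stand-in retrieval corpus of previously seen snippets.
        List<String> corpus = List.of(
                "BufferedReader reader = new BufferedReader ( new FileReader ( file ) ) ; "
                        + "String line ; while ( ( line = reader.readLine ( ) ) != null ) { lines.add ( line ) ; }",
                "int sum = 0 ; for ( int v : values ) { sum += v ; }");

        // Pick the most lexically similar snippet and prepend it to the prompt.
        String best = corpus.stream()
                .max((x, y) -> Double.compare(jaccard(context, x), jaccard(context, y)))
                .orElse("");
        String prompt = "// retrieved similar code:\n" + best + "\n// complete:\n" + context;

        // A code completion model would consume this augmented prompt.
        System.out.println(prompt);
    }
}
```

In a real system the augmented prompt would be passed to a code completion model rather than printed.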
- A Transformer-based Approach for Source Code Summarization [86.08359401867577]
We learn code representation for summarization by modeling the pairwise relationship between code tokens.
We show that, despite its simplicity, the approach outperforms state-of-the-art techniques by a significant margin.
arXiv Detail & Related papers (2020-05-01T23:29:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.