Operationalizing Large Language Models with Design-Aware Contexts for Code Comment Generation
- URL: http://arxiv.org/abs/2510.22338v1
- Date: Sat, 25 Oct 2025 15:44:22 GMT
- Title: Operationalizing Large Language Models with Design-Aware Contexts for Code Comment Generation
- Authors: Aritra Mitra, Srijoni Majumdar, Anamitra Mukhopadhyay, Partha Pratim Das, Paul D Clough, Partha Pratim Chakrabarti
- Abstract summary: This study focuses on the feasibility of design documents as a context for the LLMs to generate more useful comments. Design documents are often used by maintainers to understand code when comments do not suffice.
- Score: 6.19791892410626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Comments are essential to the flow of code development. As programming becomes increasingly common, novice coders produce a significant share of codebases. In the absence of commenting standards, their comments are often unhelpful and increase the time needed to maintain the code. This study investigates whether large language models (LLMs) can generate better comments in these cases. It focuses on the feasibility of design documents as a context for LLMs to generate more useful comments, since design documents are what maintainers often turn to for understanding code when comments do not suffice.
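The paper's abstract describes supplying design documents as extra context when prompting an LLM for comments. A minimal sketch of that idea, assuming a simple string-template prompt (the function name, prompt wording, and example text below are illustrative, not the paper's actual pipeline):

```python
# Hedged sketch: pair a code snippet with a design-document excerpt in one
# comment-generation prompt. All names and wording here are assumptions.

def build_design_aware_prompt(code: str, design_excerpt: str) -> str:
    """Combine code with design-document context into one comment-generation prompt."""
    return (
        "You are documenting a codebase for future maintainers.\n\n"
        "Relevant design document excerpt:\n"
        f"{design_excerpt}\n\n"
        "Code:\n"
        f"{code}\n\n"
        "Write a concise comment explaining what this code does and why."
    )

# Illustrative inputs; the resulting string would be sent to an LLM.
prompt = build_design_aware_prompt(
    code="def settle(batch): ...",
    design_excerpt="The settlement module batches ledger entries nightly.",
)
```

Placing the design excerpt before the code lets the model ground the comment in intent ("why") rather than restating the code ("what").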
Related papers
- Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization [82.29178197694819]
We derive and evaluate development-specific prompt optimization guidelines. We use an iterative, test-driven approach to automatically refine code generation prompts. We conduct an assessment with 50 practitioners, who report their usage of the elicited prompt improvement patterns.
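The iterative, test-driven refinement described above can be sketched as a greedy loop: score a candidate prompt against a small test suite and keep a mutation only if it passes more checks. The LLM call is stubbed out, and every name below is an assumption, not the paper's API:

```python
# Hedged sketch of test-driven prompt refinement; not the paper's implementation.

def run_tests(generate, prompt, tests):
    """Count how many (task, check) pairs the prompt's generations satisfy."""
    return sum(1 for task, check in tests if check(generate(prompt, task)))

def refine(generate, prompt, mutations, tests):
    """Greedy refinement: apply each mutation, keep it only if the score improves."""
    best_score = run_tests(generate, prompt, tests)
    for mutate in mutations:
        candidate = mutate(prompt)
        score = run_tests(generate, candidate, tests)
        if score > best_score:
            prompt, best_score = candidate, score
    return prompt, best_score

# Toy stand-in for an LLM: echoes the prompt style into its "generation".
fake_llm = lambda prompt, task: f"{prompt} :: {task}"
tests = [("sort a list", lambda out: "step by step" in out)]
mutations = [lambda p: p + " Think step by step."]
final_prompt, score = refine(fake_llm, "Write Python code.", mutations, tests)
```

The greedy accept-if-better rule is the simplest policy; a real system would re-run tests with multiple samples to handle LLM nondeterminism.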
arXiv Detail & Related papers (2026-01-19T15:01:42Z) - Optimizing Knowledge Utilization for Multi-Intent Comment Generation with Large Language Models [12.546021344852305]
Code comment generation aims to produce a generic overview of a code snippet, helping developers understand and maintain code. This highlights the necessity of multi-intent comment generation. We propose a framework named KUMIC for multi-intent comment generation.
arXiv Detail & Related papers (2025-10-29T06:02:23Z) - IFEvalCode: Controlled Code Generation [69.28317223249358]
The paper introduces forward and backward constraints generation to improve the instruction-following capabilities of Code LLMs. The authors present IFEvalCode, a multilingual benchmark comprising 1.6K test samples across seven programming languages.
arXiv Detail & Related papers (2025-07-30T08:08:48Z) - Assessing Consensus of Developers' Views on Code Readability [3.798885293742468]
Developers now spend more time reviewing code than writing it, highlighting the importance of Code Readability for code comprehension.
Previous research found that existing Code Readability models were inaccurate in representing developers' notions.
We surveyed 10 Java developers with similar coding experience to evaluate their consensus on Code Readability assessments and related aspects.
arXiv Detail & Related papers (2024-07-04T09:54:42Z) - Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective [85.48043537327258]
We propose MANGO (comMents As Natural loGic pivOts), including a comment contrastive training strategy and a corresponding logical comment decoding strategy.
Results indicate that MANGO significantly improves the code pass rate based on the strong baselines.
The robustness of the logical comment decoding strategy is notably higher than the Chain-of-thoughts prompting.
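The comment-as-pivot idea above can be sketched as two-stage decoding: generate a natural-language comment for the task first, then condition code generation on that comment. Both model calls are stubbed; this is a sketch of the general pattern, not MANGO's actual training or decoding procedure:

```python
# Hedged sketch of comment-first decoding; all prompt wording is illustrative.

def comment_pivot_generate(llm, task: str) -> str:
    """Two-stage decoding: comment first, then code conditioned on the comment."""
    comment = llm(f"Summarize the logic needed for: {task}")
    code = llm(f"# {comment}\n# Task: {task}\nWrite the code below the comment.")
    return f"# {comment}\n{code}"
```

The comment acts as an intermediate reasoning step, analogous to chain-of-thought but anchored to the code's natural documentation format.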
arXiv Detail & Related papers (2024-04-11T08:30:46Z) - Can ChatGPT Support Developers? An Empirical Evaluation of Large Language Models for Code Generation [2.93322471069531]
We conduct an empirical analysis of conversations in DevGPT, a dataset collected from developers' conversations with ChatGPT.
Our findings indicate that the current practice of using LLM-generated code is typically limited to either demonstrating high-level concepts or providing examples in documentation.
arXiv Detail & Related papers (2024-02-18T20:48:09Z) - Large Language Models are Few-Shot Summarizers: Multi-Intent Comment Generation via In-Context Learning [34.006227676170504]
This study investigates the feasibility of utilizing large language models (LLMs) to generate comments that can fulfill developers' diverse intents.
Experiments on two large-scale datasets demonstrate the rationale of our insights.
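The in-context-learning setup above amounts to concatenating a few (code, comment) demonstration pairs ahead of the target snippet so the LLM imitates the demonstrated intent. A minimal sketch, where the example selection and formatting details are assumptions:

```python
# Hedged sketch of few-shot prompt assembly for intent-aware comment generation.

def few_shot_comment_prompt(examples, target_code, intent="what"):
    """Build a few-shot prompt from (code, comment) demonstration pairs."""
    parts = [f"Generate a '{intent}' comment for each code snippet."]
    for code, comment in examples:
        parts.append(f"Code:\n{code}\nComment: {comment}")
    # End with the target snippet and an open "Comment:" for the LLM to complete.
    parts.append(f"Code:\n{target_code}\nComment:")
    return "\n\n".join(parts)
```

Varying the demonstration pairs per intent (e.g. "what" vs. "why" comments) is what lets a single frozen model serve multiple comment intents.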
arXiv Detail & Related papers (2023-04-22T12:26:24Z) - ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
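The lexical-copying half of the retrieval step above can be sketched with token-set Jaccard similarity: score corpus snippets against the unfinished code and prepend the best match to the completion prompt. ReACC additionally uses semantic (dense) retrieval, which is omitted here; all names are illustrative:

```python
# Hedged sketch of lexical retrieval-augmented completion; not ReACC's code.
import re

def tokens(code: str) -> set:
    """Identifier-level tokens of a code string."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_and_prompt(context: str, corpus: list) -> str:
    """Pick the lexically closest corpus snippet and prepend it as a hint."""
    best = max(corpus, key=lambda snippet: jaccard(tokens(context), tokens(snippet)))
    return f"# Similar code for reference:\n{best}\n\n{context}"
```

In a real system the corpus would be pre-indexed (e.g. with BM25) rather than scanned linearly per query.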
arXiv Detail & Related papers (2022-03-15T08:25:08Z) - CodeRetriever: Unimodal and Bimodal Contrastive Learning [128.06072658302165]
We propose the CodeRetriever model, which combines the unimodal and bimodal contrastive learning to train function-level code semantic representations.
For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name.
For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs.
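The bimodal text-code objective above is typically an in-batch softmax (InfoNCE-style) contrastive loss, where each text embedding should score highest against its paired code embedding. A pure-Python toy over fixed vectors; real CodeRetriever trains transformer encoders, which are not reproduced here:

```python
# Hedged sketch of an InfoNCE-style contrastive loss over paired embeddings.
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def info_nce(text_embs, code_embs, temperature=1.0):
    """Average -log softmax probability of each text's true code pair."""
    loss = 0.0
    for i, t in enumerate(text_embs):
        logits = [dot(t, c) / temperature for c in code_embs]
        log_z = math.log(sum(math.exp(l) for l in logits))
        loss += log_z - logits[i]  # -log p(correct pair)
    return loss / len(text_embs)

# Perfectly aligned pairs score lower loss than mismatched ones.
aligned = info_nce([(1, 0), (0, 1)], [(1, 0), (0, 1)])
shuffled = info_nce([(1, 0), (0, 1)], [(0, 1), (1, 0)])
```

Other in-batch pairs serve as negatives for free, which is why the loss is computed over the full batch rather than per pair.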
arXiv Detail & Related papers (2022-01-26T10:54:30Z) - Deep Just-In-Time Inconsistency Detection Between Comments and Source Code [51.00904399653609]
In this paper, we aim to detect whether a comment becomes inconsistent as a result of changes to the corresponding body of code.
We develop a deep-learning approach that learns to correlate a comment with code changes.
We show the usefulness of our approach by combining it with a comment update model to build a more comprehensive automatic comment maintenance system.
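The paper learns to correlate comments with code changes; as a much simpler baseline sketch of the same task, one can flag a comment as possibly stale when its lexical overlap with the code drops after an edit. The threshold and all names below are illustrative assumptions, not the paper's method:

```python
# Hedged heuristic sketch of comment-code inconsistency detection.
import re

def overlap(comment: str, code: str) -> float:
    """Fraction of comment tokens that also appear in the code."""
    c = set(re.findall(r"\w+", comment.lower()))
    k = set(re.findall(r"\w+", code.lower()))
    return len(c & k) / len(c) if c else 0.0

def likely_inconsistent(comment, old_code, new_code, drop=0.25):
    """Flag when comment-code overlap fell by more than `drop` after the edit."""
    return overlap(comment, old_code) - overlap(comment, new_code) > drop
```

A lexical heuristic like this misses paraphrases and semantic drift, which is precisely the gap the learned model targets.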
arXiv Detail & Related papers (2020-10-04T16:49:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.