AI-Powered Commit Explorer (APCE)
- URL: http://arxiv.org/abs/2507.16063v1
- Date: Mon, 21 Jul 2025 20:58:56 GMT
- Title: AI-Powered Commit Explorer (APCE)
- Authors: Yousab Grees, Polina Iaremchuk, Ramtin Ehsani, Esteban Parra, Preetha Chatterjee, Sonia Haiduc
- Abstract summary: Large Language Model (LLM) generated commit messages have emerged as a way to mitigate the problem of neglected, low-quality commit messages. We introduce the AI-Powered Commit Explorer (APCE), a tool to support developers and researchers in the use and study of LLM-generated commit messages.
- Score: 4.651023094799028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commit messages in a version control system provide valuable information for developers regarding code changes in software systems. Commit messages can be the only source of information left for future developers describing what was changed and why. However, writing high-quality commit messages is often neglected in practice. Large Language Model (LLM) generated commit messages have emerged as a way to mitigate this issue. We introduce the AI-Powered Commit Explorer (APCE), a tool to support developers and researchers in the use and study of LLM-generated commit messages. APCE gives researchers the option to store different prompts for LLMs and provides an additional evaluation prompt that can further enhance the commit message provided by LLMs. APCE also provides researchers with a straightforward mechanism for automated and human evaluation of LLM-generated messages. Demo link https://youtu.be/zYrJ9s6sZvo
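The abstract describes a two-step flow: an LLM drafts a commit message from a code change, and a stored evaluation prompt is then used to refine that draft. The sketch below illustrates that idea under stated assumptions; the prompt wording, the gpt-4o-mini model, the `ask` helper, and the use of the OpenAI client are illustrative choices, not APCE's actual implementation.

```python
# Hypothetical sketch of the generate-then-refine flow described in the APCE
# abstract: (1) ask an LLM for a commit message from a diff, (2) apply an
# additional evaluation prompt to improve the draft. Prompts, model name, and
# the OpenAI backend are illustrative assumptions, not APCE's internals.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; APCE lets researchers store different prompts
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def generate_commit_message(diff: str) -> str:
    # Step 1: generation prompt produces a draft commit message.
    draft = ask(f"Write a concise, imperative commit message for this diff:\n{diff}")
    # Step 2: evaluation prompt reviews and, if needed, rewrites the draft.
    return ask(
        "Review the commit message below against the diff. "
        "Rewrite it if the summary of what changed and why can be improved.\n"
        f"Diff:\n{diff}\nDraft message:\n{draft}"
    )

if __name__ == "__main__":
    example_diff = "- timeout = 30\n+ timeout = 60  # avoid spurious retries"
    print(generate_commit_message(example_diff))
```

The two-prompt structure mirrors the abstract's claim that an extra evaluation prompt can further enhance the message an LLM first produces.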
Related papers
- Evaluating Generated Commit Messages with Large Language Models [10.048749643042491]
Commit messages are essential in software development as they serve to document and explain code changes. This study investigates the potential of Large Language Models (LLMs) as automated evaluators for commit message quality.
arXiv Detail & Related papers (2025-07-15T01:50:20Z)
- An Empirical Study on Commit Message Generation using LLMs via In-Context Learning [26.39743339039473]
Commit messages concisely describe code changes in natural language. We propose to leverage large language models (LLMs) with in-context learning (ICL) to generate commit messages.
arXiv Detail & Related papers (2025-02-26T07:47:52Z)
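The in-context-learning entry above amounts to assembling a few diff-to-message demonstrations ahead of the target diff. A minimal sketch of such prompt construction follows; the demonstration pairs and template wording are invented for illustration and are not the paper's actual setup.

```python
# Illustrative few-shot (in-context learning) prompt assembly for commit
# message generation; the demonstration diffs/messages are made up and the
# prompt format is an assumption, not the cited paper's exact template.
from dataclasses import dataclass

@dataclass
class Demo:
    diff: str
    message: str

DEMOS = [
    Demo("- retries = 1\n+ retries = 3", "Increase HTTP retry count to 3"),
    Demo("+ def close(self):\n+     self.conn.close()", "Add close() to release the DB connection"),
]

def build_icl_prompt(target_diff: str) -> str:
    """Concatenate demonstration diff/message pairs before the target diff."""
    parts = ["Generate a concise commit message for each diff.\n"]
    for d in DEMOS:
        parts.append(f"Diff:\n{d.diff}\nCommit message: {d.message}\n")
    parts.append(f"Diff:\n{target_diff}\nCommit message:")
    return "\n".join(parts)

if __name__ == "__main__":
    print(build_icl_prompt("- timeout = 30\n+ timeout = 60"))
```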
- Context Conquers Parameters: Outperforming Proprietary LLM in Commit Message Generation [4.400274233826898]
Open-source Large Language Models can generate commit messages comparable to those produced by OMG, a proprietary-LLM-based commit message generation (CMG) approach.
We propose lOcal MessagE GenerAtor, a CMG approach that uses a 4-bit quantized 8B open-source LLM.
arXiv Detail & Related papers (2024-08-05T14:26:41Z)
- Get my drift? Catching LLM Task Drift with Activation Deltas [55.75645403965326]
Task drift allows attackers to exfiltrate data or influence the LLM's output for other users. We show that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set. We observe that this approach generalizes surprisingly well to unseen task domains, such as prompt injections, jailbreaks, and malicious instructions.
arXiv Detail & Related papers (2024-06-02T16:53:21Z)
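The task-drift entry above reports that a plain linear classifier over activation deltas separates drifted from clean runs. The toy sketch below shows what such a linear probe looks like on synthetic stand-in features; it does not use real LLM activations or the paper's released code.

```python
# Toy sketch of a linear probe over "activation deltas" (activations after
# processing external input minus activations before). The features here are
# synthetic stand-ins for real LLM activations; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
dim, n = 256, 1000

# Synthetic deltas: drifted examples get a small extra mean shift.
clean = rng.normal(0.0, 1.0, size=(n, dim))
drifted = rng.normal(0.3, 1.0, size=(n, dim))
X = np.vstack([clean, drifted])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = probe.decision_function(X_te)
print("ROC AUC:", roc_auc_score(y_te, scores))
```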
- Commit Messages in the Age of Large Language Models [0.9217021281095906]
We evaluate the performance of OpenAI's ChatGPT for generating commit messages based on code changes.
We compare the results obtained with ChatGPT to previous automatic commit message generation methods that have been trained specifically on commit data.
arXiv Detail & Related papers (2024-01-31T06:47:12Z)
- Using Large Language Models for Commit Message Generation: A Preliminary Study [5.5784148764236114]
Large language models (LLMs) can be used to generate commit messages automatically and effectively.
In 78% of the 366 samples, human evaluators rated the LLM-generated commit messages as the best.
arXiv Detail & Related papers (2024-01-11T14:06:39Z)
- Prompt Highlighter: Interactive Control for Multi-Modal LLMs [50.830448437285355]
This study targets a critical aspect of inference in multi-modal LLMs (LLMs & VLMs): explicit, controllable text generation.
We introduce a novel inference method, Prompt Highlighter, which enables users to highlight specific prompt spans to interactively control the focus during generation.
We find that, during inference, guiding the models with highlighted tokens through the attention weights leads to more desired outputs.
arXiv Detail & Related papers (2023-12-07T13:53:29Z)
- LM-Polygraph: Uncertainty Estimation for Language Models [71.21409522341482]
Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of large language models (LLMs).
We introduce LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python.
It introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores.
arXiv Detail & Related papers (2023-11-13T15:08:59Z)
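LM-Polygraph, summarized above, packages many uncertainty estimation (UE) methods behind a unified Python interface. As a point of reference only, the sketch below computes one of the simplest sequence-level UE scores, the average per-token negative log-likelihood, with Hugging Face transformers; it is not LM-Polygraph's API.

```python
# Simple baseline uncertainty score for a generated text: mean per-token
# negative log-likelihood under a small causal LM. Illustrative only; not
# the LM-Polygraph framework or its interfaces.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def mean_token_nll(text: str) -> float:
    """Higher values suggest the model is less confident about the text."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # loss is cross-entropy averaged over tokens
    return out.loss.item()

if __name__ == "__main__":
    print(mean_token_nll("Fix null pointer check in the session handler."))
```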
- Guiding LLM to Fool Itself: Automatically Manipulating Machine Reading Comprehension Shortcut Triggers [76.77077447576679]
Shortcuts, mechanisms triggered by features spuriously correlated with the true label, have emerged as a potential threat to Machine Reading Comprehension (MRC) systems.
We introduce a framework that guides an editor to add potential shortcut triggers to samples.
Using GPT4 as the editor, we find it can successfully insert shortcut triggers into samples that fool LLMs.
arXiv Detail & Related papers (2023-10-24T12:37:06Z)
- Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback [127.75419038610455]
Large language models (LLMs) are able to generate human-like, fluent responses for many downstream tasks.
This paper proposes LLM-Augmenter, a system that augments a black-box LLM with a set of plug-and-play modules.
arXiv Detail & Related papers (2023-02-24T18:48:43Z)
- Attributed Question Answering: Evaluation and Modeling for Attributed Large Language Models [68.37431984231338]
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision.
We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting.
arXiv Detail & Related papers (2022-12-15T18:45:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this automatically generated content (including all information) and is not responsible for any consequences of its use.