CodeWatcher: IDE Telemetry Data Extraction Tool for Understanding Coding Interactions with LLMs
- URL: http://arxiv.org/abs/2510.11536v1
- Date: Mon, 13 Oct 2025 15:39:08 GMT
- Title: CodeWatcher: IDE Telemetry Data Extraction Tool for Understanding Coding Interactions with LLMs
- Authors: Manaal Basha, Aimeê M. Ribeiro, Jeena Javahar, Cleidson R. B. de Souza, Gema Rodríguez-Pérez
- Abstract summary: *CodeWatcher* is a lightweight, unobtrusive client-server system designed to capture fine-grained interaction events from within the Visual Studio Code editor. *CodeWatcher* logs semantically meaningful events such as insertions made by CGTs, deletions, copy-paste actions, and focus shifts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how developers interact with code generation tools (CGTs) requires detailed, real-time data on programming behavior, which is often difficult to collect without disrupting workflow. We present *CodeWatcher*, a lightweight, unobtrusive client-server system designed to capture fine-grained interaction events from within the Visual Studio Code (VS Code) editor. *CodeWatcher* logs semantically meaningful events such as insertions made by CGTs, deletions, copy-paste actions, and focus shifts, enabling continuous monitoring of developer activity without modifying user workflows. The system comprises a VS Code plugin, a Python-based RESTful API, and a MongoDB backend, all containerized for scalability and ease of deployment. By structuring and timestamping each event, *CodeWatcher* enables post-hoc reconstruction of coding sessions and facilitates rich behavioral analyses, including how and when CGTs are used during development. This infrastructure is crucial for supporting research on responsible AI, developer productivity, and the human-centered evaluation of CGTs. Please find the demo, diagrams, and tool here: https://osf.io/j2kru/overview.
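The abstract describes a pipeline in which each editor event is structured, timestamped, and sent to a RESTful API backed by MongoDB, so that coding sessions can be reconstructed post hoc. The following is a minimal Python sketch of that idea; the field names and helper functions are illustrative assumptions, not CodeWatcher's actual schema (which is documented at the OSF link above).

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical event record mirroring the event kinds the abstract lists:
# CGT insertions, deletions, copy-paste actions, and focus shifts.
@dataclass
class EditorEvent:
    session_id: str
    event_type: str   # e.g. "cgt_insertion", "deletion", "paste", "focus_shift"
    timestamp: str    # ISO-8601 UTC, assigned when the event is captured
    payload: dict     # event-specific data (text inserted, file name, ...)

def make_event(session_id: str, event_type: str, payload: dict) -> EditorEvent:
    """Structure and timestamp a raw editor event before posting it to the API."""
    return EditorEvent(
        session_id=session_id,
        event_type=event_type,
        timestamp=datetime.now(timezone.utc).isoformat(),
        payload=payload,
    )

def reconstruct_session(events: list[EditorEvent]) -> list[dict]:
    """Post-hoc reconstruction: order a session's events by timestamp."""
    return [asdict(e) for e in sorted(events, key=lambda e: e.timestamp)]

# Usage: two captured events, serialized as the JSON a REST endpoint might store.
events = [
    make_event("s1", "cgt_insertion", {"text": "def foo():", "file": "main.py"}),
    make_event("s1", "focus_shift", {"to": "browser"}),
]
print(json.dumps(reconstruct_session(events), indent=2))
```

In the real system the serialized record would be the body of an HTTP POST handled by the Python API and inserted into MongoDB; the sketch keeps only the structuring and ordering logic.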
Related papers
- LogicLens: Leveraging Semantic Code Graph to explore Multi Repository large systems [0.2519906683279152]
We introduce LogicLens, a reactive conversational agent that assists developers in exploring complex software systems. We present the architecture of the system, discuss emergent behaviors, and evaluate its effectiveness on real-world multi-repository scenarios.
arXiv Detail & Related papers (2026-01-15T15:35:23Z) - Empowering smart app development with SolidGPT: an edge-cloud hybrid AI agent framework [0.0]
SolidGPT is an open-source, edge-cloud hybrid developer assistant built on GitHub. It enables developers to interactively query and explore project structure. It generates PRDs, task breakdowns, boards, and even web app scaffolds.
arXiv Detail & Related papers (2025-12-09T06:34:28Z) - DeepAgent: A General Reasoning Agent with Scalable Toolsets [111.6384541877723]
DeepAgent is an end-to-end deep reasoning agent that performs autonomous thinking, tool discovery, and action execution. To address the challenges of long-horizon interactions, we introduce an autonomous memory folding mechanism that compresses past interactions into structured episodic, working, and tool memories. We develop an end-to-end reinforcement learning strategy, namely ToolPO, that leverages LLM-simulated APIs and applies tool-call advantage attribution to assign fine-grained credit to the tool invocation tokens.
arXiv Detail & Related papers (2025-10-24T16:24:01Z) - SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving [90.32201622392137]
We present SwingArena, a competitive evaluation framework for Large Language Models (LLMs). Unlike traditional static benchmarks, SwingArena models the collaborative process of software development by pairing LLMs as submitters, who generate patches, and reviewers, who create test cases and verify the patches through continuous integration (CI) pipelines.
arXiv Detail & Related papers (2025-05-29T18:28:02Z) - debug-gym: A Text-Based Environment for Interactive Debugging [55.11603087371956]
Large Language Models (LLMs) are increasingly relied upon for coding tasks. We posit that LLMs can benefit from the ability to interactively explore a codebase to gather the information relevant to their task. We present a textual environment, namely debug-gym, for developing LLM-based agents in an interactive coding setting.
arXiv Detail & Related papers (2025-03-27T14:43:28Z) - Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z) - CodexGraph: Bridging Large Language Models and Code Repositories via Code Graph Databases [13.733229886643041]
Large Language Models (LLMs) excel in stand-alone code tasks like HumanEval and MBPP, but struggle with handling entire code repositories.
Similarity-based retrieval often has low recall in complex tasks, while manual tools and APIs are typically task-specific and require expert knowledge.
We introduce CodexGraph, a system that integrates LLM agents with graph database interfaces extracted from code repositories.
arXiv Detail & Related papers (2024-08-07T17:13:59Z) - Context Composing for Full Line Code Completion [0.46040036610482665]
The paper describes our approach to context composing for the Transformer model that is the core of the feature's implementation.
We share our next steps to improve the feature and emphasize the importance of several research aspects in the area.
arXiv Detail & Related papers (2024-02-14T15:17:37Z) - Collaborative, Code-Proximal Dynamic Software Visualization within Code Editors [55.57032418885258]
This paper introduces the design and proof-of-concept implementation for a software visualization approach that can be embedded into code editors.
Our contribution differs from related work in that we use dynamic analysis of a software system's runtime behavior.
Our visualization approach enhances common remote pair programming tools and is collaboratively usable by employing shared code cities.
arXiv Detail & Related papers (2023-08-30T06:35:40Z) - Using an LLM to Help With Code Understanding [13.53616539787915]
Large language models (LLMs) are revolutionizing the process of writing code.
Our plugin queries OpenAI's GPT-3.5-turbo model with four high-level requests without the user having to write explicit prompts.
We evaluate this system in a user study with 32 participants, which confirms that using our plugin can aid task completion more than web search.
arXiv Detail & Related papers (2023-07-17T00:49:06Z) - Guiding Language Models of Code with Global Context using Monitors [17.05416012014561]
Language models of code (LMs) work well when the surrounding code provides sufficient context.
LMs suffer from limited awareness of such global context and end up hallucinating.
We propose monitor-guided decoding (MGD) where a monitor uses static analysis to guide the decoding.
arXiv Detail & Related papers (2023-06-19T08:13:50Z) - ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
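The ReACC entry above combines lexical copying with semantic retrieval. The lexical half of that idea can be illustrated with a self-contained toy sketch: retrieve the most lexically similar snippet from a code database and prepend it as extra context for the completion model. The function names and the token-set Jaccard measure are illustrative assumptions, not ReACC's actual retriever (which also uses trained dense retrieval).

```python
# Toy sketch of retrieval-augmented code completion in the spirit of ReACC:
# find the database snippet most lexically similar to the unfinished code,
# then prepend it so the completion model can copy from it.

def jaccard(a: str, b: str) -> float:
    """Lexical similarity between two code snippets (token-set Jaccard)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def retrieve(query: str, database: list[str]) -> str:
    """Return the snippet most similar to the unfinished code."""
    return max(database, key=lambda snippet: jaccard(query, snippet))

def build_prompt(unfinished: str, database: list[str]) -> str:
    """Prepend the retrieved snippet as extra context for the completion model."""
    return retrieve(unfinished, database) + "\n# ---\n" + unfinished

# Usage: the half-written `add` retrieves the existing `add` implementation.
db = ["def add(a, b): return a + b", "def mul(a, b): return a * b"]
print(build_prompt("def add(a, c): return a", db))
```

A production retriever would index a large corpus and combine this kind of sparse matching with dense embeddings; the sketch keeps only the retrieve-then-prepend structure.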
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.