Who Do You Think You Are? Creating RSE Personas from GitHub Interactions
- URL: http://arxiv.org/abs/2510.05390v1
- Date: Mon, 06 Oct 2025 21:35:05 GMT
- Title: Who Do You Think You Are? Creating RSE Personas from GitHub Interactions
- Authors: Felicity Anderson, Julien Sindt, Neil Chue Hong
- Abstract summary: We describe an approach combining software repository mining and data-driven personas applied to research software (RS) development. This allows individuals and RS project teams to understand their contributions, impact and repository dynamics. We demonstrate how the RSE personas method successfully characterises a sample of 115,174 repository contributors across 1,284 RS repositories on GitHub.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe data-driven RSE personas: an approach combining software repository mining and data-driven personas applied to research software (RS), an attempt to describe and identify common and rare patterns of Research Software Engineering (RSE) development. This allows individuals and RS project teams to understand their contributions, impact and repository dynamics - an important foundation for improving RSE. We evaluate the method on different patterns of collaborative interaction behaviours by contributors to mid-sized public RS repositories (those with 10-300 committers) on GitHub. We demonstrate how the RSE personas method successfully characterises a sample of 115,174 repository contributors across 1,284 RS repositories on GitHub, sampled from 42,284 candidate software repository records queried from Zenodo. We identify, name and summarise seven distinct personas from low to high interactivity: Ephemeral Contributor; Occasional Contributor; Project Organiser; Moderate Contributor; Low-Process Closer; Low-Coding Closer; and Active Contributor. This demonstrates that large datasets can be analysed despite difficulties of comparing software projects with different project management factors, research domains and contributor backgrounds.
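The abstract describes deriving personas by mining contributors' interaction data. As a rough illustration only, here is a minimal sketch of data-driven persona construction via k-means clustering over per-contributor activity counts; the feature set, toy data, and choice of k-means are assumptions, not the paper's exact method:

```python
# Sketch: clustering GitHub contributors into personas.
# Features and algorithm are assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-contributor interaction counts mined from a repository:
# [commits, issues_opened, issue_comments, pr_reviews]
X = np.array([
    [1, 0, 0, 0],      # drive-by contributor
    [3, 1, 2, 0],
    [40, 5, 60, 12],   # highly interactive contributor
    [0, 8, 30, 2],
    [15, 2, 5, 25],
    [2, 0, 1, 0],
    [60, 10, 80, 40],
])

# Standardise so no single activity type dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)

# The paper reports seven personas; with real data k=7 would be natural
# (k=3 here only because the toy sample is tiny).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
for contributor, persona in zip(X, kmeans.labels_):
    print(contributor, "-> persona", persona)
```

Named personas would then be assigned by inspecting each cluster's typical activity profile, from low to high interactivity.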
Related papers
- Why Authors and Maintainers Link (or Don't Link) Their PyPI Libraries to Code Repositories and Donation Platforms [83.16077040470975]
Metadata of libraries on the Python Package Index (PyPI) plays a critical role in supporting the transparency, trust, and sustainability of open-source libraries. This paper presents a large-scale empirical study combining two targeted surveys sent to 50,000 PyPI authors and maintainers. We analyze more than 1,400 responses using large language model (LLM)-based topic modeling to uncover key motivations and barriers related to linking repositories and donation platforms.
arXiv Detail & Related papers (2026-01-21T16:13:57Z)
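The paper above analyses survey free-text with LLM-based topic modeling. As a rough stand-in (not the authors' pipeline; model choice, data, and clustering steps are all assumptions), topics can be approximated by clustering sentence embeddings:

```python
# Sketch: grouping survey responses into topics via embeddings + clustering,
# a stand-in for the paper's LLM-based topic modeling; all details assumed.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

responses = [
    "I never linked my repo because I didn't know the metadata field existed.",
    "Linking to GitHub builds trust with users of my package.",
    "Donation links feel unprofessional for a small utility library.",
    "I keep the repository URL updated so users can report issues.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder
embeddings = model.encode(responses)

# Merge responses whose embeddings are close; each cluster ~ one topic.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(embeddings)
for text, topic in zip(responses, labels):
    print(topic, "|", text)
```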
- DRBench: A Realistic Benchmark for Enterprise Deep Research [81.49694432639406]
DRBench is a benchmark for evaluating AI agents on complex, open-ended deep research tasks in enterprise settings. We release 15 deep research tasks across 10 domains, such as Sales, Cybersecurity, and Compliance.
arXiv Detail & Related papers (2025-09-30T18:47:20Z)
- Benchmarking Deep Search over Heterogeneous Enterprise Data [73.55304268238474]
We present a new benchmark for evaluating a form of retrieval-augmented generation (RAG). This form of RAG requires source-aware, multi-hop reasoning over diverse, sparse, but related sources. We build it using a synthetic data pipeline that simulates business workflows across product planning, development, and support stages.
arXiv Detail & Related papers (2025-06-29T08:34:59Z)
- Mining Software Repositories for Expert Recommendation [3.481985817302898]
We propose an automated approach to assigning bugs to developers in large open-source software projects. In this way, we assist human bug triagers, who are responsible for finding the developer with the right expertise in a particular area to assign to a newly reported issue.
arXiv Detail & Related papers (2025-04-23T01:41:08Z)
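A common baseline for this kind of bug assignment matches a new issue against text from issues each developer previously resolved. A minimal sketch of that idea (the matching scheme and toy data are assumptions, not necessarily the paper's model):

```python
# Sketch: recommending a developer for a new bug report by text similarity
# to issues they previously resolved (baseline approach; details assumed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical history: concatenated text of issues each developer fixed.
developer_history = {
    "alice": "segfault in parser null pointer crash tokenizer",
    "bob": "CI pipeline fails docker image build caching",
    "carol": "documentation typo README badge broken link",
}

new_issue = "crash when tokenizer receives empty input"

names = list(developer_history)
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(developer_history.values()) + [new_issue])

# Last row is the new issue; rank developers by cosine similarity to it.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for name, score in sorted(zip(names, scores), key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```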
- SnipGen: A Mining Repository Framework for Evaluating LLMs for Code [51.07471575337676]
Large Language Models (LLMs) are trained on extensive datasets that include code repositories. Evaluating their effectiveness poses significant challenges due to the potential overlap between the datasets used for training and those employed for evaluation. We introduce SnipGen, a comprehensive repository mining framework designed to leverage prompt engineering across various downstream tasks for code generation.
arXiv Detail & Related papers (2025-02-10T21:28:15Z)
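SnipGen derives prompts from mined code. As a rough illustration of that general idea (the prompt template and example function are assumptions, not SnipGen's actual format), a mined function can be reduced to its signature and docstring to form a completion task:

```python
# Sketch: turning a mined function into a code-generation prompt by keeping
# only its signature and docstring (template assumed, not SnipGen's).
import ast

mined_source = '''
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
'''

func = ast.parse(mined_source).body[0]
docstring = ast.get_docstring(func)
params = ", ".join(a.arg for a in func.args.args)

prompt = (
    "Complete the following Python function.\n"
    f"def {func.name}({params}):\n"
    f'    """{docstring}"""\n'
)
print(prompt)
```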
- Ecosystem-wide influences on pull request decisions: insights from NPM [1.7205106391379021]
We collect a dataset of approximately 1.8 million pull requests and 2.1 million issues from 20,052 GitHub projects within the NPM ecosystem. We find that developers with ecosystem experience make different contributions than those without. We find that combining ecosystem-wide factors with features studied in previous work to predict the outcome of pull requests reaches an overall F1 score of 0.92.
arXiv Detail & Related papers (2024-10-04T13:14:39Z)
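The study above frames pull-request outcome prediction as a supervised classification task scored with F1. A minimal sketch of such a setup (feature names, synthetic data, and model choice are assumptions, not the paper's exact configuration):

```python
# Sketch: predicting pull request acceptance from ecosystem-level features
# (features, data, and model are assumptions, not the paper's setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: [author's ecosystem experience (PRs across NPM),
#                         project dependency count, PR size in lines]
X = rng.normal(size=(n, 3))
# Toy label loosely tied to experience, just to make the sketch runnable.
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, model.predict(X_te)), 2))
```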
- On the Creation of Representative Samples of Software Repositories [1.8599311233727087]
With the emergence of social coding platforms such as GitHub, researchers now have access to millions of software repositories to use as source data for their studies.
Current sampling methods are often based on random selection or rely on variables that may not be related to the research study.
We present a methodology for creating representative samples of software repositories, where such representativeness is properly aligned with both the characteristics of the population of repositories and the requirements of the empirical study.
arXiv Detail & Related papers (2024-10-01T12:41:15Z)
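One standard way to align a sample with population characteristics, in the spirit of the paper above, is stratified sampling. A minimal sketch (the stratification variable, population, and sample size are illustrative assumptions):

```python
# Sketch: stratified sampling of repositories so the sample mirrors the
# population's distribution over one characteristic (strata assumed).
import random
from collections import defaultdict

random.seed(0)
# Hypothetical population: (repo name, primary language) records.
population = [(f"repo{i}", lang)
              for i, lang in enumerate(["Python"] * 600 + ["C++"] * 300 + ["R"] * 100)]

strata = defaultdict(list)
for repo, lang in population:
    strata[lang].append(repo)

sample_size = 50
sample = []
for lang, repos in strata.items():
    # Allocate slots proportionally to each stratum's share of the population.
    k = round(sample_size * len(repos) / len(population))
    sample.extend(random.sample(repos, k))

print(len(sample), "repositories sampled")
```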
- SEART Data Hub: Streamlining Large-Scale Source Code Mining and Pre-Processing [13.717170962455526]
We present the SEART Data Hub, a web application that allows researchers to easily build and pre-process large-scale datasets featuring code mined from public GitHub repositories.
Through a simple web interface, researchers can specify a set of mining criteria as well as specific pre-processing steps they want to perform.
After submitting the request, the user will receive an email with a download link for the required dataset within a few hours.
arXiv Detail & Related papers (2024-09-27T11:42:19Z)
- CoIR: A Comprehensive Benchmark for Code Information Retrieval Models [52.61625841028781]
COIR (Code Information Retrieval Benchmark) is a robust and comprehensive benchmark designed to assess code retrieval capabilities. COIR comprises ten meticulously curated code datasets, spanning eight distinctive retrieval tasks across seven diverse domains. We evaluate nine widely used retrieval models using COIR, uncovering significant difficulties in performing code retrieval tasks even with state-of-the-art systems.
arXiv Detail & Related papers (2024-07-03T07:58:20Z)
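Retrieval benchmarks like the one above score models with ranking metrics. One standard choice is nDCG@k; a minimal self-contained sketch (the metric choice here is illustrative, not necessarily COIR's exact suite):

```python
# Sketch: nDCG@k, a standard metric for scoring ranked retrieval results
# (illustrative; not necessarily the benchmark's exact metric suite).
import math

def dcg(relevances):
    # Discounted cumulative gain: later ranks contribute less.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k):
    # Normalise by the DCG of an ideally ordered result list.
    ideal = sorted(ranked_relevances, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked_relevances[:k]) / denom if denom else 0.0

# Relevance of the top-5 documents a model returned for one code query.
print(round(ndcg_at_k([3, 0, 2, 0, 1], k=5), 3))
```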
- DiscoveryBench: Towards Data-Driven Discovery with Large Language Models [50.36636396660163]
We present DiscoveryBench, the first comprehensive benchmark that formalizes the multi-step process of data-driven discovery.
Our benchmark contains 264 tasks collected across 6 diverse domains, such as sociology and engineering.
Our benchmark, thus, illustrates the challenges in autonomous data-driven discovery and serves as a valuable resource for the community to make progress.
arXiv Detail & Related papers (2024-07-01T18:58:22Z)
- Alibaba LingmaAgent: Improving Automated Issue Resolution via Comprehensive Repository Exploration [64.19431011897515]
This paper presents Alibaba LingmaAgent, a novel Automated Software Engineering method designed to comprehensively understand and utilize whole software repositories for issue resolution. Our approach introduces a top-down method to condense critical repository information into a knowledge graph, reducing complexity, and employs a Monte Carlo tree search based strategy. In production deployment and evaluation at Alibaba Cloud, LingmaAgent automatically resolved 16.9% of in-house issues faced by development engineers, and solved 43.3% of problems after manual intervention.
arXiv Detail & Related papers (2024-06-03T15:20:06Z)
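The paper above condenses repository information into a knowledge graph. As a toy illustration only (the graph schema, node types, and traversal are assumptions, not LingmaAgent's actual design), one can extract file-to-function structure from a codebase:

```python
# Sketch: condensing a repository's Python files into a small knowledge graph
# of file -> function nodes (toy schema; not LingmaAgent's actual graph).
import ast
from pathlib import Path

import networkx as nx

graph = nx.DiGraph()
for path in Path(".").rglob("*.py"):
    graph.add_node(str(path), kind="file")
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError):
        continue  # skip unparsable files
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            name = f"{path}::{node.name}"
            graph.add_node(name, kind="function")
            graph.add_edge(str(path), name, relation="defines")

print(graph.number_of_nodes(), "nodes,", graph.number_of_edges(), "edges")
```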
- Looking for related discussions on GitHub Discussions [18.688096673390586]
GitHub Discussions is a native forum to facilitate collaborative discussions between users and members of communities hosted on the platform.
As GitHub Discussions resembles Programming Community-based Question Answering (PCQA) forums, it faces challenges similar to those faced by such environments.
While duplicate posts have the same content - and may be exact copies - near-duplicates share similar topics and information.
We propose an approach based on a Sentence-BERT pre-trained model: the RD-Detector.
arXiv Detail & Related papers (2022-06-23T20:41:33Z)
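The RD-Detector above builds on a Sentence-BERT pre-trained model. A minimal sketch of flagging related discussions by embedding similarity (the model name, threshold, and example posts are assumptions, not RD-Detector's settings):

```python
# Sketch: flagging near-duplicate discussions with Sentence-BERT embeddings
# (model choice and threshold are assumptions, not RD-Detector's settings).
from sentence_transformers import SentenceTransformer, util

posts = [
    "How do I configure the linter to ignore generated files?",
    "Is there a way to make the linter skip auto-generated code?",
    "What license should I pick for a research plugin?",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(posts, convert_to_tensor=True)
scores = util.cos_sim(embeddings, embeddings)

THRESHOLD = 0.8  # assumed cut-off for "related"
for i in range(len(posts)):
    for j in range(i + 1, len(posts)):
        sim = scores[i][j].item()
        if sim >= THRESHOLD:
            print(f"posts {i} and {j} look related (cos={sim:.2f})")
```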