Examining Ownership Models in Software Teams: A Systematic Literature Review and a Replication Study
- URL: http://arxiv.org/abs/2405.15665v1
- Date: Fri, 24 May 2024 16:03:22 GMT
- Title: Examining Ownership Models in Software Teams: A Systematic Literature Review and a Replication Study
- Authors: Umme Ayman Koana, Quang Hy Le, Shadikur Rahman, Chris Carlson, Francis Chew, Maleknaz Nayebi
- Abstract summary: We identify 79 relevant papers published between 2005 and 2022.
We develop a taxonomy of ownership artifacts based on type, owners, and degree of ownership.
- Score: 2.0891120283967264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective ownership of software artifacts, particularly code, is crucial for accountability, knowledge sharing, and code quality enhancement. Researchers have proposed models linking ownership of software artifacts with developer performance and code quality. Our study aims to systematically examine various ownership models and provide a structured literature overview. Conducting a systematic literature review, we identified 79 relevant papers published between 2005 and 2022. We developed a taxonomy of ownership artifacts based on type, owners, and degree of ownership, along with compiling modeling variables and analytics types used in each study. Additionally, we assessed the replication status of each study. As a result, we identified nine distinct software artifacts whose ownership has been discussed in the literature, with "Code" being the most frequently analyzed artifact. We found that only three papers (3.79%) provided code and data, whereas nine papers (11.4%) provided only data. Using our systematic literature review results, we replicated experiments on nine priority projects at Brightsquid. The company aimed to compare its code quality against ownership factors in other teams, so we conducted a replication study using their data. Unlike prior studies, we found no strong correlation between minor contributors and bug numbers. Surprisingly, we found no strong link between the total number of developers modifying a file and bug counts, contrasting previous findings. However, we observed a significant correlation between major contributors and bug counts, diverging from earlier research.
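A minimal sketch of how the ownership factors named in this abstract are typically computed and correlated with bug counts. The DataFrame columns, the toy numbers, and the 5% ownership-share threshold separating minor from major contributors are illustrative assumptions, not Brightsquid's data or the paper's exact protocol.

```python
import pandas as pd
from scipy.stats import spearmanr

# One row per (file, author): how many commits that author made to the file.
commits = pd.DataFrame({
    "file":      ["a.py", "a.py", "a.py", "b.py", "b.py",
                  "c.py", "c.py", "c.py", "c.py", "d.py"],
    "author":    ["d1", "d2", "d3", "d1", "d2", "d1", "d2", "d3", "d4", "d1"],
    "n_commits": [40, 8, 2, 30, 1, 10, 9, 8, 1, 25],
})
bugs = pd.Series({"a.py": 5, "b.py": 1, "c.py": 7, "d.py": 0}, name="bug_count")

# Each author's ownership share of a file = their commits / all commits to it.
commits["share"] = commits["n_commits"] / commits.groupby("file")["n_commits"].transform("sum")

metrics = commits.groupby("file").agg(
    total_devs=("author", "nunique"),
    minor=("share", lambda s: int((s < 0.05).sum())),   # contributors below 5%
    major=("share", lambda s: int((s >= 0.05).sum())),  # contributors at/above 5%
).join(bugs)

for col in ["total_devs", "minor", "major"]:
    rho, p = spearmanr(metrics[col], metrics["bug_count"])
    print(f"{col:>10} vs bugs: rho={rho:+.2f} (p={p:.2f})")
```

Spearman's rank correlation is a common choice in studies like this because bug counts are skewed and the relationship need not be linear.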
Related papers
- Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions [62.12545440385489]
Large language models (LLMs) have brought substantial advancements in text generation, but their potential for enhancing classification tasks remains underexplored.
We propose a framework for thoroughly investigating fine-tuning LLMs for classification, including both generation- and encoding-based approaches.
We instantiate this framework in edit intent classification (EIC), a challenging and underexplored classification task.
arXiv Detail & Related papers (2024-10-02T20:48:28Z)
- Code Ownership: The Principles, Differences, and Their Associations with Software Quality [6.123324869194196]
We investigate the differences in the commonly used ownership approximations in terms of the set of developers, the approximated code ownership values, and the expertise level.
We find that commit-based and line-based ownership approximations produce different sets of developers, different code ownership values, and different sets of major developers.
arXiv Detail & Related papers (2024-08-23T03:01:59Z)
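A hedged sketch of the contrast the abstract above describes: commit-based and line-based ownership approximations can disagree on who counts as a major developer of a file. The records and the 5% threshold are illustrative; real studies would derive these counts from `git log` and `git blame`.

```python
# Illustrative per-file records: (author, commits touching the file,
# lines currently attributed to the author by `git blame`).
records = [("alice", 12, 40), ("bob", 3, 300), ("carol", 1, 10)]

commit_total = sum(c for _, c, _ in records)
line_total   = sum(l for _, _, l in records)

commit_own = {a: c / commit_total for a, c, _ in records}
line_own   = {a: l / line_total for a, _, l in records}

# The set of "major" developers (>= 5% ownership) differs between the views:
print({a for a, v in commit_own.items() if v >= 0.05})  # {'alice', 'bob', 'carol'}
print({a for a, v in line_own.items() if v >= 0.05})    # {'alice', 'bob'}
```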
- AuthAttLyzer-V2: Unveiling Code Authorship Attribution using Enhanced Ensemble Learning Models & Generating Benchmark Dataset [0.0]
Source Code Authorship Attribution (SCAA) is crucial for software classification because it provides insights into the origin and behavior of software.
This paper presents AuthAttLyzer-V2, a new source code feature extractor for SCAA, focusing on lexical, semantic, syntactic, and N-gram features.
arXiv Detail & Related papers (2024-06-28T13:04:16Z)
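As one concrete illustration of the feature families listed above, character N-grams over raw source text paired with an ensemble classifier look roughly like the sketch below. The snippets, labels, and classifier settings are toy assumptions, not AuthAttLyzer-V2's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier

snippets = ["for i in range(n): total += i",
            "total = sum(range(n))",
            "idx=0\nwhile idx<n: total+=idx; idx+=1"]
authors  = ["a", "b", "a"]

# Character 3-grams capture stylistic habits: spacing, operators, naming.
vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
X = vec.fit_transform(snippets)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, authors)
print(clf.predict(vec.transform(["for j in range(m): acc += j"])))
```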
- Chain-of-Factors Paper-Reviewer Matching [32.86512592730291]
We propose a unified model for paper-reviewer matching that jointly considers semantic, topic, and citation factors.
We demonstrate the effectiveness of our proposed Chain-of-Factors model in comparison with state-of-the-art paper-reviewer matching methods and scientific pre-trained language models.
arXiv Detail & Related papers (2023-10-23T01:29:18Z)
- Analyzing Dataset Annotation Quality Management in the Wild [63.07224587146207]
Even popular datasets used to train and evaluate state-of-the-art models contain a non-negligible amount of erroneous annotations, biases, or artifacts.
While practices and guidelines regarding dataset creation projects exist, large-scale analysis has yet to be performed on how quality management is conducted.
arXiv Detail & Related papers (2023-07-16T21:22:40Z)
- Same or Different? Diff-Vectors for Authorship Analysis [78.83284164605473]
In "classic" authorship analysis a feature vector represents a document, the value of a feature represents (an increasing function of) the relative frequency of the feature in the document, and the class label represents the author of the document.
Our experiments tackle same-author verification, authorship verification, and closed-set authorship attribution; while DVs are naturally geared for solving the 1st, we also provide two novel methods for solving the 2nd and 3rd.
arXiv Detail & Related papers (2023-01-24T08:48:12Z)
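The diff-vector idea summarized above reduces pairwise authorship questions to ordinary binary classification. Below is a minimal sketch assuming simple bag-of-words features; the feature set and toy data are illustrative, not the paper's.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs    = ["the cat sat", "a cat sat down", "stocks fell sharply", "markets fell"]
authors = ["ann", "ann", "bob", "bob"]

X = CountVectorizer().fit_transform(docs).toarray().astype(float)

# Represent every document *pair* by the absolute difference of feature
# vectors; label 1 if the two documents share an author.
pairs, labels = [], []
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        pairs.append(np.abs(X[i] - X[j]))  # the diff-vector
        labels.append(int(authors[i] == authors[j]))

clf = LogisticRegression().fit(np.array(pairs), labels)
print(clf.predict([np.abs(X[0] - X[1])]))  # same-author pair -> expect 1
```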
- Cracking Double-Blind Review: Authorship Attribution with Deep Learning [43.483063713471935]
We propose a transformer-based neural-network architecture to attribute an anonymous manuscript to an author.
We leverage all research papers publicly available on arXiv amounting to over 2 million manuscripts.
Our method achieves an unprecedented authorship attribution accuracy, where up to 73% of papers are attributed correctly.
arXiv Detail & Related papers (2022-11-14T15:50:24Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LLMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Assaying Out-Of-Distribution Generalization in Transfer Learning [103.57862972967273]
We take a unified view of previous work, highlighting message discrepancies that we address empirically.
We fine-tune over 31k networks from nine different architectures in the many- and few-shot settings.
arXiv Detail & Related papers (2022-07-19T12:52:33Z)
- What's New? Summarizing Contributions in Scientific Literature [85.95906677964815]
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
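The disentanglement such a protocol reports can be illustrated with a simple n-gram overlap between the two generated summaries. The function below is a hypothetical stand-in, not the paper's actual metric definitions.

```python
def ngrams(text, n=2):
    """Return the set of word n-grams in a text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

contribution = "we introduce a taxonomy of ownership artifacts"
context = "prior work links ownership models to code quality"

c, x = ngrams(contribution), ngrams(context)
# Disentanglement as (one minus) Jaccard overlap: higher means the
# contribution and context summaries share less content.
disentanglement = 1 - len(c & x) / max(len(c | x), 1)
print(f"disentanglement = {disentanglement:.2f}")
```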
- CORAL: COde RepresentAtion Learning with Weakly-Supervised Transformers for Analyzing Data Analysis [33.190021245507445]
Large scale analysis of source code, and in particular scientific source code, holds the promise of better understanding the data science process.
We propose a novel weakly supervised transformer-based architecture for computing joint representations of code from both abstract syntax trees and surrounding natural language comments.
We show that our model, leveraging only easily-available weak supervision, achieves a 38% increase in accuracy over expert-supplied heuristics and outperforms a suite of baselines.
arXiv Detail & Related papers (2020-08-28T19:57:49Z)
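CORAL's two input signals, AST structure and surrounding natural-language comments, are straightforward to extract; the sketch below shows only that input side on an illustrative snippet and omits the weakly supervised transformer itself.

```python
import ast
import io
import tokenize

source = "# load the raw table\nimport pandas as pd\ndf = pd.read_csv('x.csv')\n"

# AST node types, in traversal order (the structural signal).
node_types = [type(n).__name__ for n in ast.walk(ast.parse(source))]

# Natural-language comments attached to the snippet (the textual signal).
comments = [tok.string
            for tok in tokenize.generate_tokens(io.StringIO(source).readline)
            if tok.type == tokenize.COMMENT]

print(node_types[:6], comments)
```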
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information shown and is not responsible for any consequences arising from its use.