How to Gain Commit Rights in Modern Top Open Source Communities?
- URL: http://arxiv.org/abs/2405.01803v3
- Date: Thu, 16 May 2024 10:16:20 GMT
- Title: How to Gain Commit Rights in Modern Top Open Source Communities?
- Authors: Xin Tan, Yan Gong, Geyu Huang, Haohua Wu, Li Zhang
- Abstract summary: We study the policies and practical implementations of committer qualifications in modern top OSS communities.
We construct a taxonomy of committer qualifications, consisting of 26 codes categorized into nine themes.
We find that the probability of gaining commit rights decreases as participation time passes.
- Score: 14.72524623433377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of open source software (OSS) projects relies on voluntary contributions from various community roles. Being a committer signifies gaining trust and higher privileges. Substantial research has focused on the requirements for becoming a committer, but most of it is based on interviews or a handful of hypotheses, lacking a comprehensive understanding of committers' qualifications. We explore both the policies and practical implementations of committer qualifications in modern top OSS communities. Through a thematic analysis of these policies, we construct a taxonomy of committer qualifications, consisting of 26 codes categorized into nine themes, including Personnel-related to Project, Communication, and Long-term Participation. We also highlight the variations in committer qualifications emphasized in different OSS community governance models. For example, projects following the core maintainer model value project comprehension, while projects following the company-backed model place significant emphasis on user issue resolution. We then propose eight sets of metrics and perform survival analysis on two representative OSS projects to understand how these qualifications are implemented in practice. We find that the probability of gaining commit rights decreases as participation time passes. The selection criteria in practice are generally consistent with the community policies. Developers who submit high-quality code, actively engage in code review, and make extensive contributions to related projects are more likely to be granted commit rights. However, some qualifications do not align precisely, and some are not adequately evaluated. This study contributes to the understanding of trust establishment in modern top OSS communities, assists communities in better allocating commit rights, and supports developers in achieving self-actualization through OSS participation.
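To make the survival-analysis setup concrete, the sketch below treats "gaining commit rights" as the event of interest and participation time as the duration, then checks which activity metrics correlate with earlier promotion. It is a minimal illustration: the synthetic data, column names, and the use of the `lifelines` library are assumptions for exposition, not the paper's actual eight metric sets or implementation.

```python
# Minimal sketch of survival analysis for time-to-commit-rights (illustrative only).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(42)
n = 200

# Hypothetical per-contributor activity metrics.
df = pd.DataFrame({
    "review_comments": rng.poisson(15, n),   # code-review participation
    "merged_patches": rng.poisson(6, n),     # accepted contributions
})

# Synthetic assumption: more active contributors tend to be promoted sooner.
hazard = 0.01 + 0.003 * df["review_comments"] + 0.008 * df["merged_patches"]
time_to_rights = rng.exponential(1.0 / hazard)      # months until promotion
observation_window = rng.uniform(6, 36, n)          # months of observed participation

df["months_active"] = np.minimum(time_to_rights, observation_window)
df["granted_rights"] = (time_to_rights <= observation_window).astype(int)  # 1 = event, 0 = censored

# Kaplan-Meier: probability of still *not* holding commit rights as participation time passes.
kmf = KaplanMeierFitter()
kmf.fit(df["months_active"], event_observed=df["granted_rights"])
print(kmf.survival_function_.tail())

# Cox proportional-hazards model: which activity metrics are associated with
# being granted commit rights earlier.
cph = CoxPHFitter()
cph.fit(df, duration_col="months_active", event_col="granted_rights")
cph.print_summary()
```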
Related papers
- Are We on the Same Page? Examining Developer Perception Alignment in Open Source Code Reviews [2.66269503676104]
Code reviews are a critical aspect of open-source software (OSS) development, ensuring quality and fostering collaboration.
This study examines perceptions, challenges, and biases in OSS code review processes, focusing on the perspectives of Contributors and maintainers.
arXiv Detail & Related papers (2025-04-25T15:03:39Z) - Bridging the Gap: A Comparative Study of Academic and Developer Approaches to Smart Contract Vulnerabilities [5.052062767357937]
We investigate the strategies adopted by Solidity developers to fix security vulnerabilities in smart contracts.
From non-aligned commits, we identified 27 novel fixing strategies not previously discussed in the literature.
These emerging patterns offer actionable solutions for securing smart contracts in underexplored areas.
arXiv Detail & Related papers (2025-04-16T19:20:00Z) - A Bot-based Approach to Manage Codes of Conduct in Open-Source Projects [0.3222802562733786]
We propose an approach to effectively manage codes of conduct in OSS projects based on the Contributor Covenant proposal.
Our solution has been implemented as a bot-based solution where bots help in the definition of codes of conduct, the monitoring of OSS projects, and the enforcement of ethical rules.
arXiv Detail & Related papers (2025-03-07T14:50:02Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI).
This paper explores potential areas where statisticians can make important contributions to the development of LLMs.
We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [333.9220561243189]
Generative Foundation Models (GenFMs) have emerged as transformative tools.
Their widespread adoption raises critical concerns regarding trustworthiness across dimensions.
This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z) - A Novel Psychometrics-Based Approach to Developing Professional Competency Benchmark for Large Language Models [0.0]
We propose a comprehensive approach to benchmark development based on rigorous psychometric principles.
We make the first attempt to illustrate this approach by creating a new benchmark in the field of pedagogy and education.
We construct a novel benchmark guided by the Bloom's taxonomy and rigorously designed by a consortium of education experts trained in test development.
arXiv Detail & Related papers (2024-10-29T19:32:43Z) - Benchmarking Large Language Models for Conversational Question Answering in Multi-instructional Documents [61.41316121093604]
We present InsCoQA, a novel benchmark for evaluating large language models (LLMs) in the context of conversational question answering (CQA).
Sourced from extensive, encyclopedia-style instructional content, InsCoQA assesses models on their ability to retrieve, interpret, and accurately summarize procedural guidance from multiple documents.
We also propose InsEval, an LLM-assisted evaluator that measures the integrity and accuracy of generated responses and procedural instructions.
arXiv Detail & Related papers (2024-10-01T09:10:00Z) - CROSS: A Contributor-Project Interaction Lifecycle Model for Open Source Software [2.9631016562930546]
CROSS is a novel contributor-project interaction lifecycle model for open source software.
It explains a range of archetypal cases of contributor engagement and highlights research gaps, especially in EoS/offboarding scenarios.
arXiv Detail & Related papers (2024-09-12T17:57:12Z) - Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives [54.14429346914995]
Chain-of-Thought (CoT) has become a pivotal method for solving complex problems.
Large language models (LLMs) often struggle to accurately decompose domain-specific tasks.
This paper introduces the Re-TASK framework, a novel theoretical model that revisits LLM tasks from the perspectives of capability, skill, and knowledge.
arXiv Detail & Related papers (2024-08-13T13:58:23Z) - The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z) - Code Ownership in Open-Source AI Software Security [18.779538756226298]
We use code ownership metrics to investigate the correlation with latent vulnerabilities across five prominent open-source AI software projects.
The findings suggest a positive relationship between high-level ownership (characterised by a limited number of minor contributors) and a decrease in vulnerabilities.
With these novel code ownership metrics, we have implemented a Python-based command-line application to aid project curators and quality assurance professionals in evaluating and benchmarking their on-site projects.
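To make the ownership metrics concrete, the short sketch below computes per-file ownership shares from a commit history and counts major versus minor contributors. The commit-count basis and the 5% "minor contributor" threshold follow a common convention from prior ownership studies; they are assumptions, not necessarily the exact definitions or tooling used by the paper's CLI application.

```python
# Illustrative file-level code-ownership metrics (assumed definitions, see note above).
from collections import Counter

def ownership_metrics(commit_authors, minor_threshold=0.05):
    """commit_authors: one author name per commit that touched a given file."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    shares = {author: n / total for author, n in counts.items()}
    return {
        "ownership": max(shares.values()),  # share of the single most active contributor
        "major_contributors": sum(1 for s in shares.values() if s >= minor_threshold),
        "minor_contributors": sum(1 for s in shares.values() if s < minor_threshold),
        "total_contributors": len(shares),
    }

# Example: a file touched by 34 commits from three developers.
print(ownership_metrics(["alice"] * 30 + ["bob"] * 3 + ["carol"]))
# -> top ownership ~0.88, two major contributors and one minor contributor
```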
arXiv Detail & Related papers (2023-12-18T00:37:29Z) - Unveiling Diversity: Empowering OSS Project Leaders with Community Diversity and Turnover Dashboards [51.67585198094836]
CommunityTapestry is a dynamic real-time community dashboard.
It presents key diversity and turnover signals that we identified from the literature.
It helped project leaders identify areas of improvement and gave them actionable information.
arXiv Detail & Related papers (2023-12-13T22:12:57Z) - Empowering Many, Biasing a Few: Generalist Credit Scoring through Large Language Models [53.620827459684094]
Large Language Models (LLMs) have great potential for credit scoring tasks, with strong generalization ability across multiple tasks.
We propose the first open-source comprehensive framework for exploring LLMs for credit scoring.
We then propose the first Credit and Risk Assessment Large Language Model (CALM) by instruction tuning, tailored to the nuanced demands of various financial risk assessment tasks.
arXiv Detail & Related papers (2023-10-01T03:50:34Z) - LiSum: Open Source Software License Summarization with Multi-Task Learning [16.521420821183995]
Open source software (OSS) licenses regulate the conditions under which users can reuse, modify, and distribute the software legally.
There exist various OSS licenses in the community, written in a formal language, which are typically long and complicated to understand.
Motivated by the user study and the fast growth of licenses in the community, we propose the first study towards automated license summarization.
arXiv Detail & Related papers (2023-09-10T16:43:51Z) - Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z) - Which contributions count? Analysis of attribution in open source [0.0]
We characterize contributor acknowledgment models in open source by analyzing thousands of projects.
We find that community-generated systems of contribution acknowledgment make work like idea generation or bug finding more visible.
arXiv Detail & Related papers (2021-03-19T20:14:40Z)