Are We on the Same Page? Examining Developer Perception Alignment in Open Source Code Reviews
- URL: http://arxiv.org/abs/2504.18407v1
- Date: Fri, 25 Apr 2025 15:03:39 GMT
- Title: Are We on the Same Page? Examining Developer Perception Alignment in Open Source Code Reviews
- Authors: Yoseph Berhanu Alebachew, Minhyuk Ko, Chris Brown
- Abstract summary: Code reviews are a critical aspect of open-source software (OSS) development, ensuring quality and fostering collaboration. This study examines perceptions, challenges, and biases in OSS code review processes, focusing on the perspectives of Contributors and Maintainers.
- Score: 2.66269503676104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Code reviews are a critical aspect of open-source software (OSS) development, ensuring quality and fostering collaboration. This study examines perceptions, challenges, and biases in OSS code review processes, focusing on the perspectives of Contributors and Maintainers. Through surveys (n=289), interviews (n=23), and repository analysis (n=81), we identify key areas of alignment and disparity. While both groups share common objectives, differences emerge in priorities, e.g., with Maintainers emphasizing alignment with project goals while Contributors overestimate the value of novelty. Bias, particularly familiarity bias, disproportionately affects underrepresented groups, discouraging participation and limiting community growth. Misinterpretation of approach differences as bias further complicates reviews. Our findings underscore the need for improved documentation, better tools, and automated solutions to address delays and enhance inclusivity. This work provides actionable strategies to promote fairness and sustain the long-term innovation of OSS ecosystems.
Related papers
- Automatic Bias Detection in Source Code Review [2.3480418671346164]
We propose a controlled experiment to detect potentially biased outcomes in code reviews by observing how reviewers interact with the code. We employ the "spotlight model of attention", a cognitive framework in which a reviewer's gaze is tracked to determine their focus areas on the review screen. We plan to analyze the sequence of gaze focus using advanced sequence modeling techniques, including Markov Models, Recurrent Neural Networks (RNNs), and Conditional Random Fields (CRFs).
arXiv Detail & Related papers (2025-04-25T16:01:52Z)
- Identifying Aspects in Peer Reviews [61.374437855024844]
We develop a data-driven schema for deriving fine-grained aspects from a corpus of peer reviews. We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
arXiv Detail & Related papers (2025-04-09T14:14:42Z)
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
The "Leave No One Behind" initiative urges us to address multiple and intersecting forms of inequality in accessing services, resources, and opportunities.
An increasing number of AI tools are applied to decision-making processes in various sectors such as health, energy, and housing.
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- How to Gain Commit Rights in Modern Top Open Source Communities? [14.72524623433377]
We study the policies and practical implementations of committer qualifications in modern top OSS communities.
We construct a taxonomy of committer qualifications, consisting of 26 codes categorized into nine themes.
We find that the probability of gaining commit rights decreases as a contributor's participation time increases.
arXiv Detail & Related papers (2024-05-03T01:23:06Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Supporting the Task-driven Skill Identification in Open Source Project Issue Tracking Systems [0.0]
We investigate the automatic labeling of open issues strategy to help the contributors to pick a task to contribute.
By identifying the required skills, we argue that contributor candidates can pick more suitable tasks.
We conducted quantitative studies to analyze the relevance of the labels in an experiment and to compare the strategies' relative importance.
arXiv Detail & Related papers (2022-11-02T14:17:22Z)
- Assaying Out-Of-Distribution Generalization in Transfer Learning [103.57862972967273]
We take a unified view of previous work, highlighting message discrepancies that we address empirically.
We fine-tune over 31k networks, from nine different architectures in the many- and few-shot setting.
arXiv Detail & Related papers (2022-07-19T12:52:33Z)
- Generative multitask learning mitigates target-causing confounding [61.21582323566118]
We propose a simple and scalable approach to causal representation learning for multitask learning.
The improvement comes from mitigating unobserved confounders that cause the targets, but not the input.
Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to prior probability shift.
arXiv Detail & Related papers (2022-02-08T20:42:14Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.