Nip it in the Bud: Moderation Strategies in Open Source Software Projects and the Role of Bots
- URL: http://arxiv.org/abs/2308.07427v1
- Date: Mon, 14 Aug 2023 19:42:51 GMT
- Title: Nip it in the Bud: Moderation Strategies in Open Source Software Projects and the Role of Bots
- Authors: Jane Hsieh, Joselyn Kim, Laura Dabbish, Haiyi Zhu
- Abstract summary: This study examines the various structures and norms that support community moderation in open source software projects.
We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance.
Our main contributions include a characterization of moderated content in OSS projects, moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks.
- Score: 17.02726827353919
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Much of our modern digital infrastructure relies critically upon open source
software. The communities responsible for building this cyberinfrastructure
require maintenance and moderation, which is often supported by volunteer
efforts. Moderation, as a non-technical form of labor, is a necessary but often
overlooked task that maintainers undertake to sustain the community around an
OSS project. This study examines the various structures and norms that support
community moderation, describes the strategies moderators use to mitigate
conflicts, and assesses how bots can play a role in assisting these processes.
We interviewed 14 practitioners to uncover existing moderation practices and
ways that automation can provide assistance. Our main contributions include a
characterization of moderated content in OSS projects, moderation techniques,
as well as perceptions of and recommendations for improving the automation of
moderation tasks. We hope that these findings will inform the implementation of
more effective moderation practices in open source communities.
Related papers
- WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks [85.95607119635102]
Large language models (LLMs) can mimic human-like intelligence.
WorkArena++ is designed to evaluate the planning, problem-solving, logical/arithmetic reasoning, retrieval, and contextual understanding abilities of web agents.
arXiv Detail & Related papers (2024-07-07T07:15:49Z)
- Unicorns Do Not Exist: Employing and Appreciating Community Managers in Open Source [0.0]
Despite playing a crucial role in maintaining open-source software, community managers are often overlooked.
We suggest methods to overcome this by stressing the need for the specialisation of roles.
Following these guidelines can allow this vital role to be treated with the transparency and respect that it deserves.
arXiv Detail & Related papers (2024-06-29T07:23:53Z)
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources [100.23208165760114]
Foundation model development attracts a rapidly expanding body of contributors, scientists, and applications.
To help shape responsible development practices, we introduce the Foundation Model Development Cheatsheet.
arXiv Detail & Related papers (2024-06-24T15:55:49Z)
- WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks? [83.19032025950986]
We study the use of large language model-based agents for interacting with software via web browsers.
WorkArena is a benchmark of 33 tasks based on the widely-used ServiceNow platform.
BrowserGym is an environment for the design and evaluation of such agents.
arXiv Detail & Related papers (2024-03-12T14:58:45Z)
- Charting a Path to Efficient Onboarding: The Role of Software Visualization [49.1574468325115]
The present study aims to explore the familiarity of managers, leaders, and developers with software visualization tools.
The study incorporated quantitative and qualitative analyses of data collected from practitioners through questionnaires and semi-structured interviews.
arXiv Detail & Related papers (2024-01-17T21:30:45Z)
- Toxicity Detection is NOT all you Need: Measuring the Gaps to Supporting Volunteer Content Moderators [19.401873797111662]
We conduct a model review on Hugging Face to reveal the availability of models to cover various moderation rules and guidelines.
We put state-of-the-art LLMs to the test, evaluating how well these models perform in flagging violations of platform rules from one particular forum.
Overall, we observe a non-trivial gap: dedicated models are missing for many rules, and LLMs exhibit moderate to low performance on a significant portion of the rules.
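As a rough illustration of the kind of off-the-shelf moderation tooling this paper surveys, the sketch below flags comments with a publicly hosted Hugging Face classifier. The model id (unitary/toxic-bert) and the 0.5 threshold are illustrative assumptions, not choices made by the paper, and toxicity is only one of the many rule categories at issue.

```python
# Minimal sketch: flag comments with an off-the-shelf Hugging Face classifier.
# Model id and threshold are illustrative assumptions, not the paper's setup.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="unitary/toxic-bert",  # assumed example model; any moderation model could be swapped in
    top_k=None,                  # ask for scores on every label
)

def flag_comment(text: str, threshold: float = 0.5) -> list[str]:
    """Return the labels whose score exceeds the (assumed) threshold."""
    scores = classifier(text)
    # Depending on the transformers version the result may be nested one level
    # deeper; normalize to a flat list of {"label", "score"} dicts.
    if scores and isinstance(scores[0], list):
        scores = scores[0]
    return [s["label"] for s in scores if s["score"] >= threshold]

if __name__ == "__main__":
    for comment in ["Thanks for the quick fix!",
                    "This patch is garbage and so are you."]:
        print(comment, "->", flag_comment(comment) or ["no flags"])
```

Rule categories beyond toxicity (the gap the title alludes to) would require different models or prompted LLMs rather than a single classifier like this.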
arXiv Detail & Related papers (2023-11-14T03:18:28Z)
- Enclosed Loops: How open source communities become datasets [2.4269101271105176]
Centralization in code hosting and package management in the 2010s created fundamental shifts in the social arrangements of open source ecosystems.
In this paper we examine Dependabot, Crater and Copilot as three nascent tools whose existence is predicated on centralized software at scale.
arXiv Detail & Related papers (2023-06-09T00:02:25Z)
- Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support [12.515485963557426]
The reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation.
We explore how automation could assist with this existing proactive moderation workflow by building a prototype tool.
arXiv Detail & Related papers (2022-11-29T19:00:02Z)
- Attracting and Retaining OSS Contributors with a Maintainer Dashboard [19.885747206499712]
We design a maintainer dashboard that provides recommendations on how to attract and retain open source contributors.
We conduct a project-specific evaluation with maintainers to better understand use cases in which this tool will be most helpful.
We distill our findings to share what the future of recommendations in open source looks like and how to make these recommendations most meaningful over time.
arXiv Detail & Related papers (2022-02-15T21:39:37Z)
- CoreDiag: Eliminating Redundancy in Constraint Sets [68.8204255655161]
We present a new algorithm that can be exploited for the determination of minimal cores (minimal non-redundant constraint sets).
The algorithm is especially useful for distributed knowledge engineering scenarios where the degree of redundancy can become high.
In order to show the applicability of our approach, we present an empirical study conducted with commercial configuration knowledge bases.
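For intuition, the sketch below spells out the notion of redundancy that minimal cores are built on: a constraint is redundant if every assignment satisfying the remaining constraints also satisfies it. This is a brute-force illustration over small finite domains, not the CoreDiag algorithm itself; the variables, domains, and constraints are made up for the example.

```python
# Brute-force illustration of constraint redundancy and a non-redundant core.
# This is NOT the CoreDiag algorithm; it only demonstrates the underlying notion.
from itertools import product
from typing import Callable, Dict, List

Assignment = Dict[str, int]
Constraint = Callable[[Assignment], bool]

def is_redundant(c: Constraint, others: List[Constraint],
                 domains: Dict[str, range]) -> bool:
    """c is redundant if every assignment satisfying all `others` also satisfies c."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        a = dict(zip(names, values))
        if all(o(a) for o in others) and not c(a):
            return False
    return True

def non_redundant_core(constraints: List[Constraint],
                       domains: Dict[str, range]) -> List[Constraint]:
    """Greedily drop redundant constraints; the result is one minimal core."""
    core = list(constraints)
    changed = True
    while changed:
        changed = False
        for c in list(core):
            rest = [o for o in core if o is not c]
            if is_redundant(c, rest, domains):
                core = rest
                changed = True
                break
    return core

if __name__ == "__main__":
    domains = {"x": range(0, 10), "y": range(0, 10)}
    constraints = [
        lambda a: a["x"] < 5,            # c1
        lambda a: a["y"] > 2,            # c2
        lambda a: a["x"] + a["y"] < 15,  # c3: implied by c1 and the domain of y
    ]
    print(len(non_redundant_core(constraints, domains)), "constraints remain")
```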
arXiv Detail & Related papers (2021-02-24T09:16:10Z)
- Online Learning Demands in Max-min Fairness [91.37280766977923]
We describe mechanisms for the allocation of a scarce resource among multiple users in a way that is efficient, fair, and strategy-proof.
The mechanism is repeated for multiple rounds and a user's requirements can change on each round.
At the end of each round, users provide feedback about the allocation they received, enabling the mechanism to learn user preferences over time.
arXiv Detail & Related papers (2020-12-15T22:15:20Z)
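For context on the entry above, the sketch below shows a single round of the classic max-min fair ("water-filling") allocation that such mechanisms build on. The capacity and demand values are assumed for illustration; the repeated rounds, feedback-driven learning, and strategy-proofness analysis of the paper are not reproduced here.

```python
# Minimal single-round max-min fair ("water-filling") allocation.
# Illustrates the fairness objective only; learning and incentives are omitted.
from typing import Dict

def max_min_fair(capacity: float, demands: Dict[str, float]) -> Dict[str, float]:
    """Allocate `capacity` so that no user can gain without a poorer user losing."""
    allocation = {user: 0.0 for user in demands}
    unsatisfied = {user: d for user, d in demands.items() if d > 0}
    remaining = capacity
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)           # equal split of what is left
        for user, demand in list(unsatisfied.items()):
            grant = min(share, demand)                 # never exceed the user's demand
            allocation[user] += grant
            remaining -= grant
            if grant >= demand - 1e-12:
                del unsatisfied[user]                  # fully satisfied users drop out
            else:
                unsatisfied[user] = demand - grant
    return allocation

if __name__ == "__main__":
    # Assumed demands for one round; in the paper's setting these are reported
    # each round and refined from user feedback.
    print(max_min_fair(10.0, {"a": 2.0, "b": 6.0, "c": 8.0}))
    # -> roughly {'a': 2.0, 'b': 4.0, 'c': 4.0}: a is fully satisfied,
    #    b and c split the remaining capacity equally.
```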
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.