Invisible Labor in Open Source Software Ecosystems
- URL: http://arxiv.org/abs/2401.06889v1
- Date: Fri, 12 Jan 2024 20:52:56 GMT
- Title: Invisible Labor in Open Source Software Ecosystems
- Authors: John Meluso, Amanda Casari, Katie McLaughlin, Milo Z. Trujillo
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Invisible labor is work that is not fully visible, not appropriately
compensated, or both. In open source software (OSS) ecosystems, essential tasks
that do not involve code (like content moderation) often become invisible to
the detriment of individuals and organizations. However, invisible labor is so
difficult to measure that we do not know how much OSS activity is
invisible. Our study addresses this challenge, demonstrating that roughly half
of OSS work is invisible. We do this by developing a survey technique with
cognitive anchoring that measures OSS developer self-assessments of labor
visibility and attribution. Survey respondents (n=142) reported that their work
is more likely to be nonvisible or partially visible (i.e. visible to at most 1
other person) than fully visible (i.e. visible to 2 or more people).
Furthermore, cognitively anchoring participants to the idea of high work
visibility increased perceptions of labor visibility and decreased visibility
importance compared to anchoring to low work visibility. This suggests that
advertising OSS activities as "open" may not make labor visible to most people,
but rather lead contributors to overestimate labor visibility. We therefore add
to a growing body of evidence that designing systems that recognize all kinds
of labor as legitimate contributions is likely to improve fairness in software
development while providing greater transparency into work designs that help
organizations and communities achieve their goals.
Related papers
- Infrared and Visible Image Fusion: From Data Compatibility to Task Adaption [65.06388526722186]
Infrared-visible image fusion is a critical task in computer vision.
There is a lack of recent comprehensive surveys that address this rapidly expanding domain.
We introduce a multi-dimensional framework to elucidate common learning-based IVIF methods.
arXiv Detail & Related papers (2025-01-18T13:17:34Z)
- Digital Labor and the Inconspicuous Production of Artificial Intelligence [0.0]
Digital platforms capitalize on users' labor, often disguising essential contributions as casual activities or consumption.
Despite playing a crucial role in driving AI development, such tasks remain largely unrecognized and undercompensated.
This chapter exposes the systemic devaluation of these activities in the digital economy.
arXiv Detail & Related papers (2024-10-08T11:07:42Z)
- A Mixed-Methods Study of Open-Source Software Maintainers On Vulnerability Management and Platform Security Features [6.814841205623832]
This paper investigates the perspectives of OSS maintainers on vulnerability management and platform security features.
We find that supply chain mistrust and a lack of automation are the most challenging aspects of vulnerability management.
Barriers to adopting platform security features include a lack of awareness and the perception that they are not necessary.
arXiv Detail & Related papers (2024-09-12T00:15:03Z)
- Unleashing Excellence through Inclusion: Navigating the Engagement-Performance Paradox [0.0]
People who feel that they do not belong (or their voice is not heard at work) commonly become disengaged, unproductive, and pessimistic.
This paper contributes to the literature on quality and performance management by developing a conceptual model of inclusion that directly impacts performance.
arXiv Detail & Related papers (2024-07-13T19:30:01Z)
- Evaluating Fairness in Large Vision-Language Models Across Diverse Demographic Attributes and Prompts [27.66626125248612]
We empirically investigate visual fairness in several mainstream large vision-language models (LVLMs).
Our fairness evaluation framework employs direct and single-choice question prompts on visual question-answering/classification tasks.
We propose a potential multi-modal Chain-of-thought (CoT) based strategy for bias mitigation, applicable to both open-source and closed-source LVLMs.
arXiv Detail & Related papers (2024-06-25T23:11:39Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Flexible social inference facilitates targeted social learning when rewards are not observable [58.762004496858836]
Groups coordinate more effectively when individuals are able to learn from others' successes.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
arXiv Detail & Related papers (2022-12-01T21:04:03Z)
- Harnessing Context for Budget-Limited Crowdsensing with Massive Uncertain Workers [26.835745787064337]
We propose a Context-Aware Worker Selection (CAWS) algorithm in this paper.
CAWS aims at maximizing the expected total sensing revenue efficiently with both budget constraint and capacity constraints respected.
arXiv Detail & Related papers (2021-07-03T09:09:07Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.