Invisible Labor in Open Source Software Ecosystems
- URL: http://arxiv.org/abs/2401.06889v1
- Date: Fri, 12 Jan 2024 20:52:56 GMT
- Title: Invisible Labor in Open Source Software Ecosystems
- Authors: John Meluso, Amanda Casari, Katie McLaughlin, Milo Z. Trujillo
- Abstract summary: Invisible labor is work that is not fully visible, not appropriately compensated, or both.
Our study shows that roughly half of open source software (OSS) work is invisible.
This suggests that advertising OSS activities as "open" may not make labor visible to most people, but rather lead contributors to overestimate labor visibility.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Invisible labor is work that is not fully visible, not appropriately
compensated, or both. In open source software (OSS) ecosystems, essential tasks
that do not involve code (like content moderation) often become invisible to
the detriment of individuals and organizations. However, invisible labor is so
difficult to measure that we do not know how much of OSS activities are
invisible. Our study addresses this challenge, demonstrating that roughly half
of OSS work is invisible. We do this by developing a survey technique with
cognitive anchoring that measures OSS developer self-assessments of labor
visibility and attribution. Survey respondents (n=142) reported that their work
is more likely to be nonvisible or partially visible (i.e. visible to at most 1
other person) than fully visible (i.e. visible to 2 or more people).
Furthermore, cognitively anchoring participants to the idea of high work
visibility increased perceptions of labor visibility and decreased visibility
importance compared to anchoring to low work visibility. This suggests that
advertising OSS activities as "open" may not make labor visible to most people,
but rather lead contributors to overestimate labor visibility. We therefore add
to a growing body of evidence that designing systems that recognize all kinds
of labor as legitimate contributions is likely to improve fairness in software
development while providing greater transparency into work designs that help
organizations and communities achieve their goals.
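The visibility categories defined in the abstract can be sketched in code. The following is a minimal illustration, not the authors' survey instrument: the thresholds follow the definitions above (nonvisible, partially visible meaning seen by at most 1 other person, fully visible meaning seen by 2 or more people), while the sample responses are hypothetical.

```python
# Illustrative sketch (hypothetical data): classify reported labor
# visibility using the thresholds from the abstract.
from collections import Counter

def visibility_category(n_viewers: int) -> str:
    """Map the number of other people who can see a task to a category."""
    if n_viewers == 0:
        return "nonvisible"        # visible to no one else
    elif n_viewers == 1:
        return "partially visible"  # visible to at most 1 other person
    return "fully visible"          # visible to 2 or more people

# Hypothetical responses: each number is how many other people a
# respondent reports can see one of their OSS tasks.
reported_viewers = [0, 1, 0, 3, 2, 1, 0, 5]
counts = Counter(visibility_category(n) for n in reported_viewers)
print(dict(counts))
```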
Related papers
- Invisible Labor: The Backbone of Open Source Software
Open source software (OSS) is software that is viewable, editable and shareable by anyone with internet access.
We interviewed OSS contributors and asked them about their invisible labor contributions, leadership departure, membership turnover and sustainability.
We found that invisible labor is responsible for good leadership, reducing contributor turnover, and creating legitimacy for the project as an organization.
arXiv Detail & Related papers (2025-03-17T17:34:45Z)
- Infrared and Visible Image Fusion: From Data Compatibility to Task Adaption
Infrared-visible image fusion is a critical task in computer vision.
There is a lack of recent comprehensive surveys that address this rapidly expanding domain.
We introduce a multi-dimensional framework to elucidate common learning-based IVIF methods.
arXiv Detail & Related papers (2025-01-18T13:17:34Z)
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs)
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- Digital Labor and the Inconspicuous Production of Artificial Intelligence
Digital platforms capitalize on users' labor, often disguising essential contributions as casual activities or consumption.
Despite playing a crucial role in driving AI development, such tasks remain largely unrecognized and undercompensated.
This chapter exposes the systemic devaluation of these activities in the digital economy.
arXiv Detail & Related papers (2024-10-08T11:07:42Z)
- A Mixed-Methods Study of Open-Source Software Maintainers On Vulnerability Management and Platform Security Features
This paper investigates the perspectives of OSS maintainers on vulnerability management and platform security features.
We find that supply chain mistrust and lack of automation for vulnerability management are the most challenging issues.
Barriers to adopting platform security features include a lack of awareness and the perception that they are not necessary.
arXiv Detail & Related papers (2024-09-12T00:15:03Z)
- Unleashing Excellence through Inclusion: Navigating the Engagement-Performance Paradox
People who feel that they do not belong, or that their voice is not heard at work, commonly become disengaged, unproductive, and pessimistic.
This paper contributes to the literature on quality and performance management by developing a conceptual model of inclusion that directly impacts performance.
arXiv Detail & Related papers (2024-07-13T19:30:01Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- AesBench: An Expert Benchmark for Multimodal Large Language Models on Image Aesthetics Perception
AesBench is an expert benchmark aiming to comprehensively evaluate the aesthetic perception capacities of MLLMs.
We construct an Expert-labeled Aesthetics Perception Database (EAPD), which features diversified image contents and high-quality annotations provided by professional aesthetic experts.
We propose a set of integrative criteria to measure the aesthetic perception abilities of MLLMs from four perspectives: Perception (AesP), Empathy (AesE), Assessment (AesA), and Interpretation (AesI).
arXiv Detail & Related papers (2024-01-16T10:58:07Z)
- Rethinking People Analytics With Inverse Transparency by Design
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Flexible social inference facilitates targeted social learning when rewards are not observable
Groups coordinate more effectively when individuals are able to learn from others' successes.
We suggest that social inference capacities may help bridge this gap, allowing individuals to update their beliefs about others' underlying knowledge and success from observable trajectories of behavior.
arXiv Detail & Related papers (2022-12-01T21:04:03Z)
- Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions
Deep neural networks (DNNs) have largely boosted their performances on many concrete tasks.
This paper presents the review of the recent advances with our suggestions on the new possible directions.
arXiv Detail & Related papers (2021-08-30T08:19:34Z)
- Harnessing Context for Budget-Limited Crowdsensing with Massive Uncertain Workers
We propose a Context-Aware Worker Selection (CAWS) algorithm in this paper.
CAWS aims at maximizing the expected total sensing revenue efficiently with both budget constraint and capacity constraints respected.
arXiv Detail & Related papers (2021-07-03T09:09:07Z)
- Trustworthy Transparency by Design
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire the future research of self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning?
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.