Understanding the Factors Influencing Self-Managed Enterprises of Crowdworkers: A Comprehensive Review
- URL: http://arxiv.org/abs/2403.12769v2
- Date: Wed, 20 Mar 2024 21:17:20 GMT
- Title: Understanding the Factors Influencing Self-Managed Enterprises of Crowdworkers: A Comprehensive Review
- Authors: Alexandre Prestes Uchoa, Daniel Schneider
- Abstract summary: This paper investigates the shift in crowdsourcing towards self-managed enterprises of crowdworkers (SMECs).
It reviews the literature to understand the foundational aspects of this shift, focusing on identifying key factors that may explain the rise of SMECs.
The study aims to guide future research and inform policy and platform development, emphasizing the importance of fair labor practices in this evolving landscape.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the shift in crowdsourcing towards self-managed enterprises of crowdworkers (SMECs), diverging from traditional platform-controlled models. It reviews the literature to understand the foundational aspects of this shift, focusing on identifying key factors that may explain the rise of SMECs, particularly concerning power dynamics and tensions between Online Labor Platforms (OLPs) and crowdworkers. The study aims to guide future research and inform policy and platform development, emphasizing the importance of fair labor practices in this evolving landscape.
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - The Imperative of Conversation Analysis in the Era of LLMs: A Survey of Tasks, Techniques, and Trends [64.99423243200296]
Conversation Analysis (CA) strives to uncover and analyze critical information from conversation data.
In this paper, we perform a thorough review and systematize CA tasks to summarize the existing related work.
We derive four key steps of CA: conversation scene reconstruction, in-depth attribution analysis, targeted training, and finally conversation generation.
arXiv Detail & Related papers (2024-09-21T16:52:43Z) - Rideshare Transparency: Translating Gig Worker Insights on AI Platform Design to Policy [8.936861276568006]
We characterize transparency-related harms, mitigation strategies, and worker needs.
Our findings expose a transparency gap between existing platform designs and the information drivers need.
New regulations requiring platforms to publish public transparency reports may be a more effective way to improve worker well-being.
arXiv Detail & Related papers (2024-06-16T00:46:49Z) - Position: Foundation Agents as the Paradigm Shift for Decision Making [24.555816843983003]
We advocate for the construction of foundation agents as a transformative shift in the learning paradigm of agents.
We specify the roadmap of foundation agents from large interactive data collection or generation to self-supervised pretraining and adaptation.
arXiv Detail & Related papers (2024-05-27T09:54:50Z) - Occupation Life Cycle [16.618743552104192]
This paper introduces the Occupation Life Cycle (OLC) model to explore the trajectory of occupations.
Using job posting data from one of China's largest recruitment platforms, we track the fluctuations and emerging trends in the labor market from 2018 to 2023.
Our findings offer a unique perspective on the interplay between occupational evolution and economic factors, with a particular focus on the Chinese labor market.
arXiv Detail & Related papers (2024-04-15T03:13:51Z) - Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Large language models (LLMs) can estimate causal effects under interventions on different parts of a system.
We conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention.
We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, and enable a study of intervention-based reasoning.
arXiv Detail & Related papers (2024-04-08T14:15:56Z) - LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
This survey underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - Investigating the Impact of Project Risks on Employee Turnover Intentions in the IT Industry of Pakistan [0.0]
This study investigates the influence of project risks in the IT industry on job satisfaction and turnover intentions.
It examines the role of both external and internal social links in shaping perceptions of job satisfaction.
arXiv Detail & Related papers (2024-03-09T11:06:49Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Assessing the Fairness of AI Systems: AI Practitioners' Processes, Challenges, and Needs for Support [18.148737010217953]
We conduct interviews and workshops with AI practitioners to identify practitioners' processes, challenges, and needs for support.
We find that practitioners face challenges when choosing performance metrics and when identifying the most relevant direct stakeholders and demographic groups.
We identify impacts on fairness work stemming from a lack of engagement with direct stakeholders, business imperatives that prioritize customers over marginalized groups, and the drive to deploy AI systems at scale.
arXiv Detail & Related papers (2021-12-10T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.