Making Data: The Work Behind Artificial Intelligence
- URL: http://arxiv.org/abs/2410.03694v1
- Date: Mon, 23 Sep 2024 08:12:24 GMT
- Title: Making Data: The Work Behind Artificial Intelligence
- Authors: Matheus Viana Braz, Paola Tubaro, Antonio A. Casilli
- Abstract summary: This paper documents the conditions of microwork in Brazil and offers portraits of the workers.
It is a step in the wider effort to overcome the current state of invisibilization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI generates both enthusiasm and disillusionment, with promises that often go unfulfilled. It is therefore not surprising that human labor, which is its fundamental component, is also subject to these same deceptions. The development of "smart technologies" depends, at different stages, on a multitude of precarious, underpaid and invisible workers, who, dispersed globally, carry out repetitive, fragmented activities, paid per task and completed in a few seconds. These are workers who label data to train algorithms, through tasks that require the intuitive, creative and cognitive abilities of human beings, such as categorizing images, classifying advertisements, transcribing audio and video, evaluating advertisements, moderating content on social media, labeling human anatomical points of interest, digitizing documents, etc. This form of work is often referred to as "microwork". Our contribution, which documents the conditions of microwork in Brazil and offers portraits of the workers, is a step in the wider effort to overcome the current state of invisibilization. It opens up avenues for future research, with the aim of better characterizing this new form of work, tracing its changes over time in relation to the dynamics of globalization and, ideally, identifying levers for action and transitions.
Related papers
- Digital Labor and the Inconspicuous Production of Artificial Intelligence [0.0]
Digital platforms capitalize on users' labor, often disguising essential contributions as casual activities or consumption.
Despite playing a crucial role in driving AI development, such tasks remain largely unrecognized and undercompensated.
This chapter exposes the systemic devaluation of these activities in the digital economy.
arXiv Detail & Related papers (2024-10-08T11:07:42Z)
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions, bridging the gap between explainable vision-language captioning and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-01-02T01:44:15Z)
- Detection of Machine-Generated Text: Literature Survey [0.0]
This literature survey compiles and synthesizes accomplishments and developments in the field of machine-generated text detection.
It also gives an overview of machine-generated text trends and explores the larger societal implications.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environment in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-07-02T15:33:31Z)
- RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot [56.130215236125224]
A key challenge in robotic manipulation in open domains is how to acquire diverse and generalizable skills for robots.
Recent research in one-shot imitation learning has shown promise in transferring trained policies to new tasks based on demonstrations.
This paper aims to unlock the potential for an agent to generalize to hundreds of real-world skills with multi-modal perception.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-03-26T17:27:13Z)
- SKID RAW: Skill Discovery from Raw Trajectories [23.871402375721285]
It is desirable to demonstrate only full task executions rather than each individual skill in isolation.
We propose a novel approach that simultaneously learns to segment trajectories into recurring patterns.
The approach also learns a skill conditioning that can be used to infer possible sequences of skills.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
- LEMMA: A Multi-view Dataset for Learning Multi-agent Multi-task Activities [119.88381048477854]
We introduce the LEMMA dataset to provide a single home for these missing dimensions, with meticulously designed settings.
We densely annotate atomic actions with human-object interactions to provide ground truths for the compositionality, scheduling, and assignment of daily activities.
We hope this effort will drive the machine vision community to examine goal-directed human activities and to further study task scheduling and assignment in the real world.
arXiv Detail & Related papers (2020-07-31T00:13:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.