Can Workers Meaningfully Consent to Workplace Wellbeing Technologies?
- URL: http://arxiv.org/abs/2303.07242v3
- Date: Fri, 19 May 2023 14:12:57 GMT
- Authors: Shreya Chowdhary, Anna Kawakami, Mary L. Gray, Jina Suh, Alexandra
Olteanu, Koustuv Saha
- Abstract summary: This paper unpacks the challenges workers face when consenting to workplace wellbeing technologies.
We show how workers are vulnerable to "meaningless" consent as they may be subject to power dynamics that minimize their ability to withhold consent.
To meaningfully consent, participants wanted changes to the technology and to the policies and practices surrounding the technology.
- Score: 65.15780777033109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sensing technologies deployed in the workplace can unobtrusively collect
detailed data about individual activities and group interactions that are
otherwise difficult to capture. A hopeful application of these technologies is
that they can help businesses and workers optimize productivity and wellbeing.
However, given the workplace's inherent and structural power dynamics, the
prevalent approach of accepting tacit compliance to monitor work activities
rather than seeking workers' meaningful consent raises privacy and ethical
concerns. This paper unpacks the challenges workers face when consenting to
workplace wellbeing technologies. Using a hypothetical case to prompt
reflection among six multi-stakeholder focus groups involving 15 participants,
we explored participants' expectations and capacity to consent to these
technologies. We sketched possible interventions that could better support
meaningful consent to workplace wellbeing technologies by drawing on critical
computing and feminist scholarship -- which reframes consent from a purely
individual choice to a structural condition experienced at the individual level
that needs to be freely given, reversible, informed, enthusiastic, and specific
(FRIES). The focus groups revealed how workers are vulnerable to "meaningless"
consent -- as they may be subject to power dynamics that minimize their ability
to withhold consent and may thus experience an erosion of autonomy, also
undermining the value of data gathered in the name of "wellbeing." To
meaningfully consent, participants wanted changes to the technology and to the
policies and practices surrounding the technology. Our mapping of what prevents
workers from meaningfully consenting to workplace wellbeing technologies
(challenges) and what they require to do so (interventions) illustrates how the
lack of meaningful consent is a structural problem requiring socio-technical
solutions.
Related papers
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- "Violation of my body:" Perceptions of AI-generated non-consensual (intimate) imagery [22.68931586977199]
AI technology has enabled the creation of deepfakes: hyper-realistic synthetic media.
We surveyed 315 individuals in the U.S. on their views regarding the hypothetical non-consensual creation of deepfakes depicting them.
arXiv Detail & Related papers (2024-06-08T16:57:20Z)
- The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Unveiling Technorelief: Enhancing Neurodiverse Collaboration with Media Capabilities [0.0]
The implications of collaboration on the cognitive, socio-affective experiences of autistic workers are poorly understood.
We ask how digital technologies alleviate autistic workers' experiences of their collaborative work environment.
The resulting "technorelief" enables autistic workers to tune into their perceptions and regain control of their collaborative experiences.
arXiv Detail & Related papers (2023-10-02T07:41:48Z)
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward [56.16884466478886]
This paper reviews emerging issues with opaque and uncontrollable AI systems.
It proposes an integrative framework called violet teaming to develop reliable and responsible AI.
Violet teaming emerged from AI safety research as a way to manage risks proactively by design.
arXiv Detail & Related papers (2023-08-28T02:10:38Z)
- Artificial Intelligence can facilitate selfish decisions by altering the appearance of interaction partners [2.3208437191245133]
We investigate the potential impact of blur filters, a type of appearance-altering technology, on individuals' behavior towards others.
Our findings consistently demonstrate a significant increase in selfish behavior directed towards individuals whose appearance is blurred.
These results emphasize the need for broader ethical discussions surrounding AI technologies that modify our perception of others.
arXiv Detail & Related papers (2023-06-07T14:53:12Z)
- Tensions Between the Proxies of Human Values in AI [20.303537771118048]
We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars.
We point towards sociotechnical research for frameworks for the latter, but push for broader efforts into implementing these in practice.
arXiv Detail & Related papers (2022-12-14T21:13:48Z)
- Ethics and Efficacy of Unsolicited Anti-Trafficking SMS Outreach [22.968179319673112]
We investigate the use, context, benefits, and harms of an anti-trafficking technology platform in North America.
Our findings illustrate misalignment between developers, users of the platform, and sex industry workers they are attempting to assist.
arXiv Detail & Related papers (2022-02-19T05:12:34Z)
- Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.