Who Wrote this? How Smart Replies Impact Language and Agency in the
Workplace
- URL: http://arxiv.org/abs/2210.06470v1
- Date: Fri, 7 Oct 2022 20:06:25 GMT
- Title: Who Wrote this? How Smart Replies Impact Language and Agency in the
Workplace
- Authors: Kilian Wenker
- Abstract summary: This study uses smart replies (SRs) to show how AI influences humans without any intent on the part of the developer.
I propose a loss-of-agency theory as a viable approach for studying the impact of AI on human agency.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-mediated communication is designed to help us do our work more quickly and
efficiently. But does it come at a cost? This study uses smart replies (SRs) to
show how AI influences humans without any intent on the part of the developer -
the very use of AI is sufficient. I propose a loss-of-agency theory as a viable
approach for studying the impact of AI on human agency. I use mixed methods
involving a crowdsourced experiment to test the theory and qualitative
interviews to elucidate non-use of AI. My quantitative results reveal that
machine agency affects the content we author and the behavior we generate. But
it is a non-zero-sum game.
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Capturing Humans' Mental Models of AI: An Item Response Theory Approach [12.129622383429597]
We show that people expect AI agents' performance to be significantly better on average than the performance of other humans.
arXiv Detail & Related papers (2023-05-15T23:17:26Z) - Navigates Like Me: Understanding How People Evaluate Human-Like AI in
Video Games [36.96985093527702]
We collect hundreds of crowd-sourced assessments comparing the human-likeness of navigation behavior generated by our agent and baseline AI agents.
Our proposed agent passes a Turing Test, while the baseline agents do not.
This work provides insights into the characteristics that people consider human-like in the context of goal-directed video game navigation.
arXiv Detail & Related papers (2023-03-02T18:59:04Z) - The Role of Heuristics and Biases During Complex Choices with an AI
Teammate [0.0]
We argue that classic experimental methods are insufficient for studying complex choices made with AI helpers.
We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes.
arXiv Detail & Related papers (2023-01-14T20:06:43Z) - On Avoiding Power-Seeking by Artificial Intelligence [93.9264437334683]
We do not know how to align a very intelligent AI agent's behavior with human interests.
I investigate whether we can build smart AI agents which have limited impact on the world, and which do not autonomously seek power.
arXiv Detail & Related papers (2022-06-23T16:56:21Z) - On the Influence of Explainable AI on Automation Bias [0.0]
We aim to shed light on the potential of explainable AI (XAI) to influence automation bias.
We conduct an online experiment on hotel review classifications and discuss first results.
arXiv Detail & Related papers (2022-04-19T12:54:23Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - Does the Whole Exceed its Parts? The Effect of AI Explanations on
Complementary Team Performance [44.730580857733]
Prior studies observed improvements from explanations only when the AI, alone, outperformed both the human and the best team.
We conduct mixed-method user studies on three datasets, where an AI with accuracy comparable to humans helps participants solve a task.
We find explanations increase the chance that humans will accept the AI's recommendation, regardless of its correctness.
arXiv Detail & Related papers (2020-06-26T03:34:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.