Artificial Intelligence (AI) and IT identity: Antecedents Identifying
with AI Applications
- URL: http://arxiv.org/abs/2005.12196v1
- Date: Fri, 15 May 2020 10:59:43 GMT
- Title: Artificial Intelligence (AI) and IT identity: Antecedents Identifying
with AI Applications
- Authors: Rasha Alahmad, Lionel Robert
- Abstract summary: Replacing managers with AI systems may have a negative impact on workers' outcomes.
It is unclear whether workers receive the same benefits from their relationships with AI systems.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the age of Artificial Intelligence and automation, machines have taken
over many key managerial tasks. Replacing managers with AI systems may have a
negative impact on workers' outcomes. It is unclear whether workers receive the same
benefits from their relationships with AI systems, raising the question: To what
degree does the relationship between AI systems and workers impact worker
outcomes? We draw on IT identity to understand the influence of identification
with AI systems on job performance. From this theoretical perspective, we
propose a research model and conduct a survey of 97 MTurk workers to test the
model. The findings reveal that work role identity and organizational identity
are key determinants of identification with AI systems. Furthermore, the
findings show that identification with AI systems does increase job
performance.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Towards the Terminator Economy: Assessing Job Exposure to AI through LLMs [10.844598404826355]
One-third of U.S. employment is highly exposed to AI, primarily in high-skill jobs.
This exposure correlates positively with employment and wage growth from 2019 to 2023.
arXiv Detail & Related papers (2024-07-27T08:14:18Z)
- Bending the Automation Bias Curve: A Study of Human and AI-based Decision Making in National Security Contexts [0.0]
We theorize about the relationship between background knowledge about AI, trust in AI, and how these interact with other factors to influence the probability of automation bias.
We test these in a preregistered task identification experiment across a representative sample of 9000 adults in 9 countries with varying levels of AI industries.
arXiv Detail & Related papers (2023-06-28T18:57:36Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- BIASeD: Bringing Irrationality into Automated System Design [12.754146668390828]
We claim that the future of human-machine collaboration will entail the development of AI systems that model, understand and possibly replicate human cognitive biases.
We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest and outline research directions for the design of AI systems that have a better understanding of our own biases.
arXiv Detail & Related papers (2022-10-01T02:52:38Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence [0.0]
We propose a new class of DSS, namely Intelligent Decision Assistance (IDA)
IDA supports knowledge workers without influencing them through automated decision-making.
Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations.
arXiv Detail & Related papers (2021-09-28T15:57:21Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- The human-AI relationship in decision-making: AI explanation to support people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences of its use.