Do We Need Explainable AI in Companies? Investigation of Challenges,
Expectations, and Chances from Employees' Perspective
- URL: http://arxiv.org/abs/2210.03527v2
- Date: Fri, 2 Jun 2023 12:36:44 GMT
- Title: Do We Need Explainable AI in Companies? Investigation of Challenges,
Expectations, and Chances from Employees' Perspective
- Authors: Katharina Weitz, Chi Tai Dang, Elisabeth André
- Abstract summary: Using AI poses new requirements for companies and their employees, including transparency and comprehensibility of AI systems.
The field of Explainable AI (XAI) aims to address these issues.
This project report paper provides insights into employees' needs and attitudes towards (X)AI.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Companies' adoption of artificial intelligence (AI) is increasingly becoming
an essential element of business success. However, using AI poses new
requirements for companies and their employees, including transparency and
comprehensibility of AI systems. The field of Explainable AI (XAI) aims to
address these issues. Yet, the current research primarily consists of
laboratory studies, and there is a need to improve the applicability of the
findings to real-world situations. Therefore, this project report paper
provides insights into employees' needs and attitudes towards (X)AI. For this,
we investigate employees' perspectives on (X)AI. Our findings suggest that AI
and XAI are well-known terms perceived as important for employees. This
recognition is a critical first step for XAI to potentially drive successful
usage of AI by providing comprehensible insights into AI technologies. In a
lessons-learned section, we discuss the open questions identified and suggest
future research directions to develop human-centered XAI designs for companies.
By providing insights into employees' needs and attitudes towards (X)AI, our
project report contributes to the development of XAI solutions that meet the
requirements of companies and their employees, ultimately driving the
successful adoption of AI technologies in the business context.
Related papers
- Improving Health Professionals' Onboarding with AI and XAI for Trustworthy Human-AI Collaborative Decision Making
We present the findings of semi-structured interviews with health professionals and students majoring in medicine and health.
For the interviews, we built upon human-AI interaction guidelines to create materials of an AI system for stroke rehabilitation assessment.
Our findings reveal that beyond presenting traditional performance metrics on AI, participants desired benchmark information.
arXiv Detail & Related papers (2024-05-26T04:30:17Z)
- The Ethics of Advanced AI Assistants
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges
Artificial Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis, and automated actions.
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Building AI Innovation Labs together with Companies
In the future, most companies will be confronted with the topic of Artificial Intelligence (AI) and will have to decide on their strategy.
One of the biggest challenges lies in coming up with innovative solution ideas with a clear business value.
This requires business competencies on the one hand and technical competencies in AI and data analytics on the other.
arXiv Detail & Related papers (2022-03-16T08:45:52Z)
- Towards Explainable Artificial Intelligence in Banking and Financial Services
We study and analyze the recent work done in Explainable Artificial Intelligence (XAI) methods and tools.
We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance.
We develop a digital dashboard to facilitate interacting with the algorithm results.
arXiv Detail & Related papers (2021-12-14T08:02:13Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines potential ways in which generative artworks can address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Designer-User Communication for XAI: An epistemological approach to discuss XAI design
We take the Signifying Message as our conceptual tool to structure and discuss XAI scenarios.
We experiment with its use for the discussion of a healthcare AI-System.
arXiv Detail & Related papers (2021-05-17T13:18:57Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.