Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review
- URL: http://arxiv.org/abs/2310.03392v2
- Date: Mon, 5 Aug 2024 07:55:50 GMT
- Title: Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review
- Authors: Tita A. Bach, Jenny K. Kristiansen, Aleksandar Babic, Alon Jacovi
- Abstract summary: No single term is used across the literature to describe HAII.
HAII is most often measured with user-related subjective metrics.
Researchers and developers need to codify HAII terminology.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring quality human-AI interaction (HAII) in safety-critical industries is essential. Failure to do so can lead to catastrophic and deadly consequences. Despite this urgency, existing research on HAII is limited, fragmented, and inconsistent. We present here a survey of that literature and recommendations for research best practices that should improve the field. We divided our investigation into the following areas: 1) terms used to describe HAII, 2) primary roles of AI-enabled systems, 3) factors that influence HAII, and 4) how HAII is measured. Additionally, we described the capabilities and maturity of the AI-enabled systems used in safety-critical industries discussed in these articles. We found that no single term is used across the literature to describe HAII and that some terms have multiple meanings. According to the reviewed literature, seven factors influence HAII: user characteristics (e.g., user personality), user perceptions and attitudes (e.g., user biases), user expectations and experience (e.g., mismatched user expectations and experience), AI interface and features (e.g., interactive design), AI output (e.g., perceived accuracy), explainability and interpretability (e.g., level of detail, user understanding), and usage of AI (e.g., heterogeneity of environments). HAII is most often measured with user-related subjective metrics (e.g., user perceptions, trust, and attitudes), and AI-assisted decision-making is the most common primary role of AI-enabled systems. Based on this review, we conclude that there are substantial research gaps in HAII. Researchers and developers need to codify HAII terminology, involve users throughout the AI lifecycle (especially during development), and tailor HAII in safety-critical industries to the users and environments.
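The review's coding dimensions (primary AI role, influencing factors, measurement metrics) map naturally onto a simple data structure. Below is a minimal, purely illustrative sketch in Python; the class and field names are our assumptions for illustration, not the authors' actual coding instrument, though the seven factor labels come from the abstract.

```python
from dataclasses import dataclass, field
from enum import Enum

class HaiiFactor(Enum):
    """The seven factors the review found to influence HAII."""
    USER_CHARACTERISTICS = "user characteristics"
    USER_PERCEPTIONS_AND_ATTITUDES = "user perceptions and attitudes"
    USER_EXPECTATIONS_AND_EXPERIENCE = "user expectations and experience"
    AI_INTERFACE_AND_FEATURES = "AI interface and features"
    AI_OUTPUT = "AI output"
    EXPLAINABILITY_AND_INTERPRETABILITY = "explainability and interpretability"
    USAGE_OF_AI = "usage of AI"

@dataclass
class CodedArticle:
    """One reviewed article coded along the survey's dimensions (hypothetical schema)."""
    title: str
    primary_ai_role: str                                     # e.g. "AI-assisted decision-making"
    factors: set = field(default_factory=set)                # subset of HaiiFactor
    subjective_metrics: list = field(default_factory=list)   # e.g. trust or perception ratings

# Example record matching the review's most common pattern:
# AI-assisted decision-making measured with subjective metrics.
example = CodedArticle(
    title="Hypothetical clinical decision-support study",
    primary_ai_role="AI-assisted decision-making",
    factors={HaiiFactor.AI_OUTPUT, HaiiFactor.EXPLAINABILITY_AND_INTERPRETABILITY},
    subjective_metrics=["user trust", "perceived accuracy"],
)
print(sorted(f.value for f in example.factors))
```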
Related papers
- Human-AI Interaction and User Satisfaction: Empirical Evidence from Online Reviews of AI Products
This study analyzes over 100,000 user reviews of AI-related products from G2, a leading review platform for business software and services.
We identify seven core HAI dimensions and examine their coverage and sentiment within the reviews.
We find that the sentiment on four HAI dimensions (adaptability, customization, error recovery, and security) is positively associated with overall user satisfaction.
arXiv Detail & Related papers (2025-03-23T06:16:49Z)
- SoK: On the Offensive Potential of AI
A growing body of evidence shows that AI is also used for offensive purposes.
No extant work has been able to draw a holistic picture of the offensive potential of AI.
arXiv Detail & Related papers (2024-12-24T14:02:44Z)
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We present a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Predictable Artificial Intelligence
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
We argue that XAI evaluation is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk.
This work aims to advance the field of Requirements Engineering for AI.
arXiv Detail & Related papers (2023-07-26T15:15:44Z)
- AI Usage Cards: Responsibly Reporting AI-generated Content
Given that AI systems like ChatGPT can generate content that is indistinguishable from human-made work, the responsible use of this technology is a growing concern.
First, we propose a three-dimensional model consisting of transparency, integrity, and accountability to define the responsible use of AI.
Second, we introduce "AI Usage Cards", a standardized way to report the use of AI in scientific research.
arXiv Detail & Related papers (2023-02-16T08:41:31Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
We conduct a mixed-methods study of how two different groups (people with and without an AI background) perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- LioNets: A Neural-Specific Local Interpretation Technique Exploiting Penultimate Layer Information
Interpretable machine learning (IML) is an urgent topic of research.
This paper focuses on a local-based, neural-specific interpretation process applied to textual and time-series data.
arXiv Detail & Related papers (2021-04-13T09:39:33Z)
- The Impact of Explanations on AI Competency Prediction in VQA
We evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).
We introduce an explainable VQA system that uses spatial and object features and is powered by the BERT language model.
arXiv Detail & Related papers (2020-07-02T06:11:28Z)
- The Threats of Artificial Intelligence Scale (TAI). Development, Measurement and Test Over Three Application Domains
Opinion polls frequently query public fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale as well as the internal consistency and factorial validity of the indicators (a sketch of a standard internal-consistency check follows this list).
arXiv Detail & Related papers (2020-06-12T14:15:02Z)
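On the TAI entry above: "internal consistency" is conventionally assessed with Cronbach's alpha. The sketch below computes that statistic on synthetic Likert-style responses; it is a generic illustration of the concept, not the TAI authors' actual analysis, and the item data are made up.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Synthetic 5-point Likert responses to four hypothetical threat-perception items,
# driven by one shared latent factor so the items correlate.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
noise = rng.normal(scale=0.7, size=(200, 4))
responses = np.clip(np.round(3 + latent + noise), 1, 5)

print(f"alpha = {cronbach_alpha(responses):.2f}")  # values near or above 0.8 suggest good consistency
```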