AI-in-the-Loop -- The impact of HMI in AI-based Application
- URL: http://arxiv.org/abs/2303.11508v1
- Date: Tue, 21 Mar 2023 00:04:33 GMT
- Title: AI-in-the-Loop -- The impact of HMI in AI-based Application
- Authors: Julius Schöning and Clemens Westerkamp
- Abstract summary: By enabling HMI during an AI model's inference, we introduce the AI-in-the-loop concept, which combines the strengths of AI and humans.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) and human-machine interaction (HMI) are two
keywords that usually do not fit embedded applications. Within the steps needed
before applying AI to solve a specific task, HMI is usually missing during the
AI architecture design and the training of an AI model. The human-in-the-loop
concept is prevalent in all other steps of developing AI, from data analysis
via data selection and cleaning to performance evaluation. During AI
architecture design, HMI can immediately highlight unproductive layers of the
architecture so that lightweight network architecture for embedded applications
can be created easily. We show that by using this HMI, users can instantly
distinguish which AI architecture should be trained and evaluated first since a
high accuracy on the task could be expected. This approach reduces the
resources needed for AI development by avoiding training and evaluating AI
architectures with unproductive layers and leads to lightweight AI
architectures. These resulting lightweight AI architectures will enable HMI
while running the AI on an edge device. By enabling HMI during an AI model's
inference, we introduce the AI-in-the-loop concept, which combines the
strengths of AI and humans. In our AI-in-the-loop approach, the AI remains the
workhorse and primarily solves the task. If the AI is unsure whether its
inference solves the task correctly, it asks the user for input through an
appropriate HMI.
Consequently, AI will become available in many applications soon since HMI will
make AI more reliable and explainable.
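The deferral mechanism described in the abstract can be sketched as a confidence-gated fallback: the AI handles inference on its own and consults the user through an HMI callback only when its confidence drops below a threshold. This is a minimal illustrative sketch, not the paper's implementation; the names `classify`, `ask_user`, and `CONFIDENCE_THRESHOLD` (and its value) are assumptions for the example.

```python
from typing import Callable, Tuple

# Assumed cutoff for illustration; the paper does not fix a value.
CONFIDENCE_THRESHOLD = 0.8

def classify(x: float) -> Tuple[str, float]:
    """Stand-in for an embedded AI model: returns (label, confidence)."""
    return ("positive", 0.95) if x > 0.5 else ("negative", 0.6)

def ai_in_the_loop(x: float, ask_user: Callable[[float], str]) -> str:
    """The AI remains the workhorse; the human is consulted only when needed."""
    label, confidence = classify(x)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label        # AI is confident: no interaction required
    return ask_user(x)      # AI is unsure: defer to the user via the HMI

# Usage: a trivial HMI stub standing in for a real user prompt.
print(ai_in_the_loop(0.9, lambda x: "negative"))  # confident path
print(ai_in_the_loop(0.2, lambda x: "negative"))  # deferred to the user
```

On an edge device, `ask_user` would be backed by whatever HMI the application provides (touchscreen prompt, button, voice), which is what makes the lightweight architectures discussed above a prerequisite for this loop.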
Related papers
- The Model Mastery Lifecycle: A Framework for Designing Human-AI Interaction [0.0]
The utilization of AI in an increasing number of fields is the latest iteration of a long process.
There is an urgent need for methods to determine how AI should be used in different situations.
arXiv Detail & Related papers (2024-08-23T01:00:32Z)
- Bootstrapping Developmental AIs: From Simple Competences to Intelligent Human-Compatible AIs [0.0]
The mainstream AI approaches are generative deep learning with large language models (LLMs) and the manually constructed symbolic approach.
This position paper lays out the prospects, gaps, and challenges for extending the practice of developmental AIs to create resilient, intelligent, and human-compatible AIs.
arXiv Detail & Related papers (2023-08-08T21:14:21Z) - LeanAI: A method for AEC practitioners to effectively plan AI
implementations [1.213096549055645]
Despite the enthusiasm regarding the use of AI, 85% of current big data projects fail.
One of the main reasons for AI project failures in the AEC industry is the disconnect between those who plan or decide to use AI and those who implement it.
This work introduces the LeanAI method, which delineates what AI should solve, what it can solve, and what it will solve.
arXiv Detail & Related papers (2023-06-29T09:18:11Z) - Prompt Sapper: LLM-Empowered Software Engineering Infrastructure for
AI-Native Services [37.05145017386908]
Prompt Sapper is committed to supporting the development of AI-native services through AI chain engineering.
It creates a large language model (LLM) empowered software engineering infrastructure for authoring AI chains through human-AI collaborative intelligence.
This article will introduce the R&D motivation behind Prompt Sapper, along with its corresponding AI chain engineering methodology and technical practices.
arXiv Detail & Related papers (2023-06-04T01:47:42Z) - End-User Development for Artificial Intelligence: A Systematic
Literature Review [2.347942013388615]
End-User Development (EUD) can allow people to create, customize, or adapt AI-based systems to their own needs.
This paper presents a literature review that aims to shed light on the current landscape of EUD for AI systems.
arXiv Detail & Related papers (2023-04-14T09:57:36Z) - HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging
Face [85.25054021362232]
Large language models (LLMs) have exhibited exceptional abilities in language understanding, generation, interaction, and reasoning.
LLMs could act as a controller to manage existing AI models to solve complicated AI tasks.
We present HuggingGPT, an LLM-powered agent that connects various AI models in machine learning communities.
arXiv Detail & Related papers (2023-03-30T17:48:28Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.