An Interactive Explanatory AI System for Industrial Quality Control
- URL: http://arxiv.org/abs/2203.09181v1
- Date: Thu, 17 Mar 2022 09:04:46 GMT
- Title: An Interactive Explanatory AI System for Industrial Quality Control
- Authors: Dennis Müller, Michael März, Stephan Scheele, Ute Schmid
- Abstract summary: We aim to extend the defect detection task towards an interactive human-in-the-loop approach.
We propose an approach for an interactive support system for classifications in an industrial quality control setting.
- Score: 0.8889304968879161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning based image classification algorithms, such as deep neural
network approaches, will be increasingly employed in critical settings such as
quality control in industry, where transparency and comprehensibility of
decisions are crucial. Therefore, we aim to extend the defect detection task
towards an interactive human-in-the-loop approach that allows us to integrate
rich background knowledge and the inference of complex relationships going
beyond traditional purely data-driven approaches. We propose an approach for an
interactive support system for classifications in an industrial quality control
setting that combines the advantages of both (explainable) knowledge-driven and
data-driven machine learning methods, in particular inductive logic programming
and convolutional neural networks, with human expertise and control. The
resulting system can assist domain experts with decisions, provide transparent
explanations for results, and integrate feedback from users, thus reducing
the workload for humans while respecting their expertise and preserving
their agency and accountability.
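
The abstract describes a concrete division of labor: a convolutional network handles perception, inductive logic programming supplies rule-based, human-readable explanations, and the domain expert retains the final decision. The paper itself provides no code; the following Python sketch only illustrates that loop, and every name in it (ilp_explain, review_loop, the rule format) is hypothetical, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Explanation:
    label: str               # e.g. "defect" / "no_defect"
    fired_rules: List[str]   # human-readable rules supporting the label

def ilp_explain(features: Dict[str, float],
                rules: List[Tuple[str, Callable[[Dict[str, float]], bool]]]) -> Explanation:
    """Apply symbolic rules (e.g. learned via ILP) to extracted features."""
    fired = [name for name, holds in rules if holds(features)]
    return Explanation("defect" if fired else "no_defect", fired)

def review_loop(images, cnn_predict, extract_features, rules, ask_expert):
    """Classify parts, explain each verdict, and collect expert corrections."""
    corrections = []
    for img in images:
        score = cnn_predict(img)                          # data-driven prediction
        expl = ilp_explain(extract_features(img), rules)  # knowledge-driven explanation
        verdict = ask_expert(img, score, expl)            # expert keeps final authority
        if verdict != expl.label:                         # disagreement becomes feedback
            corrections.append((img, verdict))            # for retraining / rule revision
    return corrections
```

The point of the split is that the network supplies perceptual evidence while the rules carry the auditable reasoning, so the expert sees why a part was flagged, not just a score.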
Related papers
- Analyzing Operator States and the Impact of AI-Enhanced Decision Support in Control Rooms: A Human-in-the-Loop Specialized Reinforcement Learning Framework for Intervention Strategies [0.9378955659006951]
In complex industrial and chemical process control rooms, effective decision-making is crucial for safety and efficiency.
The experiments in this paper evaluate the impact and applications of an AI-based decision support system integrated into an improved human-machine interface.
arXiv Detail & Related papers (2024-02-20T18:31:27Z)
- Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability [0.0]
As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
We propose a design for user-centered, compliant-by-design transparency in AI systems.
By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
arXiv Detail & Related papers (2023-10-13T04:25:30Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
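
A common way to make the tree-based interpretability idea in the entry above concrete is to distill a policy's decisions into a shallow decision tree and read the split conditions off directly. The sketch below does that with a stand-in policy; it is an illustrative assumption, not the paper's method, which goes further and derives disentanglement metrics from such trees.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
obs = rng.normal(size=(5000, 4))                          # fake observations
actions = (obs[:, 0] + 0.5 * obs[:, 2] > 0).astype(int)   # stand-in "neural policy"

# Fit a shallow surrogate tree to the policy's behavior and print its rules.
tree = DecisionTreeClassifier(max_depth=3).fit(obs, actions)
print(export_text(tree, feature_names=["x0", "x1", "x2", "x3"]))
```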
- Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks [107.8565143456161]
We investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks.
Results show that synergy increases as neural networks learn multiple diverse tasks.
Randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness.
arXiv Detail & Related papers (2022-10-06T15:36:27Z)
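
The dropout finding above rests on a simple mechanism: if any unit can be silenced during training, no single unit can be the sole carrier of a feature, so information gets duplicated across units. A minimal inverted-dropout sketch (NumPy; details assumed, unrelated to the paper's code):

```python
import numpy as np

def dropout(h: np.ndarray, p: float, rng: np.random.Generator,
            train: bool = True) -> np.ndarray:
    """Inverted dropout: zero each unit with probability p, rescale survivors."""
    if not train or p == 0.0:
        return h
    mask = rng.random(h.shape) > p     # keep each unit with probability 1 - p
    return h * mask / (1.0 - p)        # rescale so expected activation is unchanged
```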
- An Interactive Interpretability System for Breast Cancer Screening with Deep Learning [11.28741778902131]
We propose an interactive system to take advantage of state-of-the-art interpretability techniques to assist radiologists with breast cancer screening.
Our system integrates a deep learning model into the radiologists' workflow and provides novel interactions to promote understanding of the model's decision-making process.
arXiv Detail & Related papers (2022-09-30T02:19:49Z)
- Fusing Interpretable Knowledge of Neural Network Learning Agents For Swarm-Guidance [0.5156484100374059]
Neural-based learning agents make decisions using internal artificial neural networks.
In certain situations, it becomes pertinent that this knowledge be re-interpreted in a form that is friendly to both the human and the machine.
We propose an interpretable knowledge fusion framework suited for neural-based learning agents, and propose a Priority on Weak State Areas (PoWSA) retraining technique.
arXiv Detail & Related papers (2022-04-01T08:07:41Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
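
The entry above learns without backpropagating errors. As a loose illustration of what local, backprop-free updates can look like, here is a toy predictive-coding step for a linear generative model; it is not the paper's active neural generative coding algorithm, and all shapes and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
true_W = rng.normal(scale=0.5, size=(8, 3))   # hidden generative process
W = rng.normal(scale=0.1, size=(8, 3))        # model weights: x ≈ W z

def local_step(x, W, n_infer=30, lr_z=0.05, lr_w=0.01):
    """One learning step driven purely by local prediction errors."""
    z = np.zeros(W.shape[1])
    for _ in range(n_infer):                  # inference: settle the latent state
        err = x - W @ z                       # local error at the output layer
        z += lr_z * (W.T @ err)               # local update, no backpropagation
    W += lr_w * np.outer(err, z)              # Hebbian-like weight update
    return W

for _ in range(500):
    x = true_W @ rng.normal(size=3)           # synthetic observation
    W = local_step(x, W)
```

Every quantity each update touches is available at the layer it modifies, which is the sense in which such schemes avoid backprop.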
- A game-theoretic analysis of networked system control for common-pool resource management using multi-agent reinforcement learning [54.55119659523629]
Multi-agent reinforcement learning has recently shown great promise as an approach to networked system control.
Common-pool resources include arable land, fresh water, wetlands, wildlife, fish stock, forests and the atmosphere.
arXiv Detail & Related papers (2020-10-15T14:12:26Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
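
One standard way to let a knowledge base guide a network's training, in the spirit of the neuro-symbolic entry above, is to add a penalty for predictions that violate symbolic constraints. The sketch below assumes a multi-label setting and an exclusion-pair knowledge base; both are illustrative choices, not taken from the paper.

```python
import numpy as np
from typing import List, Tuple

def knowledge_penalty(probs: np.ndarray,
                      excludes: List[Tuple[int, int]]) -> float:
    """Penalize probability mass on label pairs the KB marks as incompatible.

    probs: per-label probabilities for one example (multi-label setting).
    excludes: pairs (i, j) that the knowledge base says cannot both hold.
    """
    return float(sum(probs[i] * probs[j] for i, j in excludes))

# Hypothetical use inside a training loop (names assumed):
# loss = cross_entropy(probs, target) + lam * knowledge_penalty(probs, kb_pairs)
```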
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.