Feature Visualization within an Automated Design Assessment leveraging
Explainable Artificial Intelligence Methods
- URL: http://arxiv.org/abs/2201.12107v1
- Date: Fri, 28 Jan 2022 13:31:42 GMT
- Title: Feature Visualization within an Automated Design Assessment leveraging
Explainable Artificial Intelligence Methods
- Authors: Raoul Schönhof and Artem Werner and Jannes Elstner and Boldizsar
Zopcsak and Ramez Awad and Marco Huber
- Abstract summary: Automated capability assessment, mainly leveraged by deep learning systems driven by 3D CAD data, has been presented.
Current assessment systems may be able to assess CAD data with regard to abstract features, but without any geometrical indication of the reasons for the system's decision.
Within the NeuroCAD Project, xAI methods are used to identify geometrical features that are associated with a certain abstract feature.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Not only the automation of manufacturing processes but also the
automation of automation procedures itself is becoming increasingly relevant
to automation research. In this context, automated capability assessment,
mainly leveraged by deep learning systems driven by 3D CAD data, has been
presented. Current assessment systems may be able to assess CAD data with
regard to abstract features, e.g. the ability to automatically separate
components from bulk goods, or the presence of gripping surfaces.
Nevertheless, they suffer from being black-box systems, where an assessment
can be learned and generated easily, but without any geometrical indication
of the reasons for the system's decision. By utilizing explainable AI (xAI)
methods, we attempt to open up this black box. Explainable AI methods have
been used to assess whether a neural network has successfully learned a given
task, or to analyze which features of an input might lead to an adversarial
attack. These methods aim to derive additional insight into a neural network
by analyzing patterns in a given input and their impact on the network
output. Within the NeuroCAD Project, xAI methods are used to identify
geometrical features that are associated with a certain abstract feature. In
this work, sensitivity analysis (SA), layer-wise relevance propagation (LRP),
Gradient-weighted Class Activation Mapping (Grad-CAM), and Local
Interpretable Model-agnostic Explanations (LIME) have been implemented in the
NeuroCAD environment, allowing us not only to assess CAD models but also to
identify the features that were relevant for the network's decision. In the
medium term, this might make it possible to identify regions of interest,
supporting product designers in optimizing their models with regard to
assembly processes.
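To make the xAI methods named above more concrete, the following is a minimal, illustrative sketch of a sensitivity analysis (SA) on a voxel-based classifier in the spirit of the NeuroCAD setting: the relevance of each input voxel is taken as the magnitude of the gradient of the network output with respect to that voxel. The architecture, the 64^3 input resolution, and all names (VoxelNet, voxel_saliency) are hypothetical assumptions, not the actual NeuroCAD implementation.

```python
# Minimal sketch of gradient-based sensitivity analysis (SA) for a
# voxel-based assessment network. VoxelNet, voxel_saliency, and the 64^3
# input resolution are illustrative assumptions, not the NeuroCAD code.
import torch
import torch.nn as nn

class VoxelNet(nn.Module):
    """Hypothetical 3D CNN mapping a voxelized CAD part to a score for one
    abstract feature (e.g. the presence of gripping surfaces)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),  # 64^3 -> 32^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),  # 32^3 -> 16^3
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 ** 3, 1))

    def forward(self, x):
        return self.head(self.features(x))

def voxel_saliency(model, voxels):
    """SA relevance map: |d(score)/d(voxel)|, one value per input voxel.
    High values mark geometry that most influences the assessment."""
    voxels = voxels.clone().requires_grad_(True)
    model(voxels).sum().backward()
    return voxels.grad.abs()

model = VoxelNet().eval()
cad_voxels = torch.rand(1, 1, 64, 64, 64)  # stand-in for a voxelized part
relevance = voxel_saliency(model, cad_voxels)
print(relevance.shape)  # torch.Size([1, 1, 64, 64, 64])
```

The resulting per-voxel relevance grid can then be projected back onto the CAD surface to highlight candidate regions of interest; LRP, Grad-CAM, and LIME attribute relevance differently but yield analogous voxel-level maps.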
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- On the Road to Clarity: Exploring Explainable AI for World Models in a Driver Assistance System [3.13366804259509]
We build a transparent backbone model for convolutional variational autoencoders (VAEs).
We propose explanation and evaluation techniques for the internal dynamics and feature relevance of prediction networks.
We showcase our methods by analyzing a VAE-LSTM world model that predicts pedestrian perception in an urban traffic situation.
arXiv Detail & Related papers (2024-04-26T11:57:17Z)
- Representing Timed Automata and Timing Anomalies of Cyber-Physical Production Systems in Knowledge Graphs [51.98400002538092]
This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system.
Both the model and the detected anomalies are described in the knowledge graph, allowing operators to interpret them more easily.
arXiv Detail & Related papers (2023-08-25T15:25:57Z)
- Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z)
- Towards automated Capability Assessment leveraging Deep Learning [0.0]
This paper presents NeuroCAD, a software tool that automates the assessment using voxelization techniques.
The approach enables the assessment of abstract geometries and production-relevant features through deep learning based on CAD files.
arXiv Detail & Related papers (2022-01-28T13:49:35Z)
- Multi-Branch Deep Radial Basis Function Networks for Facial Emotion Recognition [80.35852245488043]
We propose a CNN based architecture enhanced with multiple branches formed by radial basis function (RBF) units.
RBF units capture local patterns shared by similar instances using an intermediate representation.
We show that it is the incorporation of local information that makes the proposed model competitive.
arXiv Detail & Related papers (2021-09-07T21:05:56Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in the setting of data processing.
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Transforming Feature Space to Interpret Machine Learning Models [91.62936410696409]
This contribution proposes a novel approach that interprets machine-learning models through the lens of feature space transformations.
It can be used to enhance unconditional as well as conditional post-hoc diagnostic tools.
A case study on remote-sensing landcover classification with 46 features is used to demonstrate the potential of the proposed approach.
arXiv Detail & Related papers (2021-04-09T10:48:11Z)
- Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and Explainable Automatic Recruitment [11.460075612587591]
We propose an ILP technique that can learn a propositional logic theory equivalent to a given black-box system.
We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains.
arXiv Detail & Related papers (2020-12-01T09:36:59Z)
- What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors [15.211935029680879]
Explainable AI (XAI) methods have been proposed to interpret how a deep neural network makes predictions from its inputs.
Current evaluation approaches either require subjective input from humans or incur high computational cost with automated evaluation.
We propose backdoor trigger patterns (hidden malicious functionalities that cause misclassification) to automate the evaluation of saliency explanations; a simplified sketch of this evaluation idea is given below.
arXiv Detail & Related papers (2020-09-22T15:53:19Z)
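To illustrate the backdoor-based evaluation idea from the last entry: a network trained to misclassify whenever a known trigger pattern is present should, if a saliency method is faithful, highlight exactly that pattern, so the overlap between the thresholded saliency map and the trigger region yields an automated score. The function name, the quantile threshold, and the toy data below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: score a saliency explanation by its overlap (IoU) with a known
# backdoor trigger region. All names and thresholds are assumptions.
import numpy as np

def saliency_trigger_iou(saliency, trigger_mask, quantile=0.99):
    """IoU between the most salient cells and the ground-truth trigger."""
    hot = saliency >= np.quantile(saliency, quantile)  # top-relevance region
    inter = np.logical_and(hot, trigger_mask).sum()
    union = np.logical_or(hot, trigger_mask).sum()
    return inter / union if union else 0.0

# Toy check: a saliency map that peaks exactly on a 4x4 trigger patch.
saliency = np.zeros((32, 32))
trigger_mask = np.zeros((32, 32), dtype=bool)
trigger_mask[2:6, 2:6] = True
saliency[trigger_mask] = 1.0
print(saliency_trigger_iou(saliency, trigger_mask))  # 1.0: faithful here
```

A score near 1.0 indicates the explainer localizes the trigger; scores near 0.0 flag explanations that miss the functionality the model actually uses.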