Flexible and Inherently Comprehensible Knowledge Representation for
Data-Efficient Learning and Trustworthy Human-Machine Teaming in
Manufacturing Environments
- URL: http://arxiv.org/abs/2305.11597v1
- Date: Fri, 19 May 2023 11:18:23 GMT
- Title: Flexible and Inherently Comprehensible Knowledge Representation for
Data-Efficient Learning and Trustworthy Human-Machine Teaming in
Manufacturing Environments
- Authors: Vedran Galetić, Alistair Nottle
- Abstract summary: Trustworthiness of artificially intelligent agents is vital for the acceptance of human-machine teaming in industrial manufacturing environments.
We make use of Gärdenfors's cognitively inspired Conceptual Space framework to represent the agent's knowledge.
A simple typicality model is built on top of it to determine fuzzy category membership and classify instances interpretably.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Trustworthiness of artificially intelligent agents is vital for the
acceptance of human-machine teaming in industrial manufacturing environments.
Predictable behaviours and explainable (and understandable) rationale allow
humans collaborating with (and building) these agents to understand their
motivations and therefore validate decisions that are made. To that aim, we
make use of Gärdenfors's cognitively inspired Conceptual Space framework to
represent the agent's knowledge using concepts as convex regions in a space
spanned by inherently comprehensible quality dimensions. A simple typicality
quantification model is built on top of it to determine fuzzy category
membership and classify instances interpretably. We apply it to a use case from
the manufacturing domain, using objects' physical properties obtained from
cobots' onboard sensors and utilisation properties drawn from crowdsourced
commonsense knowledge available in public knowledge bases. Such flexible
knowledge representation based on property decomposition allows for
data-efficient representation learning of manufacturing artefacts, which are
typically highly specialist or specific. In such a setting, traditional
data-driven (e.g., computer vision-based) classification approaches would
struggle due to training data scarcity. The representation also makes the AI
agent's acquired knowledge comprehensible to the human collaborator, thus
contributing to trustworthiness. We situate our approach within an existing
explainability
framework specifying explanation desiderata. We provide arguments for our
system's applicability and appropriateness for different roles of human agents
collaborating with the AI system throughout its design, validation, and
operation.
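To make the representation concrete, below is a minimal illustrative sketch in Python (not the authors' implementation) of the kind of conceptual-space classifier the abstract describes: each concept is modelled as a convex, axis-aligned region over comprehensible quality dimensions, typicality decays exponentially with the weighted distance from the region's prototype, and fuzzy category membership is obtained by normalising typicalities across candidate categories. The dimension names, example concepts, property values, and decay constant are illustrative assumptions, not values taken from the paper.

from dataclasses import dataclass
import math

# Assumed quality dimensions: two physical properties a cobot's onboard sensors
# could measure, plus one crowdsourced utilisation property on a 0..1 scale.
QUALITY_DIMENSIONS = ["weight_kg", "length_m", "graspability"]

@dataclass
class Concept:
    """A concept as a convex (axis-aligned box) region, ordered as QUALITY_DIMENSIONS."""
    name: str
    lower: list    # per-dimension lower bound of the region
    upper: list    # per-dimension upper bound of the region
    weights: list  # salience weight of each quality dimension

    def prototype(self):
        # Use the region's centroid as the prototype (most typical point).
        return [(lo + hi) / 2 for lo, hi in zip(self.lower, self.upper)]

def weighted_distance(x, p, w):
    # Weighted Euclidean distance between an observed instance and a prototype.
    return math.sqrt(sum(wi * (xi - pi) ** 2 for xi, pi, wi in zip(x, p, w)))

def typicality(x, concept, decay=1.0):
    # Shepard-style exponential decay of similarity with distance from the prototype.
    return math.exp(-decay * weighted_distance(x, concept.prototype(), concept.weights))

def fuzzy_membership(x, concepts):
    # Normalise typicalities so graded memberships over candidate categories sum to 1.
    scores = {c.name: typicality(x, c) for c in concepts}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

# Hypothetical concepts and instance, for illustration only.
concepts = [
    Concept("bolt",    lower=[0.01, 0.01, 0.7], upper=[0.10, 0.08, 1.0], weights=[1.0, 1.0, 0.5]),
    Concept("bracket", lower=[0.05, 0.05, 0.3], upper=[0.50, 0.30, 0.8], weights=[1.0, 1.0, 0.5]),
]
instance = [0.04, 0.05, 0.9]  # sensed weight and length plus a graspability score
print(fuzzy_membership(instance, concepts))

Under these assumptions, the classification remains traceable for a human collaborator: each membership score decomposes into distances along named, human-comprehensible quality dimensions.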
Related papers
- Understanding Generative AI Content with Embedding Models [4.662332573448995]
This work views the internal representations of modern deep neural networks (DNNs) as an automated form of traditional feature engineering.
We show that these embeddings can reveal interpretable, high-level concepts in unstructured sample data.
We find empirical evidence that there is inherent separability between real data and data generated by AI models.
arXiv Detail & Related papers (2024-08-19T22:07:05Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z)
- Fusing Interpretable Knowledge of Neural Network Learning Agents For Swarm-Guidance [0.5156484100374059]
Neural-based learning agents make decisions using internal artificial neural networks.
In certain situations, it becomes pertinent that this knowledge is re-interpreted in a form that is friendly to both the human and the machine.
We propose an interpretable knowledge fusion framework suited for neural-based learning agents, and propose a Priority on Weak State Areas (PoWSA) retraining technique.
arXiv Detail & Related papers (2022-04-01T08:07:41Z)
- Knowledge-based XAI through CBR: There is more to explanations than models can tell [0.0]
We propose to use domain knowledge to complement the data used by data-centric artificial intelligence agents.
We formulate knowledge-based explainable artificial intelligence as a supervised data classification problem aligned with the CBR methodology.
arXiv Detail & Related papers (2021-08-23T19:01:43Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI [0.0]
XAI aims to produce a demonstrative factor of trust, which for human subjects is achieved through communicative means.
The ideology behind trusting a machine to tend towards the livelihood of a human poses an ethical conundrum.
XAI methods produce visualizations of the feature contributions towards a given model's output at both a local and a global level.
arXiv Detail & Related papers (2021-03-08T18:15:52Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)