Neuro-symbolic Architectures for Context Understanding
- URL: http://arxiv.org/abs/2003.04707v1
- Date: Mon, 9 Mar 2020 15:04:07 GMT
- Title: Neuro-symbolic Architectures for Context Understanding
- Authors: Alessandro Oltramari, Jonathan Francis, Cory Henson, Kaixin Ma, and
Ruwan Wickramarachchi
- Abstract summary: We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
- Score: 59.899606495602406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational context understanding refers to an agent's ability to fuse
disparate sources of information for decision-making and is, therefore,
generally regarded as a prerequisite for sophisticated machine reasoning
capabilities, such as in artificial intelligence (AI). Data-driven and
knowledge-driven methods are two classical techniques in the pursuit of such
machine sense-making capability. However, while data-driven methods seek to
model the statistical regularities of events by making observations in the
real world, they remain difficult to interpret and lack mechanisms for
naturally incorporating external knowledge. Conversely, knowledge-driven
methods combine structured knowledge bases, perform symbolic reasoning based
on axiomatic principles, and are more interpretable in their inferential
processing; however, they often lack the ability to estimate the statistical
salience of an inference. To combat these issues, we propose the use of hybrid
AI methodology as a general framework for combining the strengths of both
approaches. Specifically, we inherit the concept of neuro-symbolism as a way of
using knowledge bases to guide the learning process of deep neural networks.
We further ground our discussion in two applications of neuro-symbolism and, in
both cases, show that our systems maintain interpretability while achieving
comparable performance relative to the state of the art.
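To make the knowledge-guided learning idea concrete, below is a minimal sketch of one common neuro-symbolic pattern: a neural encoder whose predictions are conditioned on frozen knowledge-base embeddings. The `KnowledgeGuidedClassifier` class, its dimensions, and the PyTorch fusion scheme are illustrative assumptions, not the architecture from the paper.
```python
# Illustrative sketch only: fuses a learned neural encoder with frozen
# knowledge-base (KB) embeddings so symbolic knowledge guides prediction.
# The class name, dimensions, and fusion scheme are assumptions, not the
# paper's actual architecture.
import torch
import torch.nn as nn

class KnowledgeGuidedClassifier(nn.Module):
    def __init__(self, vocab_size, kb_embeddings, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Frozen KB embeddings act as the symbolic prior.
        self.kb = nn.Embedding.from_pretrained(kb_embeddings, freeze=True)
        self.classifier = nn.Linear(hidden_dim + kb_embeddings.size(1), num_classes)

    def forward(self, token_ids, concept_ids):
        _, h = self.encoder(self.embed(token_ids))         # h: (1, batch, hidden)
        kb_vec = self.kb(concept_ids).mean(dim=1)          # pool linked KB concepts
        fused = torch.cat([h.squeeze(0), kb_vec], dim=-1)  # neural + symbolic features
        return self.classifier(fused)

# Usage: 100 KB concepts with 32-dim embeddings; a batch of 4 inputs,
# each linked to 3 KB concepts.
kb = torch.randn(100, 32)
model = KnowledgeGuidedClassifier(vocab_size=5000, kb_embeddings=kb)
logits = model(torch.randint(0, 5000, (4, 12)), torch.randint(0, 100, (4, 3)))
print(logits.shape)  # torch.Size([4, 2])
```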
Related papers
- Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning [0.0]
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses.
Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots.
arXiv Detail & Related papers (2024-03-11T18:11:00Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins whose attractor states correspond to symbolic sequences; through unsupervised learning, rather than reliance on pre-defined primitives, these states come to reflect the semanticity and compositionality characteristic of symbolic systems.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, and offers a more comprehensive model that mirrors the complex duality of cognitive operations.
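As a concrete illustration of the basin-to-symbol idea, a classical Hopfield network (a stand-in here, not the paper's PLoT model) stores patterns as attractors: a noisy continuous state settles into the nearest discrete basin, which can then be read out as a symbol.
```python
# Hedged sketch: a classical Hopfield network, used here only to illustrate
# how attractor basins can discretize a continuous state into a "symbol".
import numpy as np

symbols = np.array([[1, -1, 1, -1, 1, -1],   # attractor read out as "symbol A"
                    [1, 1, 1, -1, -1, -1]])  # attractor read out as "symbol B"
# Hebbian weights; zero the diagonal to remove self-excitation.
W = symbols.T @ symbols
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Iterate the attractor dynamics until the state lands in a basin."""
    s = np.where(state >= 0, 1.0, -1.0)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

noisy = symbols[0] + np.random.normal(0, 0.5, size=6)  # perturbed "symbol A"
print(settle(noisy))  # typically recovers [ 1. -1.  1. -1.  1. -1.]
```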
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
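The refine-and-correct step can be sketched in a few lines (an illustrative stand-in, not the GBPGR implementation): the symbolic reasoner vetoes labels that are logically inconsistent, and the neural distribution is renormalized over the remaining labels.
```python
# Illustrative stand-in for symbolic refinement of neural predictions:
# rules veto inconsistent labels, then the distribution is renormalized.
import numpy as np

def refine_with_rules(probs, allowed_mask):
    """probs: neural predictive distribution over labels;
    allowed_mask: 1.0 where a label is consistent with the symbolic
    knowledge, 0.0 where the reasoner rules it out."""
    refined = probs * allowed_mask
    total = refined.sum()
    return refined / total if total > 0 else probs  # fall back if all vetoed

neural = np.array([0.5, 0.3, 0.2])      # deep model's prediction
mask = np.array([1.0, 0.0, 1.0])        # symbolic reasoning rules out label 1
print(refine_with_rules(neural, mask))  # ≈ [0.714, 0.0, 0.286]
```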
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Towards Benchmarking Explainable Artificial Intelligence Methods [0.0]
We use theories from the philosophy of science as an analytical lens, with the goal of revealing what can, and more importantly cannot, be expected from methods that aim to explain the decisions of a neural network.
By conducting a case study, we investigate the performance of a selection of explainability methods across two mundane domains, animals and headgear.
We lay bare that the usefulness of these methods relies on human domain knowledge and on our ability to understand, generalise, and reason.
arXiv Detail & Related papers (2022-08-25T14:28:30Z)
- A Concept and Argumentation based Interpretable Model in High Risk Domains [9.209499864585688]
Interpretability has become an essential topic for artificial intelligence in high-risk domains such as healthcare, banking, and security.
We propose a concept and argumentation based model (CAM) that includes a novel concept mining method to obtain human understandable concepts.
CAM provides decisions that are based on human-level knowledge and the reasoning process is intrinsically interpretable.
arXiv Detail & Related papers (2022-08-17T08:29:02Z)
- A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base [8.591839265985412]
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations.
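The analysis pattern, unsupervised graph embedding followed by clustering, can be sketched as below; the toy triples stand in for ConceptNet, and scikit-learn's spectral embedding stands in for whichever embedding techniques the paper actually used.
```python
# Toy sketch of the embed-then-cluster pattern; the graph and methods are
# illustrative stand-ins, not the paper's ConceptNet pipeline.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

nodes = ["dog", "cat", "pet", "car", "wheel", "vehicle"]
edges = [(0, 2, 1.0), (1, 2, 1.0), (3, 5, 1.0), (4, 5, 1.0), (3, 4, 1.0),
         (2, 5, 0.1)]  # weak cross-link keeps the toy graph connected

adj = np.zeros((6, 6))
for i, j, w in edges:
    adj[i, j] = adj[j, i] = w

emb = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(adj)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(dict(zip(nodes, labels)))  # animal-related vs. vehicle-related concepts
```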
arXiv Detail & Related papers (2020-11-28T08:08:25Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.