Taxonomic Networks: A Representation for Neuro-Symbolic Pairing
- URL: http://arxiv.org/abs/2505.24601v1
- Date: Fri, 30 May 2025 13:48:34 GMT
- Title: Taxonomic Networks: A Representation for Neuro-Symbolic Pairing
- Authors: Zekun Wang, Ethan L. Haarer, Nicki Barari, Christopher J. MacLellan
- Abstract summary: We introduce the concept of a \textbf{neuro-symbolic pair} -- neural and symbolic approaches linked through a common knowledge representation. We show that our symbolic method learns taxonomic nets more efficiently with less data and compute, while the neural method finds higher-accuracy taxonomic nets when provided with greater resources.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the concept of a \textbf{neuro-symbolic pair} -- neural and symbolic approaches that are linked through a common knowledge representation. Next, we present \textbf{taxonomic networks}, a type of discrimination network in which nodes represent hierarchically organized taxonomic concepts. Using this representation, we construct a novel neuro-symbolic pair and evaluate its performance. We show that our symbolic method learns taxonomic nets more efficiently with less data and compute, while the neural method finds higher-accuracy taxonomic nets when provided with greater resources. As a neuro-symbolic pair, these approaches can be used interchangeably based on situational needs, with seamless translation between them when necessary. This work lays the foundation for future systems that more fundamentally integrate neural and symbolic computation.
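To make the shared representation concrete, here is a minimal sketch of what a taxonomic network might look like: a tree of hierarchically organized concepts, where classification proceeds by descending from the root to the best-matching child. The node structure, feature scheme, and example taxonomy are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """One concept in a taxonomic network (illustrative structure)."""
    name: str
    prototype: dict                      # feature -> expected value
    children: list = field(default_factory=list)

    def score(self, instance: dict) -> int:
        # Count how many prototype features the instance matches.
        return sum(1 for f, v in self.prototype.items() if instance.get(f) == v)

    def categorize(self, instance: dict) -> str:
        """Descend to the best-matching child until reaching a leaf."""
        node = self
        while node.children:
            node = max(node.children, key=lambda c: c.score(instance))
        return node.name

# Tiny hand-built taxonomy: animal -> {bird, mammal}
root = TaxonomyNode("animal", {"alive": True}, [
    TaxonomyNode("bird", {"wings": True, "fur": False}),
    TaxonomyNode("mammal", {"wings": False, "fur": True}),
])
print(root.categorize({"alive": True, "wings": True, "fur": False}))  # -> bird
```

In this sketch the same tree could, in principle, be produced symbolically (by incremental concept formation) or neurally (by optimizing the node prototypes), which is the interchangeability the paper's neuro-symbolic pairing relies on.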
Related papers
- Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine. We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures. We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- A short Survey: Exploring knowledge graph-based neural-symbolic system from application perspective [0.0]
Achieving human-like reasoning and interpretability in AI systems remains a substantial challenge. The Neural-Symbolic paradigm, which integrates neural networks with symbolic systems, presents a promising pathway toward more interpretable AI. This paper explores recent advancements in neural-symbolic integration based on Knowledge Graphs.
arXiv Detail & Related papers (2024-05-06T14:40:50Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Improving Neural-based Classification with Logical Background Knowledge [0.0]
We propose a new formalism for supervised multi-label classification with propositional background knowledge.
We introduce a new neurosymbolic technique called semantic conditioning at inference.
We discuss its theoretical and practical advantages over two other popular neurosymbolic techniques.
arXiv Detail & Related papers (2024-02-20T14:01:26Z)
- NeuralFastLAS: Fast Logic-Based Learning from Raw Data [54.938128496934695]
Symbolic rule learners generate interpretable solutions; however, they require the input to be encoded symbolically.
Neuro-symbolic approaches overcome this issue by mapping raw data to latent symbolic concepts using a neural network.
We introduce NeuralFastLAS, a scalable and fast end-to-end approach that trains a neural network jointly with a symbolic learner.
arXiv Detail & Related papers (2023-10-08T12:33:42Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
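The attractor idea above can be illustrated with a classic Hopfield-style network, used here only as a stand-in for the paper's model: stored patterns become attractor states, and a noisy input falls into the basin of the nearest stored pattern. The patterns and dimensions below are arbitrary toy choices.

```python
import numpy as np

# Two stored binary (+/-1) patterns act as attractor states.
patterns = np.array([
    [1, 1, 1, -1, -1, -1],
    [-1, -1, 1, 1, 1, -1],
])

# Hebbian outer-product weights; zero the diagonal so units do not self-excite.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Synchronously update until the state stops changing (falls into a basin)."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        new = np.sign(W @ s)
        new[new == 0] = 1
        if np.array_equal(new, s):
            break
        s = new
    return s.astype(int)

# A corrupted copy of the first pattern (one flipped bit) is pulled back to it.
noisy = [1, -1, 1, -1, -1, -1]
print(settle(noisy))  # converges to [ 1  1  1 -1 -1 -1]
```

The discrete basins here play the role of symbols; the paper's contribution is learning such basins (and mapping them to symbolic sequences) without pre-defining the stored patterns.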
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- Extensions to Generalized Annotated Logic and an Equivalent Neural Architecture [4.855957436171202]
We propose a list of desirable criteria for neuro-symbolic systems and examine how some of the existing approaches address these criteria.
We then propose an extension to annotated generalized logic that allows for the creation of an equivalent neural architecture.
Unlike previous approaches that rely on continuous optimization for the training process, our framework is designed as a binarized neural network that uses discrete optimization.
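A minimal illustration of the binarized idea (not the paper's actual architecture): when weights and activations are constrained to {-1, +1}, a forward pass reduces to signs of integer dot products, which is what makes discrete rather than continuous optimization natural. The layer sizes and random weights below are illustrative.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1}; zeros are sent to +1 by convention."""
    b = np.sign(np.asarray(x, dtype=float))
    b[b == 0] = 1
    return b

def binary_forward(x, W1, W2):
    """Forward pass of a tiny two-layer binarized network.
    Every weight and activation lives in {-1, +1}, so each layer is
    just the sign of an integer-valued dot product."""
    h = binarize(W1 @ binarize(x))
    return binarize(W2 @ h)

rng = np.random.default_rng(0)
W1 = binarize(rng.standard_normal((4, 3)))   # binarized weight matrices
W2 = binarize(rng.standard_normal((2, 4)))
out = binary_forward(np.array([0.5, -1.2, 3.0]), W1, W2)
print(out)  # every entry is -1 or +1
```

Because the search space is finite, such a network can be trained with combinatorial methods instead of gradient descent, which is the property the paper exploits.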
arXiv Detail & Related papers (2023-02-23T17:39:46Z)
- A Semantic Framework for Neuro-Symbolic Computing [0.36832029288386137]
The field of neuro-symbolic AI aims to benefit from the combination of neural networks and symbolic systems. No common definition of encoding exists that can enable a precise, theoretical comparison of neuro-symbolic methods. This paper addresses this problem by introducing a semantic framework for neuro-symbolic AI.
arXiv Detail & Related papers (2022-12-22T22:00:58Z)
- Neuro-symbolic computing with spiking neural networks [0.6035125735474387]
We extend previous work on spike-based graph algorithms by demonstrating how symbolic and multi-relational information can be encoded using spiking neurons.
The introduced framework is enabled by combining the graph embedding paradigm and the recent progress in training spiking neural networks using error backpropagation.
arXiv Detail & Related papers (2022-08-04T10:49:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.