Neuro-Symbolic AI: An Emerging Class of AI Workloads and their
Characterization
- URL: http://arxiv.org/abs/2109.06133v1
- Date: Mon, 13 Sep 2021 17:19:59 GMT
- Authors: Zachary Susskind, Bryce Arden, Lizy K. John, Patrick Stockton, and
Eugene B. John
- Abstract summary: Neuro-symbolic artificial intelligence is a novel area of AI research.
We describe and analyze the performance characteristics of three recent neuro-symbolic models.
- Score: 0.9949801888214526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuro-symbolic artificial intelligence is a novel area of AI research which
seeks to combine traditional rules-based AI approaches with modern deep
learning techniques. Neuro-symbolic models have already demonstrated the
capability to outperform state-of-the-art deep learning models in domains such
as image and video reasoning. They have also been shown to obtain high accuracy
with significantly less training data than traditional models. Due to the
recency of the field's emergence and relative sparsity of published results,
the performance characteristics of these models are not well understood. In
this paper, we describe and analyze the performance characteristics of three
recent neuro-symbolic models. We find that symbolic models have less potential
parallelism than traditional neural models due to complex control flow and
low-operational-intensity operations, such as scalar multiplication and tensor
addition. However, the neural aspect of computation dominates the symbolic part
in cases where they are clearly separable. We also find that data movement
poses a potential bottleneck, as it does in many ML workloads.
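The abstract's observation about low-operational-intensity operations can be made concrete with the roofline model's notion of operational intensity (FLOPs per byte of memory traffic). The following sketch, not from the paper itself, compares element-wise tensor addition against dense matrix multiplication under the simplifying assumption of float32 data and no cache reuse; it illustrates why operations like tensor addition tend to be memory-bound:

```python
def operational_intensity(flops: float, bytes_moved: float) -> float:
    """FLOPs per byte of memory traffic (the roofline-model x-axis)."""
    return flops / bytes_moved

# Element-wise tensor addition C = A + B over n float32 elements:
# n FLOPs, 3 * n * 4 bytes moved (read A, read B, write C).
n = 1024 * 1024
add_oi = operational_intensity(n, 3 * n * 4)

# Square matrix multiply C = A @ B for m x m float32 matrices:
# 2 * m**3 FLOPs, 3 * m * m * 4 bytes moved (assuming ideal reuse).
m = 1024
mm_oi = operational_intensity(2 * m**3, 3 * m * m * 4)

print(f"tensor addition: {add_oi:.3f} FLOPs/byte")  # ~0.083: memory-bound
print(f"matrix multiply: {mm_oi:.1f} FLOPs/byte")   # ~170.7: compute-bound
```

With an intensity of roughly 1/12 FLOP per byte, tensor addition saturates memory bandwidth long before it saturates compute units, which is consistent with the paper's finding that data movement, rather than arithmetic, bottlenecks the symbolic portions of these workloads.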
Related papers
- Towards Efficient Neuro-Symbolic AI: From Workload Characterization to Hardware Architecture [22.274696991107206]
Neuro-symbolic AI emerges as a promising paradigm, fusing neural and symbolic approaches to enhance interpretability, robustness, and trustworthiness.
Recent neuro-symbolic systems have demonstrated great potential in collaborative human-AI scenarios with reasoning and cognitive capabilities.
We first systematically categorize neuro-symbolic AI algorithms, and then experimentally evaluate and analyze them in terms of runtime, memory, computational operators, sparsity, and system characteristics.
arXiv Detail & Related papers (2024-09-20T01:32:14Z)
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z)
- On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z)
- Neurosymbolic AI and its Taxonomy: a survey [48.7576911714538]
Neurosymbolic AI deals with models that combine symbolic processing, like classic AI, and neural networks.
This survey reviews recent research papers in this area and provides a classification and comparison of the presented models and their applications.
arXiv Detail & Related papers (2023-05-12T19:51:13Z)
- Interpretable statistical representations of neural population dynamics and geometry [4.459704414303749]
We introduce a representation learning method, MARBLE, that decomposes on-manifold dynamics into local flow fields and maps them into a common latent space.
In simulated non-linear dynamical systems, recurrent neural networks, and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations.
These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations.
arXiv Detail & Related papers (2023-04-06T21:11:04Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility of mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on Spiking Neural Networks (SNNs), which emulate the behavior of neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.