Vector Symbolic Architectures as a Computing Framework for Emerging Hardware
- URL: http://arxiv.org/abs/2106.05268v2
- Date: Thu, 20 Jul 2023 16:47:20 GMT
- Title: Vector Symbolic Architectures as a Computing Framework for Emerging Hardware
- Authors: Denis Kleyko, Mike Davies, E. Paxon Frady, Pentti Kanerva, Spencer J.
Kent, Bruno A. Olshausen, Evgeny Osipov, Jan M. Rabaey, Dmitri A.
Rachkovskij, Abbas Rahimi, Friedrich T. Sommer
- Abstract summary: This article reviews recent progress in the development of the computing framework vector symbolic architectures (VSA), also known as hyperdimensional computing.
We demonstrate that VSA offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing.
This article serves as a reference for computer architects by illustrating the philosophy behind VSA, techniques of distributed computing with them, and their relevance to emerging computing hardware.
- Score: 8.28931204639352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article reviews recent progress in the development of the computing
framework vector symbolic architectures (VSA) (also known as hyperdimensional
computing). This framework is well suited for implementation in stochastic,
emerging hardware, and it naturally expresses the types of cognitive operations
required for artificial intelligence (AI). We demonstrate in this article that
the field-like algebraic structure of VSA offers simple but powerful operations
on high-dimensional vectors that can support all data structures and
manipulations relevant to modern computing. In addition, we illustrate the
distinguishing feature of VSA, "computing in superposition," which sets it
apart from conventional computing and opens the door to efficient solutions
to the difficult combinatorial search problems inherent in AI applications.
We sketch ways of demonstrating that VSA are computationally universal, and
we see VSA acting as a framework for computing with distributed
representations that can play the role of an abstraction layer for emerging
computing hardware. This article serves as a reference for computer architects
by illustrating the philosophy behind VSA, techniques of distributed computing
with them, and their relevance to emerging computing hardware, such as
neuromorphic computing.
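To make these operations concrete, here is a minimal sketch of a VSA in code: bipolar hypervectors with binding, bundling, and a similarity measure, used to store and query a simple data structure held in superposition. The dimensionality and the role/filler names are illustrative assumptions, not taken from the paper.
```python
# Minimal sketch of core VSA operations using bipolar (+1/-1) hypervectors.
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def random_hv():
    return rng.choice([-1, 1], size=D)

def bind(a, b):      # binding: elementwise multiply (self-inverse for bipolar vectors)
    return a * b

def bundle(*vs):     # bundling/superposition: elementwise majority via sign of the sum
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):       # normalized dot product; ~0 for unrelated vectors
    return a @ b / D

# Encode the record {color: red, shape: square} as a single hypervector.
color, shape, red, square, blue = (random_hv() for _ in range(5))
record = bundle(bind(color, red), bind(shape, square))

# "Computing in superposition": query the bundled record by unbinding a role.
retrieved = bind(record, color)          # approximately equals `red` plus noise
print(sim(retrieved, red))               # high similarity (~0.5 here)
print(sim(retrieved, blue))              # near zero
```
Binding two quasi-orthogonal vectors yields a vector dissimilar to both, while bundling keeps the result similar to its inputs; that asymmetry is what lets one vector hold a whole record and still answer queries about its parts.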
Related papers
- shapiq: Shapley Interactions for Machine Learning [21.939393765684827]
We introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute Shapley Values (SVs) and Shapley Interactions (SIs).
For practitioners, shapiq can explain and visualize any-order feature interactions in the predictions of models, including vision transformers and language models, as well as XGBoost and LightGBM via TreeSHAP-IQ.
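For reference, the quantity such algorithms compute efficiently is the Shapley value, a weighted average of marginal contributions. The brute-force sketch below (with a made-up three-player value function, not shapiq's API) illustrates the definition.
```python
# Brute-force Shapley values from the definition, for a toy cooperative game.
from itertools import combinations
from math import factorial

players = ["f1", "f2", "f3"]

def value(coalition: frozenset) -> float:
    # Toy value function: f1 and f2 interact positively.
    v = 0.0
    if "f1" in coalition: v += 1.0
    if "f2" in coalition: v += 2.0
    if {"f1", "f2"} <= coalition: v += 3.0  # pairwise interaction
    return v

def shapley(player: str) -> float:
    n = len(players)
    others = [p for p in players if p != player]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            S = frozenset(S)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += weight * (value(S | {player}) - value(S))
    return total

print({p: round(shapley(p), 3) for p in players})
# The +3 interaction is split between f1 and f2: {'f1': 2.5, 'f2': 3.5, 'f3': 0.0}
```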
arXiv Detail & Related papers (2024-10-02T15:16:53Z)
- Using the Abstract Computer Architecture Description Language to Model AI Hardware Accelerators [77.89070422157178]
Manufacturers of AI-integrated products face a critical challenge: selecting an accelerator that aligns with their product's performance requirements.
The Abstract Computer Architecture Description Language (ACADL) is a concise formalization of computer architecture block diagrams.
In this paper, we demonstrate how to use the ACADL to model AI hardware accelerators, use their ACADL description to map DNNs onto them, and explain the timing simulation semantics to gather performance results.
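As a loose illustration of why such a formalization helps (a hypothetical model, not ACADL syntax): once an accelerator is described as blocks with timing attributes, mapping an operator onto a block reduces performance estimation to simple arithmetic.
```python
# Hypothetical block-diagram model with timing attributes; NOT ACADL syntax.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    ops_per_cycle: float   # throughput of the block

def latency_cycles(block: Block, num_ops: int) -> float:
    return num_ops / block.ops_per_cycle

mac_array = Block("mac_array", ops_per_cycle=256)   # assumed 16x16 MAC array
# A 3x3 conv layer with 64 filters on a 32x32x64 input:
macs = 32 * 32 * 64 * 64 * 3 * 3
print(f"{latency_cycles(mac_array, macs):,.0f} cycles")  # 147,456 cycles
```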
arXiv Detail & Related papers (2024-01-30T19:27:16Z)
- Probabilistic Abduction for Visual Abstract Reasoning via Learning Rules in Vector-symbolic Architectures [22.12114509953737]
Abstract reasoning is a cornerstone of human intelligence, and replicating it with artificial intelligence (AI) presents an ongoing challenge.
This study focuses on efficiently solving Raven's progressive matrices (RPM), a visual test for assessing abstract reasoning abilities.
Instead of hard-coding the rule formulations associated with RPMs, our approach can learn the VSA rule formulations with just one pass through the training data.
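As a toy illustration of why VSA suit this task (a hand-written rule check, not the paper's learned formulation): with fractional power encoding, an arithmetic-progression rule over a row of panels becomes an algebraic identity that can be verified by vector similarity.
```python
# Toy VSA rule check with fractional power encoding (FPE) over random phasors,
# where binding is elementwise complex multiplication.
import numpy as np

rng = np.random.default_rng(1)
D = 4096
base = np.exp(1j * rng.uniform(-np.pi, np.pi, D))  # random base phasor vector

def encode(v: float) -> np.ndarray:
    return base ** v            # FPE: encode(a) * encode(b) == encode(a + b)

def sim(a, b) -> float:
    return float(np.real(np.vdot(a, b)) / D)

row = [encode(1.0), encode(2.0), encode(3.0)]    # panel attribute values 1, 2, 3
predicted = row[1] * row[1] / row[0]             # progression rule: p3 = 2*p2 - p1
print(sim(predicted, row[2]))                    # ~1.0: the rule holds
print(sim(predicted, encode(5.0)))               # ~0.0: wrong candidate
```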
arXiv Detail & Related papers (2024-01-29T10:17:18Z)
- Computing with Residue Numbers in High-Dimensional Representation [7.736925756277564]
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors.
We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements.
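A minimal sketch of that idea, assuming phasor hypervectors whose phases are multiples of 2*pi/m: adding encoded numbers reduces to a component-wise product, and decoding is a nearest-neighbor search over the m residues.
```python
# Residue encoding sketch: an integer x mod m is encoded by raising a random
# m-th-root-of-unity phasor vector to the x-th power.
import numpy as np

rng = np.random.default_rng(2)
D, m = 4096, 7                                   # dimensionality, modulus
k = rng.integers(0, m, size=D)                   # random exponents per component
base = np.exp(2j * np.pi * k / m)                # phases are multiples of 2*pi/m

def encode(x: int) -> np.ndarray:
    return base ** x                             # encode(x) has period m in x

def sim(a, b) -> float:
    return float(np.real(np.vdot(a, b)) / D)

# Component-wise product implements modular addition of the encoded numbers:
print(sim(encode(3) * encode(6), encode((3 + 6) % m)))   # ~1.0
print(sim(encode(3) * encode(6), encode(1)))             # ~0.0 (wrong value)

# Decode by comparing against the m possible residues:
v = encode(5)
print(int(np.argmax([sim(v, encode(r)) for r in range(m)])))  # 5
```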
arXiv Detail & Related papers (2023-11-08T18:19:45Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose kernel development into two steps: 1) expressing the computational core using Tensor Processing Primitives (TPPs) and 2) expressing the logical loops around the TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
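A hypothetical Python stand-in for this decomposition (the actual framework targets C++ and optimized microkernels): the computational core is isolated in a small primitive, and the surrounding logical loops are stated as data that a runtime could reorder or parallelize.
```python
# Blocked matmul = small GEMM primitive + declarative loop nest around it.
import numpy as np

def gemm_tpp(a_block, b_block, c_block):
    # The computational core: a small dense GEMM on cache-resident blocks.
    c_block += a_block @ b_block

def blocked_matmul(A, B, C, bm=64, bn=64, bk=64):
    M, K = A.shape
    _, N = B.shape
    # The "logical loops around the TPP", written as a loop-nest specification
    # that an optimizing runtime could reorder or parallelize.
    loop_nest = [(i, j, k) for i in range(0, M, bm)
                           for j in range(0, N, bn)
                           for k in range(0, K, bk)]
    for i, j, k in loop_nest:
        gemm_tpp(A[i:i+bm, k:k+bk], B[k:k+bk, j:j+bn], C[i:i+bm, j:j+bn])

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
C = np.zeros((256, 256))
blocked_matmul(A, B, C)
print(np.allclose(C, A @ B))  # True
```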
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
- The Basis of Design Tools for Quantum Computing: Arrays, Decision Diagrams, Tensor Networks, and ZX-Calculus [55.58528469973086]
Quantum computers promise to efficiently solve important problems classical computers never will.
Realizing this promise requires a fully automated quantum software stack.
This work provides a look "under the hood" of today's tools and showcases how these means are utilized in them, e.g., for simulation, compilation, and verification of quantum circuits.
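The first of those means, arrays, is simply linear algebra; the sketch below simulates a two-qubit Bell circuit with plain NumPy matrix-vector products. Decision diagrams, tensor networks, and ZX-calculus exist precisely to avoid the exponential growth of these arrays.
```python
# Array-based simulation of a two-qubit circuit: H on qubit 0, then CNOT.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)   # |00>
state = np.kron(H, I) @ state                   # Hadamard on the first qubit
state = CNOT @ state                            # entangle the two qubits
print(state)  # Bell state: [0.707..., 0, 0, 0.707...]
```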
arXiv Detail & Related papers (2023-01-10T19:00:00Z)
- HyperSeed: Unsupervised Learning with Vector Symbolic Architectures [5.258404928739212]
This paper presents a novel unsupervised machine learning approach named Hyperseed.
It leverages Vector Symbolic Architectures (VSA) for fast learning of a topology-preserving feature map of unlabelled data.
The two distinctive novelties of the Hyperseed algorithm are 1) learning from only a few input data samples and 2) a learning rule based on a single vector operation.
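A simplified toy in that spirit (assumed details, not the paper's exact algorithm): all node vectors are derived from one seed by binding with fixed node hypervectors, so a single unbinding per sample is enough to adapt the whole map.
```python
# Toy seed-based update with FHRR-style phasor hypervectors; illustrative only.
import numpy as np

rng = np.random.default_rng(3)
D, n_nodes = 4096, 16

def phasor():
    return np.exp(1j * rng.uniform(-np.pi, np.pi, D))

nodes = [phasor() for _ in range(n_nodes)]    # fixed node hypervectors
seed = phasor()                               # the only learned parameter

def sim(a, b):
    return float(np.real(np.vdot(a, b)) / D)

def bmu(x):    # best matching unit among seed-derived node values
    return max(range(n_nodes), key=lambda i: sim(seed * nodes[i], x))

def update(x):
    global seed
    i = bmu(x)
    seed = x / nodes[i]    # single unbinding: node i now represents x exactly

x = phasor()               # one unlabelled input hypervector
update(x)
print(sim(seed * nodes[bmu(x)], x))   # ~1.0 after a single update
```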
arXiv Detail & Related papers (2021-10-15T20:05:43Z)
- High-performance symbolic-numerics via multiple dispatch [52.77024349608834]
Symbolics.jl is an extendable symbolic system which uses dynamic multiple dispatch to change behavior depending on the domain needs.
We show that by formalizing a generic API on actions independent of implementation, we can retroactively add optimized data structures to our system.
We demonstrate the ability to swap between classical term-rewriting simplifiers and e-graph-based term-rewriting simplifiers.
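A rough Python analogue of the idea (Julia offers full multiple dispatch; Python's singledispatch selects on one argument): behavior is chosen from the argument's type, so an optimized representation, such as an e-graph, can be registered retroactively without touching call sites. All types here are made up for illustration.
```python
# Dispatch-based behavior swapping between two expression representations.
from dataclasses import dataclass
from functools import singledispatch

@dataclass
class TermRewriteExpr:
    terms: list          # e.g., ["x", "x", "y"]

@dataclass
class EGraphExpr:
    classes: dict        # e-class id -> list of equivalent terms

@singledispatch
def simplify(expr):
    raise NotImplementedError(type(expr))

@simplify.register
def _(expr: TermRewriteExpr):
    # Classical term rewriting: fold duplicates like x + x -> 2*x.
    counts = {t: expr.terms.count(t) for t in set(expr.terms)}
    return " + ".join(f"{c}*{t}" if c > 1 else t for t, c in sorted(counts.items()))

@simplify.register
def _(expr: EGraphExpr):
    # E-graph flavor: pick one representative per equivalence class.
    return " + ".join(min(terms) for terms in expr.classes.values())

print(simplify(TermRewriteExpr(["x", "x", "y"])))           # 2*x + y
print(simplify(EGraphExpr({0: ["2*x", "x+x"], 1: ["y"]})))  # 2*x + y
```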
arXiv Detail & Related papers (2021-05-09T14:22:43Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
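The basic unit behind these implementations is a spiking neuron model such as leaky integrate-and-fire; a minimal sketch with illustrative parameters:
```python
# Minimal leaky integrate-and-fire (LIF) neuron; parameters are illustrative.
import numpy as np

def lif(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    v, spikes = 0.0, []
    for i in input_current:
        v = leak * v + i            # leaky integration of the input current
        if v >= v_thresh:           # threshold crossing emits a spike event
            spikes.append(1)
            v = v_reset             # membrane potential resets after a spike
        else:
            spikes.append(0)
    return spikes

steps = 50
current = np.where(np.arange(steps) < 30, 0.25, 0.0)  # step input, then silence
print("".join("|" if s else "." for s in lif(current)))
# Spikes occur only while input is present: event-driven, sparse activity.
```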
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Near-Optimal Hardware Design for Convolutional Neural Networks [0.0]
This study proposes a novel, special-purpose, and high-efficiency hardware architecture for convolutional neural networks.
The proposed architecture maximizes the utilization of multipliers by designing the computational circuit with the same structure as that of the computational flow of the model.
An implementation based on the proposed hardware architecture has been applied in commercial AI products.
arXiv Detail & Related papers (2020-02-06T09:15:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.