Training Deep Boltzmann Networks with Sparse Ising Machines
- URL: http://arxiv.org/abs/2303.10728v2
- Date: Tue, 23 Jan 2024 23:31:36 GMT
- Title: Training Deep Boltzmann Networks with Sparse Ising Machines
- Authors: Shaila Niazi, Navid Anjum Aadit, Masoud Mohseni, Shuvro Chowdhury, Yao
Qin, and Kerem Y. Camsari
- Abstract summary: We show a new application domain for probabilistic bit (p-bit) based Ising machines by training deep generative AI models with them.
Using sparse, asynchronous, and massively parallel Ising machines we train deep Boltzmann networks in a hybrid probabilistic-classical computing setup.
- Score: 5.048818298702389
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The slowing down of Moore's law has driven the development of unconventional
computing paradigms, such as specialized Ising machines tailored to solve
combinatorial optimization problems. In this paper, we show a new application
domain for probabilistic bit (p-bit) based Ising machines by training deep
generative AI models with them. Using sparse, asynchronous, and massively
parallel Ising machines we train deep Boltzmann networks in a hybrid
probabilistic-classical computing setup. We use the full MNIST and Fashion
MNIST (FMNIST) dataset without any downsampling and a reduced version of
CIFAR-10 dataset in hardware-aware network topologies implemented in moderately
sized Field Programmable Gate Arrays (FPGA). For MNIST, our machine using only
4,264 nodes (p-bits) and about 30,000 parameters achieves the same
classification accuracy (90%) as an optimized software-based restricted
Boltzmann Machine (RBM) with approximately 3.25 million parameters. Similar
results follow for FMNIST and CIFAR-10. Additionally, the sparse deep Boltzmann
network can generate new handwritten digits and fashion products, a task the
3.25 million parameter RBM fails at despite achieving the same accuracy. Our
hybrid computer takes a measured 50 to 64 billion probabilistic flips per
second, which is at least an order of magnitude faster than superficially
similar Graphics and Tensor Processing Unit (GPU/TPU) based implementations.
The massively parallel architecture can comfortably perform the contrastive
divergence algorithm (CD-n) with up to n = 10 million sweeps per update, beyond
the capabilities of existing software implementations. These results
demonstrate the potential of using Ising machines for traditionally
hard-to-train deep generative Boltzmann networks, with further possible
improvement in nanodevice-based realizations.
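As a rough illustration of the training loop described in the abstract, the following is a minimal NumPy sketch of CD-n on a sparse, unrestricted Boltzmann machine, with plain software Gibbs sweeps standing in for the p-bit Ising hardware. The layer sizes, sparsity level, number of sweeps, and learning rate are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal CD-n sketch for a sparse Boltzmann machine over +/-1 spins.
# In the paper's hybrid setup, the Gibbs sweeps run on p-bit Ising hardware;
# here plain NumPy stands in for the sampler. All sizes and hyperparameters
# below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 784, 256        # assumed sizes, not the paper's topology
n = n_visible + n_hidden              # total number of p-bits (nodes)
sparsity = 0.05                       # assumed fraction of nonzero couplings

# Random sparse, symmetric coupling matrix J and zero biases h
mask = np.triu(rng.random((n, n)) < sparsity, k=1)
mask = mask | mask.T
J = 0.01 * rng.standard_normal((n, n)) * mask
J = (J + J.T) / 2
h = np.zeros(n)

def gibbs_sweep(s, J, h, idx, beta=1.0):
    """One asynchronous sweep over the spins listed in idx (spins are +/-1)."""
    for i in rng.permutation(idx):
        field = J[i] @ s + h[i]                        # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1 if rng.random() < p_up else -1
    return s

def cd_n_step(v_data, J, h, n_sweeps=10, lr=1e-3):
    """One contrastive-divergence (CD-n) update from a single +/-1 visible vector."""
    hidden_idx = np.arange(n_visible, n)
    all_idx = np.arange(n)
    # Positive phase: visibles clamped to data, only hiddens are sampled
    s_pos = np.concatenate([v_data, rng.choice([-1, 1], size=n_hidden)])
    for _ in range(n_sweeps):
        gibbs_sweep(s_pos, J, h, hidden_idx)
    # Negative phase: free-running chain (the Ising machine's job in hardware)
    s_neg = s_pos.copy()
    for _ in range(n_sweeps):
        gibbs_sweep(s_neg, J, h, all_idx)
    # Gradient ~ <s_i s_j>_data - <s_i s_j>_model, restricted to existing edges
    J += lr * (np.outer(s_pos, s_pos) - np.outer(s_neg, s_neg)) * mask
    h += lr * (s_pos - s_neg)
    return J, h

# Example: one update from a random +/-1 "image"
v = rng.choice([-1, 1], size=n_visible)
J, h = cd_n_step(v, J, h)
```

In the hybrid probabilistic-classical setup the abstract describes, the two sampling phases above would be offloaded to the FPGA-based sparse Ising machine, while the classical side accumulates the correlation statistics and applies the weight updates.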
Related papers
- MCU-MixQ: A HW/SW Co-optimized Mixed-precision Neural Network Design Framework for MCUs [9.719789698194154]
Mixed-precision neural networks (MPNNs), which use just enough data width for neural network processing, are an effective approach to meeting stringent resource constraints.
However, sub-byte and mixed-precision SIMD operations are still missing from MCU-class ISAs.
In this work, we propose to pack multiple low-bitwidth arithmetic operations within a single-instruction-multiple-data (SIMD) instruction on typical MCUs.
arXiv Detail & Related papers (2024-07-17T14:51:15Z)
- Mean-Field Assisted Deep Boltzmann Learning with Probabilistic Computers [0.0]
We show that deep and unrestricted Boltzmann Machines can be trained using p-computers generating hundreds of billions of Markov Chain Monte Carlo samples per second.
A custom Field Programmable Gate Array (FPGA) emulation of the p-computer architecture performs up to 45 billion flips per second.
Our algorithm can be used in other scalable Ising machines, and its variants can train deep Boltzmann machines previously thought to be intractable (a generic mean-field sketch is given after this list).
arXiv Detail & Related papers (2024-01-03T22:19:57Z)
- All-to-all reconfigurability with sparse and higher-order Ising machines [0.0]
We introduce a multiplexed architecture that emulates all-to-all network functionality.
We show that running the adaptive parallel tempering algorithm on this architecture yields competitive algorithmic and prefactor advantages.
Scaled magnetic versions of p-bit IMs could lead to orders-of-magnitude improvements over the state of the art for generic optimization.
arXiv Detail & Related papers (2023-11-21T20:27:02Z)
- Tetra-AML: Automatic Machine Learning via Tensor Networks [0.0]
We introduce the Tetra-AML toolbox, which automates neural architecture search and hyperparameter optimization.
The toolbox also provides model compression through quantization and pruning, augmented by tensor-network-based compression.
Here, we analyze a unified benchmark for optimizing neural networks in computer vision tasks and show the superior performance of our approach.
arXiv Detail & Related papers (2023-03-28T12:56:54Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers [105.74546828182834]
We show a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs with diverse difficulty levels.
We present the dynamic slimmable network (DS-Net) and the dynamic slice-able network (DS-Net++), which input-dependently adjust the filter numbers of CNNs and multiple dimensions in both CNNs and transformers.
arXiv Detail & Related papers (2021-09-21T09:57:21Z)
- OMPQ: Orthogonal Mixed Precision Quantization [64.59700856607017]
Mixed precision quantization takes advantage of hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization.
We propose to optimize a proxy metric, the concept of network orthogonality, which is highly correlated with the loss of the integer programming problem.
This approach reduces the search time and required data amount by orders of magnitude, with little compromise on quantization accuracy.
arXiv Detail & Related papers (2021-09-16T10:59:33Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network [18.79036546647254]
We develop a novel ReRAM-based deep neural network (DNN) accelerator, named Sparse-Multiplication-Engine (SME).
First, we orchestrate the bit-sparse pattern to increase the density of bit-sparsity based on existing quantization methods.
Second, we propose a novel weight mapping mechanism to slice the bits of a weight across the crossbars and splice the activation results in peripheral circuits.
Third, a superior squeeze-out scheme empties the crossbars mapped with highly-sparse non-zeros from the previous two steps.
arXiv Detail & Related papers (2021-03-02T13:27:15Z)
- Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits [99.59941892183454]
We propose Einsum Networks (EiNets), a novel implementation design for PCs.
At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum operation (a toy einsum illustration is given after this list).
We show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation.
arXiv Detail & Related papers (2020-04-13T23:09:15Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
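For the mean-field assisted Boltzmann learning entry above, the snippet below is a generic sketch of the naive mean-field fixed point that is sometimes used to approximate Boltzmann statistics in place of (or alongside) sampling; it is a textbook iteration under assumed couplings, not that paper's actual algorithm.

```python
# Naive mean-field fixed point for an Ising/Boltzmann model:
#   m_i = tanh(beta * (sum_j J_ij * m_j + h_i))
# The magnetizations m approximate the sampled averages <s_i>.
# Purely illustrative; couplings and sizes below are assumptions.
import numpy as np

def mean_field_magnetizations(J, h, beta=1.0, n_iter=200, damping=0.5):
    m = np.zeros_like(h)
    for _ in range(n_iter):
        m_new = np.tanh(beta * (J @ m + h))
        m = damping * m + (1.0 - damping) * m_new   # damped update for stability
    return m

# Example on a small random symmetric coupling matrix
rng = np.random.default_rng(1)
n = 20
J = 0.1 * rng.standard_normal((n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = 0.1 * rng.standard_normal(n)
print(mean_field_magnetizations(J, h))
```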
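Similarly, for the Einsum Networks entry, the toy snippet below illustrates the general idea of collapsing many small operations into a single einsum call; the shapes and the computation are assumptions for illustration, not EiNet's actual layer structure.

```python
# Toy contrast: K separate small products vs. one fused einsum call.
# Shapes are illustrative assumptions, not EiNet's actual layout.
import numpy as np

rng = np.random.default_rng(2)
B, K, D = 32, 8, 16                    # batch size, components, feature dim
x = rng.random((B, D))
w = rng.random((K, D))

# Loop version: K separate matrix-vector style products
out_loop = np.stack([x @ w[k] for k in range(K)], axis=1)   # shape (B, K)

# Fused version: the same computation as a single monolithic einsum
out_einsum = np.einsum('bd,kd->bk', x, w)

assert np.allclose(out_loop, out_einsum)
```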