Deep Reinforcement Learning for System-on-Chip: Myths and Realities
- URL: http://arxiv.org/abs/2207.14595v1
- Date: Fri, 29 Jul 2022 10:26:38 GMT
- Title: Deep Reinforcement Learning for System-on-Chip: Myths and Realities
- Authors: Tegg Taekyong Sung, Bo Ryu
- Abstract summary: We investigate the feasibility of neural schedulers for the domain of System-on-Chip (SoC) resource allocation.
Our novel neural scheduler technique, Eclectic Interaction Matching (EIM), overcomes the challenges of this domain.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural schedulers based on deep reinforcement learning (DRL) have shown
considerable potential for solving real-world resource allocation problems, as
they have demonstrated significant performance gain in the domain of cluster
computing. In this paper, we investigate the feasibility of neural schedulers
for the domain of System-on-Chip (SoC) resource allocation through extensive
experiments and comparisons with non-neural, heuristic schedulers. Our key
findings are threefold. First, neural schedulers designed for the cluster
computing domain do not work well for SoC due to i) the heterogeneity of SoC
computing resources and ii) the variable action set caused by randomness in
incoming jobs. Second, our novel neural scheduler technique, Eclectic
Interaction Matching (EIM), overcomes these challenges and significantly
improves on existing neural schedulers; we also explain the underlying
reasons for the performance gain achieved by the EIM-based scheduler. Third,
we find that the ratio of the average processing element (PE) switching delay
to the average PE computation time significantly impacts the performance of
neural SoC schedulers even with EIM. Consequently, future neural SoC scheduler
design must consider this metric as well as its implementation overhead for
practical utility.
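The third finding names a concrete metric that is easy to instrument. Below is a minimal Python sketch of how one might compute the switching-delay-to-computation-time ratio from profiled PE timings; the function name, sample data, and units are illustrative assumptions rather than the authors' implementation.

```python
import statistics

def switch_to_compute_ratio(switching_delays, computation_times):
    """Ratio of average PE switching delay to average PE computation time.

    This mirrors the metric named in the abstract; the inputs are assumed
    to be profiled timings in the same time unit (e.g., microseconds).
    """
    avg_switch = statistics.mean(switching_delays)    # average PE switching delay
    avg_compute = statistics.mean(computation_times)  # average PE computation time
    return avg_switch / avg_compute

# Hypothetical profiled timings for a set of processing elements (PEs).
ratio = switch_to_compute_ratio([2.0, 3.5, 1.8], [40.0, 55.0, 38.0])
print(f"switching-delay / computation-time ratio: {ratio:.3f}")
```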
Related papers
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
arXiv Detail & Related papers (2024-05-27T05:53:30Z)
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision mechanism that accelerates inference by dynamically assigning resources to each data instance.
Our method reduces inference cost while maintaining the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the trade-offs of our proposal in terms of accuracy, inference latency, spiking sparsity, and energy consumption across datasets.
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- GRSN: Gated Recurrent Spiking Neurons for POMDPs and MARL [28.948871773551854]
Spiking neural networks (SNNs) are widely applied in various fields due to their energy-efficient and fast-inference capabilities.
In current spiking reinforcement learning (SRL) algorithms, multiple simulation time steps correspond to only a single decision step in RL.
We propose a novel temporal alignment paradigm (TAP) that leverages the single-step update of spiking neurons to accumulate historical state information in RL.
arXiv Detail & Related papers (2024-04-24T02:20:50Z)
- Knowledge Enhanced Conditional Imputation for Healthcare Time-series [9.937117045677923]
Conditional Self-Attention Imputation (CSAI) is a novel recurrent neural network architecture designed to address the challenges of complex missing data patterns.
CSAI extends current state-of-the-art neural network-based imputation methods by introducing key modifications specifically adapted to electronic health record (EHR) data characteristics.
This work significantly advances the state of neural network imputation applied to EHRs by more closely aligning algorithmic imputation with clinical realities.
arXiv Detail & Related papers (2023-12-27T20:42:40Z)
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections; a generic sketch of such pruning appears after this list.
Our results indicate that this tailored sparsity can mitigate overfitting and improve test performance with a small amount of data.
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs); a generic sketch of this architecture also appears after this list.
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- DeepSoCS: A Neural Scheduler for Heterogeneous System-on-Chip (SoC) Resource Scheduling [0.0]
We present a novel scheduling solution for a class of System-on-Chip (SoC) systems.
Our Deep Reinforcement Learning (DRL)-based Scheduler (DeepSoCS) overcomes the brittleness of rule-based schedulers.
arXiv Detail & Related papers (2020-05-15T17:31:27Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
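As referenced in the sparse multitask learning entry above, the following is a minimal sketch of saliency-based connection pruning. Saliency is approximated here by weight magnitude; that paper's exact saliency criterion, layer shapes, and keep ratio are assumptions for illustration.

```python
import torch
import torch.nn as nn

def saliency_prune(layer: nn.Linear, keep_ratio: float = 0.5) -> None:
    """Zero out the connections with the lowest saliency scores.

    Weight magnitude stands in for saliency here; a gradient-based
    criterion could be substituted without changing the structure.
    """
    with torch.no_grad():
        saliency = layer.weight.abs()
        k = int(saliency.numel() * keep_ratio)           # connections to keep
        threshold = saliency.flatten().kthvalue(saliency.numel() - k + 1).values
        mask = (saliency >= threshold).float()
        layer.weight.mul_(mask)                          # prune in place

layer = nn.Linear(32, 16)
saliency_prune(layer, keep_ratio=0.3)
print((layer.weight == 0).float().mean())  # roughly 0.7 of weights pruned
```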
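As referenced in the MEMTL entry above, here is a minimal sketch of a shared backbone feeding multiple prediction heads whose outputs are ensembled by averaging. Layer sizes, the number of heads, and the averaging rule are illustrative assumptions, not that paper's design.

```python
import torch
import torch.nn as nn

class MultiHeadEnsemble(nn.Module):
    """Shared backbone with several prediction heads, ensembled by averaging."""

    def __init__(self, in_dim=16, hidden=64, out_dim=4, num_heads=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList(
            nn.Linear(hidden, out_dim) for _ in range(num_heads)
        )

    def forward(self, x):
        features = self.backbone(x)                       # shared representation
        preds = torch.stack([h(features) for h in self.heads])
        return preds.mean(dim=0)                          # average the head outputs

model = MultiHeadEnsemble()
decision = model(torch.randn(8, 16))  # batch of 8 hypothetical task states
print(decision.shape)                 # torch.Size([8, 4])
```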