Transformer Network-based Reinforcement Learning Method for Power
Distribution Network (PDN) Optimization of High Bandwidth Memory (HBM)
- URL: http://arxiv.org/abs/2203.15722v1
- Date: Tue, 29 Mar 2022 16:27:54 GMT
- Title: Transformer Network-based Reinforcement Learning Method for Power
Distribution Network (PDN) Optimization of High Bandwidth Memory (HBM)
- Authors: Hyunwook Park, Minsu Kim, Seongguk Kim, Keunwoo Kim, Haeyeon Kim,
Taein Shin, Keeyoung Son, Boogyo Sim, Subin Kim, Seungtaek Jeong, Chulsoon
Hwang, and Joungho Kim
- Abstract summary: We propose a transformer network-based reinforcement learning (RL) method for power distribution network (PDN) optimization of high bandwidth memory (HBM)
The proposed method can provide an optimal decoupling capacitor (decap) design to maximize the reduction of PDN self- and transfer impedance seen at multiple ports.
- Score: 4.829921419076774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, for the first time, we propose a transformer network-based
reinforcement learning (RL) method for power distribution network (PDN)
optimization of high bandwidth memory (HBM). The proposed method can provide an
optimal decoupling capacitor (decap) design to maximize the reduction of PDN
self- and transfer impedance seen at multiple ports. An attention-based
transformer network is implemented to directly parameterize the decap
optimization policy. The optimality performance is significantly improved
because the attention mechanism has the expressive power to explore the
massive combinatorial space of decap assignments, and it can capture
sequential relationships between those assignments. The computing time for
optimization is dramatically reduced because the network is reusable across
positions of probing ports and decap assignment candidates; the transformer
network's context embedding process captures meta-features, including
probing-port positions. In addition, the network is trained with randomly
generated data sets, so the trained network can solve new decap optimization
problems without additional training. The computing time for training and
the data cost are greatly reduced thanks to the scalability of the network:
because of its shared-weight property, the network can scale to larger
problems without additional training. For verification, we compare the
results with a conventional genetic algorithm (GA), random search (RS), and
all previous RL-based methods. The proposed method outperforms them in all
of the following aspects: optimality performance, computing time, and data efficiency.
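The abstract outlines an attention-based policy that embeds probing-port positions as context and assigns decaps sequentially over a set of candidate positions. Below is a minimal PyTorch sketch of that idea under stated assumptions, not the authors' implementation: candidate positions and probing ports are assumed to be given as 2-D coordinates, a transformer encoder produces a joint representation, and decoding selects one decap position per step with masked attention. The reward (the simulated reduction of self-/transfer impedance at the ports) would come from a PDN solver, represented here only by a hypothetical `impedance_reduction` placeholder.

```python
# Minimal sketch (assumed, not the authors' network) of a transformer policy
# for sequential decap assignment conditioned on probing-port positions.
import torch
import torch.nn as nn


class DecapPolicy(nn.Module):
    def __init__(self, feat_dim=2, d_model=128, n_heads=8, n_layers=3):
        super().__init__()
        self.cand_embed = nn.Linear(feat_dim, d_model)   # candidate-position embedding
        self.port_embed = nn.Linear(feat_dim, d_model)   # probing-port (context) embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.query = nn.Linear(d_model, d_model)

    def forward(self, candidate_xy, port_xy, n_decaps):
        """candidate_xy: (B, N, 2) decap candidate positions
           port_xy:      (B, P, 2) probing-port positions (meta-features)
           Returns selected indices (B, n_decaps) and the summed log-probability."""
        B, N, _ = candidate_xy.shape
        # Encode candidates jointly with the port context so the representation
        # is conditioned on where the PDN impedance is probed.
        tokens = torch.cat([self.cand_embed(candidate_xy),
                            self.port_embed(port_xy)], dim=1)           # (B, N+P, d)
        h = self.encoder(tokens)
        cand_h, ctx = h[:, :N], h[:, N:].mean(dim=1)                    # candidates / context

        mask = torch.zeros(B, N, dtype=torch.bool, device=h.device)
        picks, log_probs = [], []
        for _ in range(n_decaps):
            scores = torch.einsum("bd,bnd->bn", self.query(ctx), cand_h)
            scores = scores.masked_fill(mask, float("-inf"))            # no position reused
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()                                         # one decap per step
            picks.append(idx)
            log_probs.append(dist.log_prob(idx))
            mask = mask.clone()
            mask[torch.arange(B), idx] = True
            ctx = ctx + cand_h[torch.arange(B), idx]                    # sequential dependence
        return torch.stack(picks, 1), torch.stack(log_probs, 1).sum(1)


# REINFORCE-style update on a randomly generated problem instance.
# `impedance_reduction` is a hypothetical stand-in for a PDN solver that scores
# how much the chosen decap placement lowers self-/transfer impedance at the ports.
def train_step(policy, optimizer, candidate_xy, port_xy, n_decaps, impedance_reduction):
    picks, log_prob = policy(candidate_xy, port_xy, n_decaps)
    reward = impedance_reduction(candidate_xy, port_xy, picks)          # (B,) tensor
    loss = -((reward - reward.mean()) * log_prob).mean()                # baseline-subtracted
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.mean().item()
```

Because the encoder weights are shared across all candidate and port tokens, the same trained network can, in principle, be applied to problem instances with more candidate positions or different probing-port layouts, which is the scalability property the abstract describes.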
Related papers
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers [14.756988176469365]
An effective approach to reduce computational requirements and increase efficiency is to prune unnecessary components of Deep Neural Networks.
Previous work has shown that attribution methods from the field of eXplainable AI serve as effective means to extract and prune the least relevant network components in a few-shot fashion.
arXiv Detail & Related papers (2024-08-22T17:35:18Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Multi Agent DeepRL based Joint Power and Subchannel Allocation in IAB
networks [0.0]
Integrated Access and Backhauling (IAB) is a viable approach for meeting the unprecedented need for higher data rates of future generations.
In this paper, we show how we can use Deep Q-Learning Network to handle problems with huge action spaces associated with fractional nodes.
arXiv Detail & Related papers (2023-08-31T21:30:25Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights by a small amount proportional to the magnitude scale on-the-fly.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z) - Trainability Preserving Neural Structured Pruning [64.65659982877891]
We present trainability preserving pruning (TPP), a regularization-based structured pruning method that can effectively maintain trainability during sparsification.
TPP can compete with the ground-truth dynamical isometry recovery method on linear networks.
It delivers encouraging performance in comparison to many top-performing filter pruning methods.
arXiv Detail & Related papers (2022-07-25T21:15:47Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - ZoPE: A Fast Optimizer for ReLU Networks with Low-Dimensional Inputs [30.34898838361206]
We present an algorithm called ZoPE that solves optimization problems over the output of feedforward ReLU networks with low-dimensional inputs.
Using ZoPE, we observe a $25\times$ speedup on property 1 of the ACAS Xu neural network verification benchmark and an $85\times$ speedup on a set of linear optimization problems.
arXiv Detail & Related papers (2021-06-09T18:36:41Z) - Joint User Association and Power Allocation in Heterogeneous Ultra Dense
Network via Semi-Supervised Representation Learning [22.725452912879376]
Heterogeneous Ultra-Dense Network (HUDN) can enable higher connectivity density and ultra-high data rates.
This paper proposes a novel idea for resolving the joint user association and power control problem.
We train a Graph Neural Network (GNN) to approach this representation function by using semi-supervised learning.
arXiv Detail & Related papers (2021-03-29T06:39:51Z) - Resource Allocation via Graph Neural Networks in Free Space Optical
Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while still achieving a comparable number of updates in theory.
Our experiments on several datasets show the effectiveness of our method and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.