PowerNet: Transferable Dynamic IR Drop Estimation via Maximum
Convolutional Neural Network
- URL: http://arxiv.org/abs/2011.13494v1
- Date: Thu, 26 Nov 2020 23:14:17 GMT
- Authors: Zhiyao Xie, Haoxing Ren, Brucek Khailany, Ye Sheng, Santosh Santosh,
Jiang Hu, Yiran Chen
- Abstract summary: We develop a fast dynamic IR drop estimation technique, named PowerNet, based on a convolutional neural network (CNN).
We show that PowerNet outperforms the latest machine learning (ML) method by 9% in accuracy for the challenging case of vectorless IR drop.
A mitigation tool guided by PowerNet reduces IR drop hotspots by 26% and 31% on two industrial designs, respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: IR drop is a fundamental constraint required by almost all chip designs.
However, its evaluation usually takes a long time, which hinders mitigation
techniques for fixing its violations. In this work, we develop a fast dynamic
IR drop estimation technique, named PowerNet, based on a convolutional neural
network (CNN). It can handle both vector-based and vectorless IR analyses.
Moreover, the proposed CNN model is general and transferable to different
designs. This is in contrast to most existing machine learning (ML) approaches,
where a model is applicable only to a specific design. Experimental results
show that PowerNet outperforms the latest ML method by 9% in accuracy for the
challenging case of vectorless IR drop and achieves a 30 times speedup compared
to an accurate IR drop commercial tool. Further, a mitigation tool guided by
PowerNet reduces IR drop hotspots by 26% and 31% on two industrial designs,
respectively, with very limited modification on their power grids.
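The "maximum" structure named in the title is not spelled out in the abstract. As an illustrative sketch only (not the authors' implementation, and with toy layer sizes), the idea can be shown as applying one shared CNN to the power map of each time window within a clock cycle and taking the elementwise maximum over windows, since the worst dynamic IR drop at a location is driven by its peak transient power:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of a power map x with kernel k."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def shared_cnn(power_map, kernel):
    """Toy stand-in for the shared CNN: one conv layer + ReLU."""
    return np.maximum(conv2d(power_map, kernel), 0.0)

def max_cnn_ir_estimate(windowed_power_maps, kernel):
    """Run the shared CNN on every time window's power map,
    then take the elementwise maximum over windows."""
    preds = [shared_cnn(p, kernel) for p in windowed_power_maps]
    return np.maximum.reduce(preds)
```

In this sketch `shared_cnn` is a placeholder for the trained model; the real PowerNet operates on multiple per-tile power and timing feature maps rather than a single 2D array.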
Related papers
- CFIRSTNET: Comprehensive Features for Static IR Drop Estimation with Neural Network [3.1761323820497656]
We propose a comprehensive solution to combine image-based and netlist-based features in neural network framework.
A customized convolutional neural network (CNN) is developed to extract PDN features and make static IR drop estimations.
Experiment results show that we have obtained the best quality in the benchmark on the problem of IR drop estimation in ICCAD CAD Contest 2023.
arXiv Detail & Related papers (2025-02-13T06:47:53Z)
- Estimating Voltage Drop: Models, Features and Data Representation Towards a Neural Surrogate [1.7010199949406575]
We investigate how Machine Learning (ML) techniques can aid in reducing the computational effort, and implicitly the time, required to estimate the voltage drop in Integrated Circuits (ICs).
Our approach leverages ASICs' electrical, timing, and physical features to train ML models, ensuring adaptability across diverse designs with minimal adjustments.
This study illustrates the effectiveness of ML algorithms in precisely estimating IR drop and optimizing ASIC sign-off.
arXiv Detail & Related papers (2025-02-07T21:31:13Z)
- Static IR Drop Prediction with Attention U-Net and Saliency-Based Explainability [0.34530027457862006]
We propose a U-Net neural network model with attention gates, specifically tailored for fast and accurate image-based static IR drop prediction.
We show that the number of high-IR-drop pixels can be reduced by 18% on average by mimicking the upsizing of a tiny portion of the PDN's resistive edges.
arXiv Detail & Related papers (2024-08-06T16:41:33Z)
- PDNNet: PDN-Aware GNN-CNN Heterogeneous Network for Dynamic IR Drop Prediction [5.511978576494924]
IR drop on the power delivery network (PDN) is closely related to the PDN's configuration and cell current consumption.
We propose a novel graph structure, PDNGraph, to unify the representations of the PDN structure and the fine-grained cell-PDN relation.
This is the first work to apply a graph structure to deep-learning-based dynamic IR drop prediction.
arXiv Detail & Related papers (2024-03-27T13:50:13Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Latency-aware Unified Dynamic Networks for Efficient Image Recognition [72.8951331472913]
LAUDNet is a framework to bridge the theoretical and practical efficiency gap in dynamic networks.
It integrates three primary dynamic paradigms-spatially adaptive computation, dynamic layer skipping, and dynamic channel skipping.
It can notably reduce the latency of models like ResNet by over 50% on platforms such as V100, 3090, and TX2 GPUs.
arXiv Detail & Related papers (2023-08-30T10:57:41Z)
- DiffIR: Efficient Diffusion Model for Image Restoration [108.82579440308267]
Diffusion model (DM) has achieved SOTA performance by modeling the image synthesis process into a sequential application of a denoising network.
Traditional DMs, which run massive iterations on a large model to estimate whole images or feature maps, are inefficient for image restoration.
We propose DiffIR, which consists of a compact IR prior extraction network (CPEN), dynamic IR transformer (DIRformer), and denoising network.
arXiv Detail & Related papers (2023-03-16T16:47:14Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware reaches the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
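Interval Arithmetic propagates each value as a lower/upper bound pair, so a rounding error introduced by reduced precision stays tracked through subsequent operations. A minimal illustrative sketch of this idea (not the paper's methodology; all names are hypothetical):

```python
# Minimal interval arithmetic (IA) sketch: values become [lo, hi] bounds,
# and arithmetic on intervals yields bounds that contain every possible result.

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum bounds add endpoint-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product bounds come from the extreme endpoint products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

def quantize(x, step):
    """A reduced-precision value becomes an interval covering its
    worst-case rounding error of +/- step/2."""
    q = round(x / step) * step
    return Interval(q - step / 2, q + step / 2)
```

Propagating such intervals through a network's additions and multiplications gives a sound bound on how far a reduced-precision SNN can drift from its full-precision counterpart.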
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- Intelligent Circuit Design and Implementation with Machine Learning [0.0]
I present multiple fast yet accurate machine learning models covering a wide range of chip design stages.
I present APOLLO, a fully automated power modeling framework.
I also present RouteNet for early routability prediction.
arXiv Detail & Related papers (2022-06-07T06:17:52Z)
- AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence [111.09105910265154]
We present a novel minimalist hardware architecture using adder convolutional neural network (AdderNet)
The whole AdderNet can practically achieve a 16% speedup.
We conclude that AdderNet surpasses all the other competitors.
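AdderNet's core trick is replacing the multiply-accumulate kernel of convolution with a negative L1 distance between the input patch and the filter, so the hardware needs only additions and subtractions. A minimal illustrative sketch (not the authors' code) of that kernel:

```python
import numpy as np

def adder_conv2d(x, w):
    """'Convolution' via negative L1 distance instead of multiply-accumulate:
    out[i, j] = -sum(|patch - w|), computable with only add/subtract/abs."""
    kh, kw = w.shape
    h, wd = x.shape
    out = np.empty((h - kh + 1, wd - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = -np.abs(x[i:i + kh, j:j + kw] - w).sum()
    return out
```

The output is largest (zero) where the patch exactly matches the filter, so it still acts as a template-matching feature detector, just without multipliers.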
arXiv Detail & Related papers (2021-01-25T11:31:52Z)
- ShiftAddNet: A Hardware-Inspired Deep Network [87.18216601210763]
ShiftAddNet is an energy-efficient multiplication-less deep neural network.
It leads to both energy-efficient inference and training, without compromising expressive capacity.
ShiftAddNet aggressively reduces over 80% hardware-quantified energy cost of DNNs training and inference, while offering comparable or better accuracies.
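ShiftAddNet avoids multiplications by combining two cheap primitives: bit shifts (multiplication by signed powers of two) and additions (AdderNet-style L1 kernels). As an illustrative sketch of the shift half only (all names hypothetical, not the paper's code):

```python
import math

def nearest_power_of_two(w):
    """Quantize a weight to the nearest signed power of two, so
    multiplying by it reduces to a bit shift in hardware."""
    if w == 0:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    k = round(math.log2(abs(w)))
    return sign * (2.0 ** k)

def shift_layer(x, weights):
    """Dot product where every 'multiply' is by a power of two,
    i.e. a shift plus an optional sign flip, followed by adds."""
    return sum(xi * nearest_power_of_two(wi) for xi, wi in zip(x, weights))
```

In the actual network such shift layers are interleaved with add (L1-distance) layers; the Python multiplications above stand in for what hardware would implement as barrel shifts.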
arXiv Detail & Related papers (2020-10-24T05:09:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.