Worst-Case Dynamic Power Distribution Network Noise Prediction Using
Convolutional Neural Network
- URL: http://arxiv.org/abs/2204.13109v1
- Date: Wed, 27 Apr 2022 08:37:10 GMT
- Title: Worst-Case Dynamic Power Distribution Network Noise Prediction Using
Convolutional Neural Network
- Authors: Xiao Dong, Yufei Chen, Xunzhao Yin, Cheng Zhuo
- Abstract summary: Worst-case dynamic PDN noise analysis is an essential step in PDN sign-off to ensure the performance and reliability of chips.
This paper proposes an efficient and scalable framework for the worst-case dynamic PDN noise prediction.
- Score: 14.144190519120167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Worst-case dynamic PDN noise analysis is an essential step in PDN sign-off to
ensure the performance and reliability of chips. However, with the growing PDN
size and increasing scenarios to be validated, it becomes very time- and
resource-consuming to conduct full-stack PDN simulation to check the worst-case
noise for different test vectors. Recently, various works have proposed machine
learning based methods for supply noise prediction, many of which still suffer
from large training overhead, inefficiency, or non-scalability. Thus, this
paper proposes an efficient and scalable framework for the worst-case dynamic
PDN noise prediction. The framework first reduces the spatial and temporal
redundancy in the PDN and input current vector, and then employs efficient
feature extraction as well as a novel convolutional neural network architecture
to predict the worst-case dynamic PDN noise. Experimental results show that the
proposed framework consistently outperforms the commercial tool and the
state-of-the-art machine learning method with only 0.63-1.02% mean relative
error and 25-69$\times$ speedup.
Related papers
- Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on Mixed-Signal Accelerators [4.416800723562206]
We model the impact of process-induced and aging-related variations in analog computing components on the accuracy of analog neural networks.
We introduce a denoising block inserted between selected layers of a pre-trained model.
We demonstrate that training the denoising block significantly increases the model's robustness against various noise levels.
arXiv Detail & Related papers (2024-09-27T08:45:55Z)
- A Real-Time Voice Activity Detection Based On Lightweight Neural [4.589472292598182]
Voice activity detection (VAD) is the task of detecting speech in an audio stream.
Recent neural network-based VADs have alleviated the degradation of performance to some extent.
We propose a lightweight and real-time neural network called MagicNet, which utilizes causal and depthwise-separable 1-D convolutions and GRU.
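The convolutional building block named above can be sketched in a few lines of numpy; this is a generic causal depthwise-separable 1-D convolution (correlation convention, as in deep-learning frameworks), not MagicNet's actual layer, and the GRU is omitted:

```python
import numpy as np

def causal_depthwise_separable_conv1d(x, depth_kernels, point_weights):
    """x: (channels, time). Depthwise stage: each channel is convolved with
    its own kernel, left-padded so the output at time t sees only t and
    earlier (causal). Pointwise stage: a 1x1 mixing across channels."""
    c, t = x.shape
    k = depth_kernels.shape[1]
    padded = np.concatenate([np.zeros((c, k - 1)), x], axis=1)  # causal pad
    depth_out = np.empty_like(x)
    for ch in range(c):
        for step in range(t):
            depth_out[ch, step] = padded[ch, step:step + k] @ depth_kernels[ch]
    return point_weights @ depth_out  # (out_channels, time)

# Impulse at t = 5: a causal layer must produce no output before t = 5.
x = np.zeros((2, 10))
x[0, 5] = 1.0
y = causal_depthwise_separable_conv1d(x, np.ones((2, 3)), np.eye(2))
```

Splitting the convolution into depthwise and pointwise stages is what keeps such VADs lightweight: it needs far fewer multiplies than a full convolution over all channel pairs.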
arXiv Detail & Related papers (2024-05-27T03:31:16Z)
- sVAD: A Robust, Low-Power, and Light-Weight Voice Activity Detection with Spiking Neural Networks [51.516451451719654]
Spiking Neural Networks (SNNs) are known to be biologically plausible and power-efficient.
This paper introduces a novel SNN-based Voice Activity Detection model, referred to as sVAD.
It provides effective auditory feature representation through SincNet and 1D convolution, and improves noise robustness with attention mechanisms.
arXiv Detail & Related papers (2024-03-09T02:55:44Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been effectively demonstrated in solving forward and inverse differential equation problems.
However, PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
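The core idea, an update that evaluates the gradient at the new iterate, theta_{k+1} = theta_k - eta * f'(theta_{k+1}), can be shown on a toy scalar quadratic, where it stays stable at step sizes that make explicit gradient descent diverge. This sketch (bisection inner solver, quadratic objective) is illustrative, not the paper's PINN training setup:

```python
def implicit_sgd_step(theta, grad, lr, lo=-1e6, hi=1e6, iters=100):
    """Solve t = theta - lr * grad(t) for scalar t by bisection on the
    residual r(t) = t + lr * grad(t) - theta, monotone for convex f."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid + lr * grad(mid) - theta > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

grad = lambda t: t          # gradient of f(t) = t**2 / 2, minimum at 0
lr = 3.0                    # explicit GD diverges on this f for lr > 2
theta_exp = theta_imp = 1.0
for _ in range(20):
    theta_exp = theta_exp - lr * grad(theta_exp)        # explicit update
    theta_imp = implicit_sgd_step(theta_imp, grad, lr)  # implicit update
```

Here the explicit iterate is multiplied by -2 every step and blows up, while the implicit iterate contracts by 1/4 per step toward the minimum, which is the stability property the paper exploits for stiff PINN losses.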
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks can be recovered from as low as 1.69% under variations and noise.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Design and Prototyping Distributed CNN Inference Acceleration in Edge Computing [85.74517957717363]
HALP accelerates inference by designing a seamless collaboration among edge devices (EDs) in Edge Computing.
Experiments show that the distributed inference HALP achieves 1.7x inference acceleration for VGG-16.
It is shown that the model selection with distributed inference HALP can significantly improve service reliability.
arXiv Detail & Related papers (2022-11-24T19:48:30Z)
- Noise Injection Node Regularization for Robust Learning [0.0]
Noise Injection Node Regularization (NINR) is a method of injecting structured noise into Deep Neural Networks (DNN) during the training stage, resulting in an emergent regularizing effect.
We present theoretical and empirical evidence for substantial improvement in robustness against various test data perturbations for feed-forward DNNs when trained under NINR.
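The mechanism, a node whose output perturbs activations only during training, can be sketched as follows; the layer sizes and the input-level injection point are illustrative choices, not the paper's architectures:

```python
import numpy as np

def ninr_forward(x, w1, w2, rng=None, sigma=0.1):
    """Forward pass of a tiny 2-layer net with a noise-injection node on
    the input: pass an rng during training to inject Gaussian noise,
    omit it for clean (deterministic) inference."""
    if rng is not None:
        x = x + rng.normal(0.0, sigma, size=x.shape)  # the injection node
    return float(np.tanh(x @ w1) @ w2)

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 8))
w2 = rng.normal(size=8)
x = np.ones(4)
train_out = ninr_forward(x, w1, w2, rng=rng)  # noisy (training mode)
clean_out = ninr_forward(x, w1, w2)           # deterministic (inference)
```

Training against such perturbed activations is what yields the emergent regularizing effect described above; at inference the node is simply switched off.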
arXiv Detail & Related papers (2022-10-27T20:51:15Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) based Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions to describe the R-D behavior of NIC using deep network and statistical modeling.
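The modeling goal can be illustrated with a far simpler parametric stand-in: fit an assumed exponential R-D form D(R) = a * exp(-b * R) to sampled rate-distortion points by linear least squares in the log domain. The synthetic data and the exponential form are illustrative; the paper fits the R-D behavior with a deep network plus statistical modeling:

```python
import numpy as np

# Toy distortion-vs-rate samples; a real NIC would supply measured points.
rates = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
dists = 2.0 * np.exp(-0.8 * rates)  # synthetic D(R) = a * exp(-b * R)

# Fit log D = log(a) - b * R by linear least squares.
A = np.stack([np.ones_like(rates), -rates], axis=1)
coef, *_ = np.linalg.lstsq(A, np.log(dists), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

Once such a characteristic function is fitted, any target point on the R-D curve can be queried without training a separate model per rate, which is the practical payoff the entry describes.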
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- Robust Processing-In-Memory Neural Networks via Noise-Aware Normalization [26.270754571140735]
PIM accelerators often suffer from intrinsic noise in the physical components.
We propose a noise-agnostic method to achieve robust neural network performance against any noise setting.
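The flavor of the approach, making activation statistics invariant to an unknown noise-induced gain and offset via normalization, can be sketched as follows (a simplified stand-in; the paper's method and noise model are more involved):

```python
import numpy as np

def noise_aware_normalize(a, eps=1e-6):
    """Renormalize a layer's activations to zero mean / unit variance so a
    global gain or offset introduced by analog noise is cancelled."""
    return (a - a.mean()) / (a.std() + eps)

rng = np.random.default_rng(1)
clean = rng.normal(size=100)
noisy = 1.3 * clean + 0.2  # multiplicative gain + offset from PIM noise
```

Because the gain and offset drop out in the normalization, downstream layers see the same distribution regardless of the noise setting, which is the noise-agnostic property claimed above.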
arXiv Detail & Related papers (2020-07-07T06:51:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.