A Machine Learning Approach to Improving Timing Consistency between
Global Route and Detailed Route
- URL: http://arxiv.org/abs/2305.06917v2
- Date: Mon, 2 Oct 2023 18:23:26 GMT
- Authors: Vidya A. Chhabria, Wenjing Jiang, Andrew B. Kahng, Sachin S.
Sapatnekar
- Score: 3.202646674984817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the unavailability of routing information in design stages prior to
detailed routing (DR), the tasks of timing prediction and optimization pose
major challenges. Inaccurate timing prediction wastes design effort, hurts
circuit performance, and may lead to design failure. This work focuses on
timing prediction after clock tree synthesis and placement legalization, which
is the earliest opportunity to time and optimize a "complete" netlist. The
paper first documents that having "oracle knowledge" of the final post-DR
parasitics enables post-global routing (GR) optimization to produce improved
final timing outcomes. To bridge the gap between GR-based parasitic and timing
estimation and post-DR results during post-GR optimization, machine learning
(ML)-based models are proposed, including macro-blockage features that enable
accurate predictions for designs with macros. Based on a set of
experimental evaluations, it is demonstrated that these models show higher
accuracy than GR-based timing estimation. When used during post-GR
optimization, the ML-based models show demonstrable improvements in post-DR
circuit performance. The methodology is applied to two different tool flows -
OpenROAD and a commercial tool flow - and results on 45nm bulk and 12nm FinFET
enablements show improvements in post-DR slack metrics without increasing
congestion. The models are demonstrated to be generalizable to designs
generated under different clock period constraints and are robust to training
data with small levels of noise.
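To illustrate the idea behind the abstract, here is a minimal sketch (not the paper's implementation) of an ML model that corrects GR-stage timing estimates toward post-DR values. The feature names, the synthetic data, and the least-squares model are all hypothetical; the point is only that adding a macro-blockage feature can close the gap that a GR-only estimate leaves open.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-net GR-stage features (names are illustrative):
gr_len = rng.uniform(1.0, 500.0, n)       # GR-estimated wirelength (um)
fanout = rng.integers(1, 16, n).astype(float)
macro_overlap = rng.uniform(0.0, 1.0, n)  # fraction of net bbox blocked by macros

# Synthetic "post-DR delay": the GR estimate plus detour penalties that grow
# with macro blockage and fanout, mimicking GR-vs-DR divergence, plus noise.
y = (0.8 * gr_len
     + 40.0 * macro_overlap * np.sqrt(gr_len)  # macro-induced detours
     + 2.0 * fanout                            # sink loading
     + rng.normal(0.0, 5.0, n))

# Baseline "GR-only" prediction ignores macro blockage entirely.
gr_pred = 0.8 * gr_len

# Learned model: least squares over GR features plus a macro-blockage
# interaction term, analogous to adding macro-blockage features to the model.
A = np.column_stack([gr_len, fanout, macro_overlap * np.sqrt(gr_len), np.ones(n)])
train, test = slice(0, 1500), slice(1500, None)
coef, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
ml_pred = A[test] @ coef

ml_mae = np.mean(np.abs(ml_pred - y[test]))
gr_mae = np.mean(np.abs(gr_pred[test] - y[test]))
print(f"ML MAE: {ml_mae:.2f}  GR-only MAE: {gr_mae:.2f}")
```

On held-out nets the learned model tracks the synthetic post-DR delay far more closely than the raw GR estimate, because the blockage-aware term captures detours the GR estimate cannot see; the paper's models play the same role with real GR and post-DR data.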
Related papers
- TrackFormers: In Search of Transformer-Based Particle Tracking for the High-Luminosity LHC Era [2.9052912091435923]
High-Energy Physics experiments are facing a multi-fold data increase with every new iteration.
One such step in need of an overhaul is the task of particle track reconstruction, a.k.a. tracking.
A Machine Learning-assisted solution is expected to provide significant improvements.
arXiv Detail & Related papers (2024-07-09T18:47:25Z)
- Align Your Steps: Optimizing Sampling Schedules in Diffusion Models [63.927438959502226]
Diffusion models (DMs) have established themselves as the state-of-the-art generative modeling approach in the visual domain and beyond.
A crucial drawback of DMs is their slow sampling speed, relying on many sequential function evaluations through large neural networks.
We propose a general and principled approach to optimizing the sampling schedules of DMs for high-quality outputs.
arXiv Detail & Related papers (2024-04-22T18:18:41Z)
- A Transformer-based Framework For Multi-variate Time Series: A Remaining Useful Life Prediction Use Case [4.0466311968093365]
This work proposed an encoder-transformer architecture-based framework for time series prediction.
We validated the effectiveness of the proposed framework on all four sets of the C-MAPSS benchmark dataset.
To enable the model awareness of the initial stages of the machine life and its degradation path, a novel expanding window method was proposed.
arXiv Detail & Related papers (2023-08-19T02:30:35Z)
- GraPhSyM: Graph Physical Synthesis Model [21.568740364211983]
We introduce GraPhSyM, a Graph Attention Network (GATv2) model for estimation of post-physical circuit delay and area metrics from pre-physical synthesis circuit netlists.
GraPhSyM provides accurate visibility of final design metrics to early EDA stages, enabling global co-optimization across stages.
arXiv Detail & Related papers (2023-08-07T23:19:34Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- GC-GRU-N for Traffic Prediction using Loop Detector Data [5.735035463793008]
We use Seattle loop detector data aggregated over 15 minutes and reframe the problem through space time.
The model ranked second, with the fastest inference time and performance very close to first place (Transformers).
arXiv Detail & Related papers (2022-11-13T06:32:28Z)
- Efficient Graph Neural Network Inference at Large Scale [54.89457550773165]
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications.
Existing scalable GNNs leverage linear propagation to preprocess the features and accelerate the training and inference procedure.
We propose a novel adaptive propagation order approach that generates the personalized propagation order for each node based on its topological information.
arXiv Detail & Related papers (2022-11-01T14:38:18Z)
- Training Robust Deep Models for Time-Series Domain: Novel Algorithms and Theoretical Analysis [32.45387153404849]
We propose a novel framework referred as RObust Training for Time-Series (RO-TS) to create robust DNNs for time-series classification tasks.
We show the generality and advantages of our formulation using the summation structure over time-series alignments.
Our experiments on real-world benchmarks demonstrate that RO-TS creates more robust DNNs when compared to adversarial training.
arXiv Detail & Related papers (2022-07-09T17:21:03Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- PnPNet: End-to-End Perception and Prediction with Tracking in the Loop [82.97006521937101]
We tackle the problem of joint perception and motion forecasting in the context of self-driving vehicles.
We propose PnPNet, an end-to-end model that takes sensor data as input and outputs, at each time step, object tracks and their future trajectories.
arXiv Detail & Related papers (2020-05-29T17:57:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.