Cell Library Characterization for Composite Current Source Models Based on Gaussian Process Regression and Active Learning
- URL: http://arxiv.org/abs/2505.10799v2
- Date: Fri, 23 May 2025 09:18:26 GMT
- Title: Cell Library Characterization for Composite Current Source Models Based on Gaussian Process Regression and Active Learning
- Authors: Tao Bai, Junzhuo Zhou, Zeyuan Deng, Ting-Jung Lin, Wei Xing, Peng Cao, Lei He
- Abstract summary: The high accuracy requirement, large amount of data, and extensive simulation cost pose severe challenges to CCS characterization. We introduce a novel Gaussian Process Regression (GPR) model with active learning (AL) to establish the characterization framework efficiently and accurately. Our approach significantly outperforms conventional commercial tools as well as learning-based approaches, achieving an average absolute error of 2.05 ps and a relative error of 2.27% for the current waveforms of 57 cells under 9 process, voltage, temperature (PVT) corners with the TSMC 22nm process.
- Score: 11.223940154559635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The composite current source (CCS) model has been adopted as an advanced timing model that represents the current behavior of cells for improved accuracy and better capability than traditional non-linear delay models (NLDM) to model complex dynamic effects and interactions under advanced process nodes. However, the high accuracy requirement, large amount of data, and extensive simulation cost pose severe challenges to CCS characterization. To address these challenges, we introduce a novel Gaussian Process Regression (GPR) model with active learning (AL) to establish the characterization framework efficiently and accurately. Our approach significantly outperforms conventional commercial tools as well as learning-based approaches by achieving an average absolute error of 2.05 ps and a relative error of 2.27% for the current waveforms of 57 cells under 9 process, voltage, temperature (PVT) corners with the TSMC 22nm process. Additionally, our model drastically reduces the runtime to 27% and the storage by up to 19.5x compared with that required by commercial tools.
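For intuition, the sketch below shows one minimal way such a GPR-with-active-learning loop could be set up with scikit-learn. It is not the authors' implementation; the simulator stub, feature grid, and loop sizes (`run_spice`, `candidate_pool`, `N_INIT`, `N_ROUNDS`) are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): GPR surrogate with uncertainty-driven
# active learning. The simulator and candidate grid below are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def run_spice(x):
    """Placeholder for a SPICE run returning one current-waveform sample."""
    return np.sin(x).sum()  # stand-in for the true simulator


rng = np.random.default_rng(0)
candidate_pool = rng.uniform(0.0, 1.0, size=(500, 3))  # e.g. slew, load, time point
N_INIT, N_ROUNDS = 20, 10

# Seed the surrogate with a small set of simulated points.
idx = rng.choice(len(candidate_pool), N_INIT, replace=False)
X = candidate_pool[idx]
y = np.array([run_spice(x) for x in X])

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)

for _ in range(N_ROUNDS):
    gpr.fit(X, y)
    # Acquisition: simulate the candidate the surrogate is least certain about.
    _, std = gpr.predict(candidate_pool, return_std=True)
    best = int(np.argmax(std))
    X = np.vstack([X, candidate_pool[best]])
    y = np.append(y, run_spice(candidate_pool[best]))
```

The acquisition rule here (largest predictive standard deviation) is only one common choice; the paper's selection criterion and feature encoding may differ.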
Related papers
- Online high-precision prediction method for injection molding product weight by integrating time series/non-time series mixed features and feature attention mechanism [3.5881814709064934]
This study proposes a mixed feature attention-artificial neural network (MFA-ANN) model for high-precision online prediction of product weight. Results demonstrate that the MFA-ANN model achieves an RMSE of 0.0281 with a 0.5 g weight fluctuation tolerance, outperforming conventional benchmarks.
arXiv Detail & Related papers (2025-06-23T08:40:50Z) - Lightweight Task-Oriented Semantic Communication Empowered by Large-Scale AI Models [66.57755931421285]
Large-scale artificial intelligence (LAI) models pose significant challenges for real-time communication scenarios. This paper proposes utilizing knowledge distillation (KD) techniques to extract and condense knowledge from LAI models. We propose a fast distillation method featuring a pre-stored compression mechanism that eliminates the need for repetitive inference.
arXiv Detail & Related papers (2025-06-16T08:42:16Z) - Flow-GRPO: Training Flow Matching Models via Online RL [75.70017261794422]
We propose Flow-GRPO, the first method integrating online reinforcement learning (RL) into flow matching models. Our approach uses two key strategies: (1) an ODE-to-SDE conversion that transforms a deterministic Ordinary Differential Equation (ODE) into an equivalent Stochastic Differential Equation (SDE) that matches the original model's marginal distribution at all timesteps; and (2) a Denoising Reduction strategy that reduces training denoising steps while retaining the original inference timestep number.
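For context, one standard way to write such an ODE-to-SDE conversion, following the score-based SDE literature (the exact parameterization used by Flow-GRPO may differ), is:

```latex
\[
\underbrace{\mathrm{d}x_t = v_\theta(x_t, t)\,\mathrm{d}t}_{\text{probability-flow ODE}}
\;\longrightarrow\;
\mathrm{d}x_t = \Big[v_\theta(x_t, t) + \tfrac{\sigma_t^2}{2}\,\nabla_x \log p_t(x_t)\Big]\mathrm{d}t
+ \sigma_t\,\mathrm{d}w_t .
\]
```

Both processes share the same marginal distributions $p_t$ for any noise scale $\sigma_t$, because the added score-based drift exactly cancels the diffusion term in the Fokker-Planck equation; the injected stochasticity is what enables exploration for online RL without changing the model's distribution.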
arXiv Detail & Related papers (2025-05-08T17:58:45Z) - One-Step Diffusion Distillation through Score Implicit Matching [74.91234358410281]
We present Score Implicit Matching (SIM), a new approach to distilling pre-trained diffusion models into single-step generator models.
SIM shows strong empirical performance for one-step generators.
By applying SIM to a leading transformer-based diffusion model, we distill a single-step generator for text-to-image generation.
arXiv Detail & Related papers (2024-10-22T08:17:20Z) - Probabilistic Surrogate Model for Accelerating the Design of Electric Vehicle Battery Enclosures for Crash Performance [0.0]
This paper presents a probabilistic surrogate model for the accelerated design of electric vehicle battery enclosures with a focus on crash performance.
The model was trained using data generated from thermoforming and crash simulations over a range of material and process parameters.
Validation against new simulation data demonstrated the model's predictive accuracy, with mean absolute percentage errors within 8.08% for all output variables.
arXiv Detail & Related papers (2024-08-06T21:03:16Z) - Fast Cell Library Characterization for Design Technology Co-Optimization Based on Graph Neural Networks [0.1752969190744922]
Design technology co-optimization (DTCO) plays a critical role in achieving optimal power, performance, and area.
We propose a graph neural network (GNN)-based machine learning model for rapid and accurate cell library characterization.
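As a rough, hypothetical illustration of what a GNN-based cell-characterization regressor can look like (not the paper's architecture; node features, layer sizes, and the mean-pool readout are assumptions), using PyTorch Geometric:

```python
# Hypothetical sketch of a GNN regressor over a cell's transistor-level graph;
# dimensions and readout are illustrative assumptions, not the paper's design.
import torch
from torch_geometric.nn import GCNConv, global_mean_pool


class CellCharGNN(torch.nn.Module):
    def __init__(self, in_dim=8, hidden=64, out_dim=1):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, out_dim)  # e.g. a delay or power target

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return self.head(global_mean_pool(h, batch))  # one prediction per cell graph
```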
arXiv Detail & Related papers (2023-12-20T06:10:27Z) - Test-Time Adaptation Induces Stronger Accuracy and Agreement-on-the-Line [65.14099135546594]
Recent test-time adaptation (TTA) methods drastically strengthen the accuracy-on-the-line (ACL) and agreement-on-the-line (AGL) trends in models, even in shifts where models showed very weak correlations before.
Our results show that by combining TTA with AGL-based estimation methods, we can estimate the OOD performance of models with high precision for a broader set of distribution shifts.
arXiv Detail & Related papers (2023-10-07T23:21:25Z) - Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z) - A machine learning approach to the prediction of heat-transfer coefficients in micro-channels [4.724825031148412]
The accurate prediction of the two-phase heat transfer coefficient (HTC) is key to the optimal design and operation of compact heat exchangers.
We use a multi-output Gaussian process regression (GPR) to estimate the HTC in microchannels as a function of the mass flow rate, heat flux, system pressure and channel diameter and length.
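As an illustrative sketch only (not the authors' model), a multi-output GPR over those five inputs can be written with scikit-learn, which accepts 2-D targets directly; the kernel choice and the random data below are placeholders.

```python
# Illustrative only: multi-output GPR mapping (mass flow rate, heat flux,
# pressure, diameter, length) to HTC-related targets. Data and kernel are placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

X_train = np.random.rand(200, 5)   # [m_dot, q_flux, p_sys, D, L], normalized
Y_train = np.random.rand(200, 2)   # e.g. HTC at two reference conditions

gpr = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gpr.fit(X_train, Y_train)          # scikit-learn shares one kernel across output columns
mean, std = gpr.predict(np.random.rand(10, 5), return_std=True)
```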
arXiv Detail & Related papers (2023-05-28T15:48:01Z) - Environment Transformer and Policy Optimization for Model-Based Offline Reinforcement Learning [25.684201757101267]
We propose an uncertainty-aware sequence modeling architecture called Environment Transformer.
Benefiting from the accurate modeling of the transition dynamics and reward function, Environment Transformer can be combined with arbitrary planning, dynamic programming, or policy optimization algorithms for offline RL.
arXiv Detail & Related papers (2023-03-07T11:26:09Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)