BoA-PTA, A Bayesian Optimization Accelerated Error-Free SPICE Solver
- URL: http://arxiv.org/abs/2108.00257v1
- Date: Sat, 31 Jul 2021 14:58:22 GMT
- Title: BoA-PTA, A Bayesian Optimization Accelerated Error-Free SPICE Solver
- Authors: Wei W. Xing, Xiang Jin, Yi Liu, Dan Niu, Weishen Zhao, Zhou Jin
- Abstract summary: Pseudo transient analysis (PTA) has been shown to be one of the most promising continuation SPICE solvers.
We propose BoA-PTA, a Bayesian optimization accelerated PTA that can substantially accelerate simulations and improve convergence performance without introducing extra errors.
We assess BoA-PTA in 43 benchmark circuits against other SOTA SPICE solvers and demonstrate an average 2.3x (maximum 3.5x) speed-up over the original CEPTA.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the greatest challenges in IC design is the repeated executions of
computationally expensive SPICE simulations, particularly when highly complex
chip testing/verification is involved. Recently, pseudo transient analysis
(PTA) has been shown to be one of the most promising continuation SPICE solvers.
However, the PTA efficiency is highly influenced by the inserted
pseudo-parameters. In this work, we propose BoA-PTA, a Bayesian optimization
accelerated PTA that can substantially accelerate simulations and improve
convergence performance without introducing extra errors. Furthermore, our
method does not require any pre-computation data or offline training. The
acceleration framework can either be implemented to speed up ongoing repeated
simulations immediately or to improve new simulations of completely different
circuits. BoA-PTA is equipped with cutting-edge machine learning techniques,
e.g., deep learning, Gaussian process, Bayesian optimization, non-stationary
monotonic transformation, and variational inference via parameterization. We
assess BoA-PTA in 43 benchmark circuits against other SOTA SPICE solvers and
demonstrate an average 2.3x (maximum 3.5x) speed-up over the original CEPTA.
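The core idea, using Bayesian optimization to pick the next pseudo-parameter for an expensive black-box solver, can be sketched as a plain GP-with-expected-improvement loop. This is a minimal illustration, not BoA-PTA itself: the quadratic `simulate` stand-in and the parameter name `log_tau` are hypothetical, and the real method additionally uses a deep kernel, a non-stationary monotonic transformation, and variational inference.

```python
import numpy as np
from math import erf

def simulate(log_tau):
    """Stand-in for one PTA run: cost (e.g. iteration count) as a
    function of the inserted pseudo-parameter. Hypothetical shape."""
    return (log_tau - 1.3) ** 2 + 5.0

def rbf_kernel(a, b, length=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    # standard GP regression equations
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_train
    var = np.diag(rbf_kernel(x_test, x_test) - Ks.T @ K_inv @ Ks)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # minimization: expected improvement over the incumbent best cost
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * cdf + sigma * pdf

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=3)                 # initial random evaluations
y = np.array([simulate(v) for v in x])
grid = np.linspace(-3, 3, 200)
for _ in range(10):                            # BO loop: fit, acquire, evaluate
    mu, var = gp_posterior(x, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
    x = np.append(x, x_next)
    y = np.append(y, simulate(x_next))
print(f"best log_tau ~ {x[np.argmin(y)]:.2f}, cost {y.min():.3f}")
```

Because the loop needs no pre-computed training data, it matches the paper's claim of working online on ongoing simulations.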
Related papers
- Adaptive Surrogate-Based Strategy for Accelerating Convergence Speed when Solving Expensive Unconstrained Multi-Objective Optimisation Problems [41.99844472131922]
We propose an adaptive surrogate modelling approach designed to accelerate the early-stage convergence speed of state-of-the-art MOEAs. This is important because it ensures that a solver can identify optimal or near-optimal solutions with relatively few fitness function evaluations. Our approach was tested on 31 widely known benchmark problems and a real-world North Sea fish abundance modelling case study.
arXiv Detail & Related papers (2026-01-29T15:46:52Z) - Exploring Parallelism in FPGA-Based Accelerators for Machine Learning Applications [0.0]
Speculative backpropagation has emerged as a promising technique to accelerate the training of neural networks by overlapping the forward and backward passes. We implement speculative backpropagation on the MNIST dataset using OpenMP as the parallel programming platform.
arXiv Detail & Related papers (2025-11-09T05:05:05Z) - PT$^2$-LLM: Post-Training Ternarization for Large Language Models [52.4629647715623]
Large Language Models (LLMs) have shown impressive capabilities across diverse tasks, but their large memory and compute demands hinder deployment. We propose PT$^2$-LLM, a post-training ternarization framework tailored for LLMs. At its core is an Asymmetric Ternary Quantizer equipped with a two-stage refinement pipeline.
arXiv Detail & Related papers (2025-09-27T03:01:48Z) - Adaptive Bayesian Data-Driven Design of Reliable Solder Joints for Micro-electronic Devices [0.0]
Solder joint reliability related to failures due to thermomechanical loading is a critically important yet physically complex engineering problem. In an increasingly data-driven world, the usage of efficient data-driven design schemes is a popular choice. The authors argue that computational savings can be obtained from exploiting thorough surrogate modeling.
arXiv Detail & Related papers (2025-07-25T20:34:03Z) - Can Prompt Difficulty be Online Predicted for Accelerating RL Finetuning of Reasoning Models? [65.18157595903124]
This work investigates iterative approximate evaluation for arbitrary prompts. It introduces Model Predictive Prompt Selection (MoPPS), a Bayesian risk-predictive framework. MoPPS reliably predicts prompt difficulty and accelerates training with significantly reduced rollouts.
arXiv Detail & Related papers (2025-07-07T03:20:52Z) - Accelerating Model-Based Reinforcement Learning using Non-Linear Trajectory Optimization [2.1386708011362257]
This paper addresses the slow policy optimization convergence of Monte Carlo Probabilistic Inference for Learning Control (MC-PILCO) by integrating it with the iterative Linear Quadratic Regulator (iLQR), a fast trajectory optimization method suitable for nonlinear systems. Experiments on the cart-pole task demonstrate that EB-MC-PILCO accelerates convergence compared to standard MC-PILCO.
arXiv Detail & Related papers (2025-06-03T11:30:59Z) - Joint Transmit and Pinching Beamforming for Pinching Antenna Systems (PASS): Optimization-Based or Learning-Based? [89.05848771674773]
A novel pinching antenna system (PASS)-enabled downlink multi-user multiple-input single-output (MISO) framework is proposed.
It consists of multiple waveguides equipped with numerous low-cost pinching antennas (PAs).
The positions of the PAs can be reconfigured to span both the large-scale path and space.
arXiv Detail & Related papers (2025-02-12T18:54:10Z) - Machine-learning-based multipoint optimization of fluidic injection parameters for improving nozzle performance [2.5864426808687893]
This paper uses a pretrained neural network model to replace computational fluid dynamic (CFD) simulations.
Considering the physical characteristics of the nozzle flow field, a prior-based prediction strategy is adopted to enhance the model's transferability.
An improvement in the thrust coefficient of 1.14% is achieved, and the time cost is greatly reduced compared with the traditional optimization methods.
arXiv Detail & Related papers (2024-09-19T12:32:54Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Parallel Bayesian Optimization Using Satisficing Thompson Sampling for
Time-Sensitive Black-Box Optimization [0.0]
We propose satisficing Thompson sampling-based parallel BO approaches, including synchronous and asynchronous versions.
We shift the target from an optimal solution to a satisficing solution that is easier to learn.
The effectiveness of the proposed methods is demonstrated on a fast-charging design problem of Lithium-ion batteries.
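The satisficing idea, accepting any arm whose posterior sample clears a "good enough" threshold rather than insisting on the sampled optimum, can be sketched with a toy Gaussian bandit. The arm means, threshold, and batch size below are all hypothetical, and the inner worker loop only loosely imitates the paper's synchronous parallel variant.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.2, 0.5, 0.65, 0.7])   # hypothetical arm qualities
threshold = 0.6                                 # "good enough" level
counts = np.zeros(4)
sums = np.zeros(4)

def thompson_sample(k):
    # Gaussian posterior under a N(0, 1) prior and unit observation noise
    n = counts[k]
    return rng.normal(sums[k] / (n + 1), np.sqrt(1.0 / (n + 1)))

batch = 4                                       # parallel workers per round
for _ in range(200):
    picks = []
    for _w in range(batch):
        samples = np.array([thompson_sample(k) for k in range(4)])
        ok = np.flatnonzero(samples >= threshold)
        # satisficing: any arm sampled above the threshold will do;
        # fall back to the best sample if none qualifies
        picks.append(int(rng.choice(ok)) if len(ok) else int(np.argmax(samples)))
    for k in picks:                             # synchronous posterior update
        counts[k] += 1
        sums[k] += true_means[k] + 0.1 * rng.normal()
print(counts)
```

Learning "anything above the threshold" needs less posterior resolution than learning the single best arm, which is where the time savings come from.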
arXiv Detail & Related papers (2023-10-19T07:03:51Z) - High-Dimensional Yield Estimation using Shrinkage Deep Features and
Maximization of Integral Entropy Reduction [0.8522010776600341]
We propose ASDK, an absolute shrinkage deep kernel learning approach that automatically identifies the dominant process variation parameters in a nonlinear-correlated deep kernel.
Experiments on column circuits demonstrate the superiority of ASDK over the state-of-the-art (SOTA) approaches in terms of accuracy and efficiency with up to 10.3x speedup over SOTA methods.
arXiv Detail & Related papers (2022-12-05T08:39:41Z) - Advancing Model Pruning via Bi-level Optimization [89.88761425199598]
Iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding "winning tickets".
One-shot pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP.
We show that the proposed bi-level optimization (BLO)-oriented pruning method, termed BiP, solves a special class of BLO problems with a bi-linear problem structure.
arXiv Detail & Related papers (2022-10-08T19:19:29Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
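The contrast with unrolled RNN updates can be sketched in a few lines: a DEQ-style estimator runs the update cell to a fixed point rather than for a fixed number of steps. The contractive `tanh` cell below is a hypothetical stand-in for a flow-update module.

```python
import numpy as np

def update(z, x, w=0.5):
    # contractive cell: a toy stand-in for one recurrent flow refinement
    return np.tanh(w * z + x)

def deq_solve(x, tol=1e-8, max_iter=100):
    """Solve z = update(z, x) for its fixed point directly, instead of
    unrolling a fixed number of recurrent steps."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = update(z, x)
        if np.max(np.abs(z_next - z)) < tol:
            return z_next
        z = z_next
    return z

x = np.array([0.3, -1.0, 2.0])
z_star = deq_solve(x)
print(np.max(np.abs(update(z_star, x) - z_star)))  # residual is ~0
```

Because only the fixed point matters, gradients can be taken through the equilibrium condition via implicit differentiation, avoiding the memory cost of storing every unrolled step.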
arXiv Detail & Related papers (2022-04-18T17:53:44Z) - Online Convolutional Re-parameterization [51.97831675242173]
We present online convolutional re-parameterization (OREPA), a two-stage pipeline aiming to reduce the huge training overhead by squeezing the complex training-time block into a single convolution.
Compared with the state-of-the-art re-param models, OREPA is able to save the training-time memory cost by about 70% and accelerate the training speed by around 2x.
We also conduct experiments on object detection and semantic segmentation and show consistent improvements on the downstream tasks.
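The simplest instance of squeezing a training-time block into one convolution is folding a batch-norm layer into the preceding conv, which the following sketch verifies numerically. All weights and statistics here are made up; OREPA generalizes this idea to more complex multi-branch blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

# a toy 3x3 conv (2 output channels, 1 input channel) followed by batch norm
W = rng.normal(size=(2, 1, 3, 3))
gamma, beta = np.array([1.5, 0.7]), np.array([0.1, -0.2])
mean, var, eps = np.array([0.3, -0.1]), np.array([0.9, 1.1]), 1e-5

# fold BN into the conv: y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
scale = gamma / np.sqrt(var + eps)
W_folded = W * scale[:, None, None, None]
b_folded = beta - mean * scale

def conv2d(x, W, b):
    # naive valid cross-correlation; x has shape (in_ch, H, W)
    out_ch, _, kh, kw = W.shape
    h, w = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    y = np.zeros((out_ch, h, w))
    for o in range(out_ch):
        for i in range(h):
            for j in range(w):
                y[o, i, j] = np.sum(W[o] * x[:, i:i + kh, j:j + kw]) + b[o]
    return y

x = rng.normal(size=(1, 5, 5))
y_conv_bn = (conv2d(x, W, np.zeros(2)) - mean[:, None, None]) \
    * scale[:, None, None] + beta[:, None, None]
y_single = conv2d(x, W_folded, b_folded)
print(np.max(np.abs(y_conv_bn - y_single)))  # agreement up to float rounding
```

Since convolution is linear, the fold is exact: two operations collapse into one with no change in output.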
arXiv Detail & Related papers (2022-04-02T09:50:19Z) - Tuning Particle Accelerators with Safety Constraints using Bayesian
Optimization [73.94660141019764]
Tuning machine parameters of particle accelerators is a repetitive and time-consuming task.
We propose and evaluate a step size-limited variant of safe Bayesian optimization.
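A step size limit is easy to illustrate: each iteration may only propose settings within a fixed distance of the current safe operating point. The toy objective, safety check, and greedy acquisition below are hypothetical stand-ins for the paper's GP-based safe Bayesian optimization.

```python
import numpy as np

def objective(x):        # hypothetical beam-quality measure, to maximize
    return -(x - 2.0) ** 2

def is_safe(x):          # hypothetical hard safety constraint
    return abs(x) < 3.0

step_max = 0.4           # limit on how far a setting may move per iteration
x = 0.0                  # known-safe starting setting
for _ in range(25):
    # candidates restricted to a step-size-limited, safe neighbourhood
    cand = x + np.linspace(-step_max, step_max, 41)
    cand = cand[[is_safe(c) for c in cand]]
    # greedy stand-in for the acquisition step; a real safe-BO loop
    # would rank candidates by a GP posterior instead
    x = cand[np.argmax([objective(c) for c in cand])]
print(round(x, 3))
```

The limit trades convergence speed for operational safety: the optimum is reached in small, individually reversible moves rather than one large jump.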
arXiv Detail & Related papers (2022-03-26T02:21:03Z) - Federated Learning via Intelligent Reflecting Surface [30.935389187215474]
Over-the-air computation (AirComp)-based federated learning (FL) is capable of achieving fast model aggregation by exploiting the waveform superposition property of multiple access channels.
In this paper, we propose a two-step optimization framework to achieve fast yet reliable model aggregation for AirComp-based FL.
Simulation results will demonstrate that our proposed framework and the deployment of an IRS can achieve a lower training loss and higher FL prediction accuracy than the baseline algorithms.
arXiv Detail & Related papers (2020-11-10T11:29:57Z) - High Dimensional Bayesian Optimization Assisted by Principal Component
Analysis [4.030481609048958]
We introduce a novel PCA-assisted BO (PCA-BO) algorithm for high-dimensional numerical optimization problems.
We show that PCA-BO can effectively reduce the CPU time incurred on high-dimensional problems, and maintains the convergence rate on problems with an adequate global structure.
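The mechanics of PCA-BO, fitting a PCA basis to the evaluated points and searching in the reduced space, can be sketched as follows. The 10-D toy objective is hypothetical, random search stands in for the BO inner loop, and the real algorithm additionally weights the points by objective rank before the PCA.

```python
import numpy as np

def objective(x):
    # hypothetical 10-D objective that varies mostly along one direction
    d = np.ones(10) / np.sqrt(10)
    return (x @ d - 1.0) ** 2 + 0.01 * np.sum(x ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))                 # initial design
y = np.array([objective(row) for row in X])

# PCA basis from the evaluated points (PCA-BO weights the points
# by their objective rank before this step)
center = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - center, full_matrices=False)
basis = Vt[:2]                                # keep two principal components

# search in the reduced 2-D space, mapping back up for each evaluation
best = y.min()
for _ in range(100):
    z = rng.uniform(-3, 3, size=2)            # random search stands in for BO
    best = min(best, objective(center + z @ basis))
print(best)
```

Fitting a GP in 2 dimensions instead of 10 is what yields the reported CPU-time savings; the cost is that the optimum must lie close to the learned subspace.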
arXiv Detail & Related papers (2020-07-02T07:03:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.