Seeking the Yield Barrier: High-Dimensional SRAM Evaluation Through Optimal Manifold
- URL: http://arxiv.org/abs/2307.15773v1
- Date: Fri, 28 Jul 2023 19:21:39 GMT
- Title: Seeking the Yield Barrier: High-Dimensional SRAM Evaluation Through Optimal Manifold
- Authors: Yanfang Liu, Guohao Dai, and Wei W. Xing
- Abstract summary: We develop a novel yield estimation method, named Optimal Manifold Important Sampling (OPTIMIS).
OPTIMIS delivers state-of-the-art performance with robustness and consistency, with up to 3.5x improvement in efficiency and 3x in accuracy over the best SOTA methods in high-dimensional evaluation.
- Score: 3.258560324501261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to efficiently obtain an accurate estimate of the failure probability of SRAM components has become a central issue as circuits shrink to sub-micrometer scale with advanced technology nodes. In this work, we revisit the classic norm minimization method. We then generalize it with infinite components and derive the novel optimal manifold concept, which bridges the surrogate-based and importance sampling (IS) yield estimation methods. From this we derive a sub-optimal manifold, the optimal hypersphere, which leads to an efficient, failure-boundary-aware sampling method called onion sampling. Finally, we use a neural coupling flow (which learns from samples like a surrogate model) as the IS proposal distribution. These components give rise to a novel yield estimation method, named Optimal Manifold Important Sampling (OPTIMIS), which keeps the advantages of both the surrogate and IS methods and delivers state-of-the-art performance with robustness and consistency: up to 3.5x improvement in efficiency and 3x in accuracy over the best SOTA methods in high-dimensional SRAM evaluation.
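To make the estimator concrete: below is a minimal sketch of importance-sampling yield estimation with a mean-shifted Gaussian proposal centered on the norm-minimization point. The failure indicator, dimension, and shift are toy assumptions; OPTIMIS instead fits a neural coupling flow to boundary-aware onion samples as its proposal.

```python
# A minimal IS sketch, assuming a toy failure region and a hand-picked
# Gaussian proposal; the paper's SPICE-level SRAM model and flow-based
# proposal are NOT reproduced here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dim = 6                                    # process-variation dimension (toy)

def fails(x):
    # Hypothetical failure region: the rare corner where the summed
    # variation exceeds a threshold (stand-in for an SRAM failure check).
    return x.sum(axis=1) > 9.0

p = stats.multivariate_normal(mean=np.zeros(dim))   # nominal process dist.

# Proposal q: Gaussian centered on the norm-minimization point
# argmin ||x|| s.t. fails(x), which is (9/6, ..., 9/6) for this toy region.
q = stats.multivariate_normal(mean=np.full(dim, 9.0 / dim))

xs = q.rvs(size=20_000, random_state=rng)
weights = np.exp(p.logpdf(xs) - q.logpdf(xs))       # density ratio p/q
p_fail = np.mean(fails(xs) * weights)               # unbiased IS estimate
print(f"P(fail) ~ {p_fail:.3e}")                    # exact: 1 - Phi(9/sqrt(6))
```

The estimator itself, the indicator-weighted average of density ratios, is the common backbone that any IS proposal, Gaussian or flow-based, plugs into.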
Related papers
- Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation [63.66719748453878]
Group max-min fairness (MMF) is commonly used in fairness-aware recommender systems (RS) as an optimization objective.
We present an efficient and effective algorithm named FairDual, which utilizes a dual optimization technique to minimize the Jensen gap.
Our theoretical analysis demonstrates that FairDual can achieve a sub-linear convergence rate to the globally optimal solution.
arXiv Detail & Related papers (2025-02-13T13:33:45Z)
- Reward-Guided Speculative Decoding for Efficient LLM Reasoning [80.55186052123196]
We introduce Reward-Guided Speculative Decoding (RSD), a novel framework aimed at improving the efficiency of inference in large language models (LLMs).
RSD incorporates a controlled bias to prioritize high-reward outputs, in contrast to existing speculative decoding methods that enforce strict unbiasedness.
RSD delivers significant efficiency gains over decoding with the target model alone, while achieving significantly better accuracy than parallel decoding methods on average.
arXiv Detail & Related papers (2025-01-31T17:19:57Z)
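As a rough illustration of the RSD idea, the toy loop below accepts a cheap draft step whenever a hypothetical reward model scores it above a threshold and otherwise falls back to the expensive target model; all three stand-in functions and the threshold rule are assumptions, not the paper's models or acceptance criterion.

```python
# Toy sketch only: draft/target "models" and the reward are hypothetical
# one-liners illustrating reward-guided acceptance.
import random

def draft_step(ctx):  return ctx + random.choice("ab")   # cheap draft model
def target_step(ctx): return ctx + "a"                   # expensive target model
def reward(ctx):      return ctx.count("a") / len(ctx)   # reward model score

def rsd_decode(ctx, steps, threshold=0.5):
    target_calls = 0
    for _ in range(steps):
        candidate = draft_step(ctx)
        if reward(candidate) >= threshold:   # controlled bias toward high reward
            ctx = candidate                  # accept draft, skip target model
        else:
            ctx = target_step(ctx)           # fall back to the target model
            target_calls += 1
    return ctx, target_calls

random.seed(0)
out, calls = rsd_decode("a", steps=20)
print(out, f"-- target calls: {calls}/20")
```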
- Distributionally Robust Optimization as a Scalable Framework to Characterize Extreme Value Distributions [22.765095010254118]
The goal of this paper is to develop distributionally robust optimization (DRO) estimators, specifically for multidimensional Extreme Value Theory (EVT) statistics.
In order to mitigate over-conservative estimates while enhancing out-of-sample performance, we study DRO estimators informed by semi-parametric max-stable constraints in the space of point processes.
Both approaches are validated using synthetically generated data, recovering prescribed characteristics, and verifying the efficacy of the proposed techniques.
arXiv Detail & Related papers (2024-07-31T19:45:27Z)
- M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy [26.227927019615446]
Training state-of-the-art (SOTA) deep models often requires extensive data, resulting in substantial training and storage costs.
Dataset condensation has been developed to learn a small synthetic set that preserves essential information from the original large-scale dataset.
We present a novel distribution matching (DM) based method named M3D for dataset condensation by Minimizing the Maximum Mean Discrepancy.
arXiv Detail & Related papers (2023-12-26T07:45:32Z)
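A minimal sketch of the condensation objective, assuming raw-feature matching with a fixed-bandwidth Gaussian kernel (M3D matches learned embeddings): a small synthetic set is optimized by gradient descent to shrink its squared MMD to the real data.

```python
# Hedged sketch: toy dataset condensation by MMD minimization; the data,
# kernel bandwidth, and optimizer settings are assumptions.
import torch

def mmd2(x, y, sigma=1.0):
    # Biased MMD^2 estimate with an RBF kernel k(a,b)=exp(-||a-b||^2/2s^2).
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

torch.manual_seed(0)
real = torch.randn(512, 8) + 2.0                  # "large" real dataset (toy)
synth = torch.randn(16, 8, requires_grad=True)    # small learnable set

opt = torch.optim.Adam([synth], lr=0.1)
for step in range(200):
    opt.zero_grad()
    loss = mmd2(synth, real)
    loss.backward()
    opt.step()
print(f"final MMD^2: {loss.item():.4f}")          # synth drifts toward the real data
```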
- Provably Efficient Bayesian Optimization with Unknown Gaussian Process Hyperparameter Estimation [44.53678257757108]
We propose a new BO method that can sub-linearly converge to the objective function's global optimum.
Our method uses a multi-armed bandit technique (EXP3) to add random data points to the BO process.
We demonstrate empirically that our method outperforms existing approaches on various synthetic and real-world problems.
arXiv Detail & Related papers (2023-06-12T03:35:45Z)
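EXP3 itself is a standard adversarial-bandit algorithm; the sketch below shows its probability and weight updates on a synthetic three-armed bandit. How arms map to added data points inside the BO loop is specific to the paper and not reproduced here.

```python
# Hedged sketch of the EXP3 update; arms and rewards here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
K, T, gamma = 3, 2000, 0.1
weights = np.ones(K)
means = np.array([0.2, 0.5, 0.8])         # hidden arm reward means (toy)

for t in range(T):
    probs = (1 - gamma) * weights / weights.sum() + gamma / K
    arm = rng.choice(K, p=probs)
    reward = rng.binomial(1, means[arm])  # observe a reward in [0, 1]
    est = reward / probs[arm]             # importance-weighted reward estimate
    weights[arm] *= np.exp(gamma * est / K)

print(probs.round(3))                     # mass concentrates on the best arm
```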
- Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates the estimation and planning components while automatically balancing exploration and exploitation.
It outperforms baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- High-Dimensional Yield Estimation using Shrinkage Deep Features and Maximization of Integral Entropy Reduction [0.8522010776600341]
We propose absolute shrinkage deep kernel learning (ASDK), which automatically identifies the dominant process variation parameters in a nonlinearly correlated deep kernel.
Experiments on SRAM column circuits demonstrate the superiority of ASDK over the state-of-the-art (SOTA) approaches in terms of accuracy and efficiency, with up to 10.3x speedup over SOTA methods.
arXiv Detail & Related papers (2022-12-05T08:39:41Z)
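A hedged sketch in the spirit of shrinkage deep kernel learning: an MLP warps inputs before an RBF kernel inside a GP marginal-likelihood objective, with an L1 penalty on the first layer so that weights of unimportant input dimensions shrink. Architecture, penalty weight, and data are assumptions, not the paper's ASDK.

```python
# Toy deep kernel with input shrinkage; all hyperparameters are assumptions.
import torch

torch.manual_seed(0)
n, d = 64, 10
X = torch.randn(n, d)
y = torch.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * torch.randn(n)  # 2 active dims

net = torch.nn.Sequential(torch.nn.Linear(d, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 2))
log_noise = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam(list(net.parameters()) + [log_noise], lr=0.01)

for step in range(300):
    opt.zero_grad()
    z = net(X)                                     # learned deep features
    K = torch.exp(-0.5 * torch.cdist(z, z) ** 2)   # RBF kernel on features
    K = K + torch.exp(log_noise) * torch.eye(n)
    # Gaussian-process negative log marginal likelihood (constants dropped).
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(1), L)
    nll = 0.5 * y @ alpha.squeeze() + torch.log(torch.diagonal(L)).sum()
    loss = nll + 0.1 * net[0].weight.abs().sum()   # L1 shrinkage on inputs
    loss.backward()
    opt.step()

print(net[0].weight.abs().sum(dim=0))  # large entries flag dominant inputs
```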
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
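For reference, a minimal AIS estimator of a 1-D normalizing constant using fixed, evenly spaced geometric bridges and one Metropolis refresh per step; the paper's contribution, optimizing the bridging distributions, is exactly what this sketch leaves fixed.

```python
# Hedged AIS sketch on a toy 1-D target; schedule and kernel are assumptions.
import numpy as np

rng = np.random.default_rng(0)

log_prior  = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)   # N(0,1)
log_target = lambda x: -0.5 * ((x - 3.0) / 0.5)**2             # unnormalized N(3, 0.25)

n, steps = 2000, 50
betas = np.linspace(0.0, 1.0, steps + 1)
x = rng.standard_normal(n)                # exact samples from the prior
logw = np.zeros(n)

def log_gamma(x, b):                      # geometric bridge at inverse temp b
    return (1 - b) * log_prior(x) + b * log_target(x)

for b0, b1 in zip(betas[:-1], betas[1:]):
    logw += log_gamma(x, b1) - log_gamma(x, b0)   # incremental IS weight
    # One Metropolis step targeting the bridge at b1 to refresh the samples.
    prop = x + 0.5 * rng.standard_normal(n)
    accept = np.log(rng.random(n)) < log_gamma(prop, b1) - log_gamma(x, b1)
    x = np.where(accept, prop, x)

z_hat = np.exp(logw - np.log(n)).sum()    # AIS estimate of the constant Z
print(f"Z estimate: {z_hat:.3f}, true: {np.sqrt(2 * np.pi) * 0.5:.3f}")
```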
- Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity-regularized models and apply it to shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves the linear convergence rate with lower overall complexity and can eliminate almost all the inactive features in a finite number of iterations almost surely.
arXiv Detail & Related papers (2022-04-23T02:45:55Z)
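To ground the screening idea, here is the classic static gap-safe sphere test for the Lasso, evaluated at beta = 0 on a synthetic problem; DDSS is a dynamic, distributed refinement of this style of rule, and the instance below is an assumption.

```python
# Hedged sketch of gap-safe screening for the Lasso
# (objective 0.5*||y - X b||^2 + lam*||b||_1); problem data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 200
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=0)                 # unit-norm columns
y = X[:, :3] @ np.array([2.0, -1.5, 1.0]) + 0.1 * rng.standard_normal(n)

lam_max = np.abs(X.T @ y).max()                # smallest lam giving beta* = 0
lam = 0.9 * lam_max
beta = np.zeros(d)                             # any primal point works
resid = y - X @ beta
theta = resid / max(lam, np.abs(X.T @ resid).max())   # dual-feasible point

gap = (0.5 * resid @ resid + lam * np.abs(beta).sum()) \
      - (0.5 * y @ y - 0.5 * lam**2 * ((theta - y / lam) ** 2).sum())
radius = np.sqrt(2 * gap) / lam
# Sphere test: these coordinates are provably zero at the optimum.
screened = np.abs(X.T @ theta) + radius < 1.0  # ||x_j|| = 1 after scaling
print(f"safely discarded {screened.sum()} of {d} features")
```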
This list is automatically generated from the titles and abstracts of the papers on this site.