A Deep-Learning-Aided Pipeline for Efficient Post-Silicon Tuning
- URL: http://arxiv.org/abs/2207.00336v1
- Date: Fri, 1 Jul 2022 11:04:53 GMT
- Title: A Deep-Learning-Aided Pipeline for Efficient Post-Silicon Tuning
- Authors: Yiwen Liao, Bin Yang, Raphaël Latty, Jochen Rivoir
- Abstract summary: In post-silicon validation, tuning means finding values for the tuning knobs, potentially as functions of process parameters and/or known operating conditions.
We leverage neural networks to efficiently select the most relevant variables and present a corresponding deep-learning-aided pipeline for efficient tuning.
- Score: 5.904240881373805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In post-silicon validation, tuning means finding values for the tuning
knobs, potentially as functions of process parameters and/or known operating
conditions. In this sense, more efficient tuning requires identifying the
most critical tuning knobs and process parameters in terms of a given
figure-of-merit for a Device Under Test (DUT). This is often done manually
by experienced experts. However, with increasingly complex chips, manual
inspection of a large number of raw variables has become more challenging. In
this work, we leverage neural networks to efficiently select the most relevant
variables and present a corresponding deep-learning-aided pipeline for
efficient tuning.
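The abstract does not spell out the selection mechanism, so the following is only a minimal sketch of one common way to realize NN-based variable selection: learnable per-variable gates with an L1 sparsity penalty. The `GatedSelector` module and `train_step` helper are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedSelector(nn.Module):
    """Rank raw variables (tuning knobs / process parameters) by a learnable
    gate. Each input is scaled by a trainable gate; an L1 penalty drives
    unimportant gates toward zero. Generic sketch, not the paper's model."""
    def __init__(self, n_vars: int, hidden: int = 64):
        super().__init__()
        self.gates = nn.Parameter(torch.ones(n_vars))
        self.net = nn.Sequential(
            nn.Linear(n_vars, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted figure-of-merit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x * self.gates)

def train_step(model, opt, x, fom, l1_weight=1e-3):
    """One regression step on (variables, figure-of-merit) pairs."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x).squeeze(-1), fom)
    loss = loss + l1_weight * model.gates.abs().sum()  # sparsity on gates
    loss.backward()
    opt.step()
    return loss.item()

# After training, |gates| ranks variable relevance, e.g.:
# ranking = model.gates.detach().abs().argsort(descending=True)
```

After training, variables whose gates stay near zero can be dropped from manual inspection, leaving a short list of critical knobs and process parameters.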
Related papers
- Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation [67.13876021157887]
Dynamic Tuning (DyT) is a novel approach to improve both parameter and inference efficiency for ViT adaptation.
DyT achieves superior performance compared to existing PEFT methods while using only 71% of their FLOPs on the VTAB-1K benchmark.
arXiv Detail & Related papers (2024-03-18T14:05:52Z)
- Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling [32.603558214472265]
We introduce Attention Prompt Tuning (APT) for video-based applications such as action recognition.
APT involves injecting a set of learnable prompts along with data tokens during fine-tuning while keeping the backbone frozen.
The proposed approach greatly reduces the number of FLOPs and latency while achieving a significant performance boost.
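As a rough illustration of the mechanism summarized above, here is a minimal prompt-tuning sketch with a frozen backbone. This is generic prompt prepending; the `PromptTunedEncoder` name is hypothetical, and APT's attention-level injection is not reproduced.

```python
import torch
import torch.nn as nn

class PromptTunedEncoder(nn.Module):
    """Prepend learnable prompt tokens to the input sequence and keep the
    pretrained backbone frozen, so only the prompts receive gradients."""
    def __init__(self, backbone: nn.Module, embed_dim: int, n_prompts: int = 16):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone stays frozen

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, embed_dim)
        batch = tokens.shape[0]
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompts, tokens], dim=1))
```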
arXiv Detail & Related papers (2024-03-11T17:59:41Z)
- Universality and Limitations of Prompt Tuning [65.8354898840308]
We take one of the first steps to understand the role of soft-prompt tuning for transformer-based architectures.
We analyze prompt tuning from the lens of universality and limitations with finite-depth pretrained transformers for continuous-valued functions.
Our result guarantees the existence of a strong transformer with a prompt to approximate any sequence-to-sequence function in the set of Lipschitz functions.
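The quantifier order is the crux of such a universality claim; one way to write it, with symbols assumed here for illustration rather than taken from the paper:

```latex
% Illustrative statement (notation assumed, not the paper's):
% there exists a sufficiently strong pretrained transformer T such that,
% for every Lipschitz sequence-to-sequence target f and tolerance eps,
% some prompt P achieves eps-approximation.
\[
\exists\, T \;\; \forall f \in \mathcal{F}_{\mathrm{Lip}} \;\;
\forall \varepsilon > 0 \;\; \exists\, P :\quad
\sup_{X} \bigl\| T(P \oplus X) - f(X) \bigr\| \le \varepsilon,
\]
% where P \oplus X denotes prepending the tuned prompt tokens P to input X.
```

Note that the transformer is fixed before the target function: one pretrained model, tuned only through its prompt, covers the whole Lipschitz class.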
arXiv Detail & Related papers (2023-05-30T06:47:07Z)
- Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning [91.5113227694443]
We propose a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme.
SPT allocates trainable parameters to task-specific important positions.
Experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods.
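A minimal sketch of the sensitivity-driven idea, assuming a generic first-order |gradient| score and a top-k rule; SPT's actual criterion and per-position granularity differ.

```python
import torch

def sensitivity_scores(model, loss_fn, data_loader, device="cpu"):
    """Accumulate |gradient| per parameter over a few batches as a
    first-order sensitivity proxy (a stand-in for SPT's criterion)."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                scores[n] += p.grad.detach().abs()
    return scores

def keep_top_k_trainable(model, scores, k):
    """Freeze everything except tensors containing top-k sensitive entries.
    Note: PyTorch toggles requires_grad per tensor, so this coarsens SPT's
    per-position allocation to whole parameter tensors."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    threshold = flat.topk(min(k, flat.numel())).values.min()
    for n, p in model.named_parameters():
        p.requires_grad = bool((scores[n] >= threshold).any())
```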
arXiv Detail & Related papers (2023-03-15T12:34:24Z)
- Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning [72.83293818245978]
We design and learn a neural network (NN)-based auto-tuner for hyper-parameter tuning in sparse Bayesian learning.
We show that considerable improvement in convergence rate and recovery performance can be achieved.
arXiv Detail & Related papers (2022-11-09T12:34:59Z)
- Performance-Driven Controller Tuning via Derivative-Free Reinforcement Learning [6.5158195776494]
We tackle the controller tuning problem using a novel derivative-free reinforcement learning framework.
We conduct numerical experiments on two concrete examples from autonomous driving, namely, adaptive cruise control with a PID controller and trajectory tracking with an MPC controller.
Experimental results show that the proposed method outperforms popular baselines and highlight its strong potential for controller tuning.
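To make the derivative-free setting concrete, here is a toy sketch that tunes PID gains on a first-order plant with a simple (1+1) evolution strategy. The plant, cost, and search loop are illustrative assumptions, not the paper's RL algorithm.

```python
import numpy as np

def simulate_pid(gains, setpoint=1.0, dt=0.01, steps=500):
    """Toy first-order plant under PID control; returns tracking cost.
    Illustrative stand-in for the paper's driving simulators."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        u = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u)      # first-order dynamics: dy/dt = -y + u
        cost += err ** 2 * dt   # integrated squared tracking error
    return cost

def tune_derivative_free(n_iters=200, sigma=0.1, seed=0):
    """Simple (1+1) evolution strategy: perturb gains, keep improvements.
    No gradients of the cost are ever computed."""
    rng = np.random.default_rng(seed)
    gains = np.array([1.0, 0.1, 0.01])
    best = simulate_pid(gains)
    for _ in range(n_iters):
        cand = np.clip(gains + sigma * rng.standard_normal(3), 0.0, None)
        c = simulate_pid(cand)
        if c < best:
            gains, best = cand, c
    return gains, best
```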
arXiv Detail & Related papers (2022-09-11T13:01:14Z)
- Empowering parameter-efficient transfer learning by recognizing the kernel structure in self-attention [53.72897232951918]
We propose adapters that utilize the kernel structure in self-attention to guide the assignment of tunable parameters.
Our results show that our proposed adapters can attain or improve the strong performance of existing baselines.
arXiv Detail & Related papers (2022-05-07T20:52:54Z)
- Amortized Auto-Tuning: Cost-Efficient Transfer Optimization for Hyperparameter Recommendation [83.85021205445662]
We propose amortized auto-tuning (AT2) to speed up tuning of machine learning models.
AT2 emerges as the best instantiation from a thorough analysis of the multi-task multi-fidelity Bayesian optimization framework.
arXiv Detail & Related papers (2021-06-17T00:01:18Z)
- DoT: An efficient Double Transformer for NLP tasks with tables [3.0079490585515343]
DoT is a double transformer model that decomposes the problem into two sub-tasks.
We show that, for a small drop in accuracy, DoT reduces training and inference time by at least 50%.
arXiv Detail & Related papers (2021-06-01T13:33:53Z)
- Hyperparameter Transfer Learning with Adaptive Complexity [5.695163312473305]
We propose a new multi-task BO method that learns a set of ordered, non-linear basis functions of increasing complexity via nested drop-out and automatic relevance determination.
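A minimal sketch of the nested dropout mechanism mentioned above, with a geometric cut distribution assumed for illustration; the paper's ARD coupling is not reproduced.

```python
import torch

def nested_dropout(features: torch.Tensor, p: float = 0.1,
                   training: bool = True) -> torch.Tensor:
    """Nested (ordered) dropout: sample a cut index and zero every unit
    after it, so earlier units are trained more often and learn coarser,
    lower-complexity structure than later ones."""
    if not training:
        return features
    n = features.shape[-1]
    cut = int(torch.distributions.Geometric(p).sample().item()) + 1  # >= 1
    mask = torch.zeros(n, device=features.device, dtype=features.dtype)
    mask[: min(cut, n)] = 1.0
    return features * mask
```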
arXiv Detail & Related papers (2021-02-25T12:26:52Z)
- Deep reinforcement learning for smart calibration of radio telescopes [3.655021726150368]
We introduce the use of reinforcement learning to train an autonomous agent to perform fine tuning of data calibration pipelines.
We consider the pipeline to be a black-box system where only an interpreted state of the pipeline is used by the agent.
The autonomous agent trained in this manner is able to determine optimal settings for diverse observations and is therefore able to perform 'smart' calibration, minimizing the need for human intervention.
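A minimal gym-style sketch of such a black-box formulation; the environment class, `run_pipeline` callback, and additive-action convention are assumptions for illustration.

```python
import numpy as np

class BlackBoxPipelineEnv:
    """Treat a calibration pipeline as a black box: the agent observes an
    interpreted summary state and nudges the tuning settings; the reward is
    the resulting calibration quality."""
    def __init__(self, run_pipeline, n_settings: int):
        self.run_pipeline = run_pipeline  # settings -> (state, quality)
        self.settings = np.zeros(n_settings)

    def reset(self):
        self.settings[:] = 0.0
        state, _ = self.run_pipeline(self.settings)
        return state

    def step(self, action):
        self.settings += np.asarray(action)        # adjust the knobs
        state, quality = self.run_pipeline(self.settings)
        return state, quality, False, {}           # reward = quality
```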
arXiv Detail & Related papers (2021-02-05T14:35:28Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
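A minimal sketch of soft channel pruning via differentiable masks, using sigmoid gates with a sparsity penalty as a generic stand-in; the paper's operation-aware criterion is not reproduced.

```python
import torch
import torch.nn as nn

class SoftChannelMask(nn.Module):
    """Differentiable per-channel gate: channels are scaled by sigmoid
    gates so pruning decisions stay soft and trainable end-to-end."""
    def __init__(self, n_channels: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        return x * torch.sigmoid(self.logits).view(1, -1, 1, 1)

    def sparsity_loss(self) -> torch.Tensor:
        return torch.sigmoid(self.logits).sum()  # pushes gates toward zero

# Usage: place after a conv layer, add mask.sparsity_loss() to the training
# objective, then hard-prune channels whose gate falls below a threshold.
```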
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.