FIST: A Feature-Importance Sampling and Tree-Based Method for Automatic
Design Flow Parameter Tuning
- URL: http://arxiv.org/abs/2011.13493v1
- Date: Thu, 26 Nov 2020 23:13:42 GMT
- Title: FIST: A Feature-Importance Sampling and Tree-Based Method for Automatic
Design Flow Parameter Tuning
- Authors: Zhiyao Xie, Guan-Qi Fang, Yu-Hung Huang, Haoxing Ren, Yanqing Zhang,
Brucek Khailany, Shao-Yun Fang, Jiang Hu, Yiran Chen, Erick Carvajal Barboza
- Abstract summary: We introduce a machine learning-based automatic parameter tuning methodology that aims to find the best design quality with a limited number of trials.
We leverage a state-of-the-art XGBoost model and propose a novel dynamic tree technique to overcome overfitting.
Experimental results on benchmark circuits show that our approach achieves 25% improvement in design quality or 37% reduction in sampling cost.
- Score: 27.08970520268831
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Design flow parameters are of utmost importance to chip design quality and
require a painfully long time to evaluate their effects. In reality, flow
parameter tuning is usually performed manually based on designers' experience
in an ad hoc manner. In this work, we introduce a machine learning-based
automatic parameter tuning methodology that aims to find the best design
quality with a limited number of trials. Instead of merely plugging in machine
learning engines, we develop clustering and approximate sampling techniques for
improving tuning efficiency. The feature extraction in this method can reuse
knowledge from prior designs. Furthermore, we leverage a state-of-the-art
XGBoost model and propose a novel dynamic tree technique to overcome
overfitting. Experimental results on benchmark circuits show that our approach
achieves 25% improvement in design quality or 37% reduction in sampling cost
compared to random forest method, which is the kernel of a highly cited
previous work. Our approach is further validated on two industrial designs. By
sampling less than 0.02% of possible parameter sets, it reduces area by 1.83%
and 1.43% compared to the best solutions hand-tuned by experienced designers.
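The abstract describes an iterative, model-guided search: evaluate a small set of parameter configurations with the real flow, fit an XGBoost surrogate on the results, and use its predictions to pick the next configurations to run. Below is a minimal Python sketch of such a loop, offered only as an illustration under stated assumptions rather than the authors' FIST implementation: the feature-importance-based sampling and clustering steps are omitted, `run_flow` is a hypothetical callable wrapping the design flow, and the growing `max_depth` schedule only loosely mimics the paper's dynamic tree idea.

```python
import numpy as np
import xgboost as xgb

def tune_flow_parameters(param_space, run_flow, n_init=20, n_iter=30, seed=0):
    """Minimal surrogate-model tuning loop (illustrative, not the full FIST algorithm).

    param_space: 2D array, each row is one candidate parameter configuration.
    run_flow:    callable that runs the design flow on one configuration and
                 returns a scalar quality metric (lower is better).
    """
    rng = np.random.default_rng(seed)
    n_total = len(param_space)

    # Stage 1: evaluate a small random sample of configurations with the real flow.
    tried = list(rng.choice(n_total, size=n_init, replace=False))
    scores = [run_flow(param_space[i]) for i in tried]

    # Stage 2: alternate between refitting the surrogate and running the most
    # promising untried configuration.
    for it in range(n_iter):
        model = xgb.XGBRegressor(
            n_estimators=100,
            # Slowly deepen the trees; a crude stand-in for the paper's
            # dynamic tree technique that limits early overfitting.
            max_depth=min(3 + it // 10, 6),
            learning_rate=0.1,
        )
        model.fit(param_space[tried], np.array(scores))

        tried_set = set(tried)
        untried = [i for i in range(n_total) if i not in tried_set]
        preds = model.predict(param_space[untried])
        best = untried[int(np.argmin(preds))]
        tried.append(best)
        scores.append(run_flow(param_space[best]))

    best_idx = tried[int(np.argmin(scores))]
    return param_space[best_idx], min(scores)
```

In practice, `param_space` would enumerate the candidate flow-parameter combinations and `run_flow` would launch synthesis and place-and-route, returning a quality metric such as area or timing slack.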
Related papers
- Advanced Chain-of-Thought Reasoning for Parameter Extraction from Documents Using Large Language Models [3.7324910012003656]
Current methods struggle to handle high-dimensional design data and meet the demands of real-time processing.
We propose an innovative framework that automates the extraction of parameters and the generation of PySpice models.
Experimental results show that applying all three methods together improves retrieval precision by 47.69% and reduces processing latency by 37.84%.
arXiv Detail & Related papers (2025-02-23T11:19:44Z)
- ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts.
Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z)
- Visual Fourier Prompt Tuning [63.66866445034855]
We propose the Visual Fourier Prompt Tuning (VFPT) method as a general and effective solution for adapting large-scale transformer-based models.
Our approach incorporates the Fast Fourier Transform into prompt embeddings and harmoniously considers both spatial and frequency domain information.
Our results demonstrate that our approach outperforms current state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2024-11-02T18:18:35Z)
- SaRA: High-Efficient Diffusion Model Fine-tuning with Progressive Sparse Low-Rank Adaptation [52.6922833948127]
In this work, we investigate the importance of parameters in pre-trained diffusion models.
We propose a novel model fine-tuning method to make full use of these ineffective parameters.
Our method enhances the generative capabilities of pre-trained models in downstream applications.
arXiv Detail & Related papers (2024-09-10T16:44:47Z)
- Automated Design and Optimization of Distributed Filtering Circuits via Reinforcement Learning [20.500468654567033]
This study proposes a novel end-to-end automated method for DFC design.
The proposed method harnesses reinforcement learning (RL) algorithms, eliminating the dependence on the design experience of engineers.
Our method achieves superior performance when designing complex or rapidly evolving DFCs.
arXiv Detail & Related papers (2024-02-22T02:36:14Z)
- Transfer-Learning-Based Autotuning Using Gaussian Copula [0.0]
We introduce the first generative TL-based autotuning approach based on the Gaussian copula (GC).
We find that the GC is capable of achieving 64.37% of peak few-shot performance in its first evaluation. Furthermore, the GC model can determine a few-shot transfer budget that yields up to 33.39× speedup (a minimal copula-sampling sketch appears after this list).
arXiv Detail & Related papers (2024-01-09T16:52:57Z)
- Towards General and Efficient Online Tuning for Spark [55.30868031221838]
We present a general and efficient Spark tuning framework that can deal with the three issues simultaneously.
We have implemented this framework as an independent cloud service, and applied it to the data platform in Tencent.
arXiv Detail & Related papers (2023-09-05T02:16:45Z)
- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z)
- Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning [126.84770886628833]
Existing finetuning methods either tune all parameters of the pretrained model (full finetuning) or only tune the last linear layer (linear probing).
We propose a new parameter-efficient finetuning method termed SSF, in which one only needs to Scale and Shift the deep Features extracted by a pre-trained model to match the performance of full finetuning.
arXiv Detail & Related papers (2022-10-17T08:14:49Z)
- Hyperboost: Hyperparameter Optimization by Gradient Boosting surrogate models [0.4079265319364249]
Current state-of-the-art methods leverage Random Forests or Gaussian processes to build a surrogate model.
We propose a new surrogate model based on gradient boosting.
We demonstrate empirically that the new method is able to outperform some state-of-the-art techniques across a reasonably sized set of classification problems.
arXiv Detail & Related papers (2021-01-06T22:07:19Z)
- Self-Tuning Stochastic Optimization with Curvature-Aware Gradient Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step toward constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z)
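The Transfer-Learning-Based Autotuning entry above uses a Gaussian copula as a generative model over previously evaluated tuning data. The sketch below is a rough illustration rather than that paper's implementation: it fits a Gaussian copula with empirical marginals to a matrix of past configurations and samples new candidates from it; the function names and shapes here are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula(X):
    """Fit a Gaussian copula with empirical marginals to tuning data X (n x d)."""
    n, _ = X.shape
    # Probability-integral transform of each column via empirical ranks.
    U = stats.rankdata(X, axis=0) / (n + 1)
    Z = stats.norm.ppf(U)
    # The copula is summarized by the correlation of the normal scores.
    return np.corrcoef(Z, rowvar=False)

def sample_gaussian_copula(X, corr, n_samples, seed=0):
    """Draw new configurations that mimic the joint structure of X."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    Z = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    U = stats.norm.cdf(Z)
    # Map each column back through the empirical quantiles of the original data.
    return np.column_stack(
        [np.quantile(X[:, j], U[:, j]) for j in range(d)]
    )
```

Configurations drawn this way could seed the few-shot evaluations described in that entry, transferring the joint structure of past tuning runs to a new target.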
This list is automatically generated from the titles and abstracts of the papers on this site.