Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization
- URL: http://arxiv.org/abs/2401.12205v1
- Date: Mon, 22 Jan 2024 18:46:30 GMT
- Title: Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization
- Authors: Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri,
Siddharth Garg
- Abstract summary: This study conducts a thorough examination of learning and search techniques for logic synthesis.
We present ABC-RL, which tunes an $\alpha$ parameter to adaptively adjust recommendations from pre-trained agents during the search process.
Our findings showcase substantial enhancements in the Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Logic synthesis, a pivotal stage in chip design, entails optimizing chip
specifications encoded in hardware description languages like Verilog into
highly efficient implementations using Boolean logic gates. The process
involves a sequential application of logic minimization heuristics (a
"synthesis recipe"), with their arrangement significantly impacting crucial metrics such
as area and delay. Addressing the challenge posed by the broad spectrum of
design complexities - from variations of past designs (e.g., adders and
multipliers) to entirely novel configurations (e.g., innovative processor
instructions) - requires a nuanced "synthesis recipe" guided by human expertise
and intuition. This study conducts a thorough examination of learning and
search techniques for logic synthesis, unearthing a surprising revelation:
pre-trained agents, when confronted with entirely novel designs, may veer off
course, detrimentally affecting the search trajectory. We present ABC-RL, which
tunes an $\alpha$ parameter to adjust recommendations from pre-trained agents
during the search process. With $\alpha$ computed from similarity scores obtained
via nearest-neighbor retrieval over the training dataset, ABC-RL
yields superior synthesis recipes tailored for a wide array of hardware
designs. Our findings showcase substantial enhancements in the
Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to
24.8% compared to state-of-the-art techniques. Furthermore, ABC-RL achieves up
to a 9x reduction in runtime at iso-QoR compared to current state-of-the-art
methodologies.
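
The abstract sketches a concrete mechanism: a similarity score, obtained via nearest-neighbor retrieval against the training set, sets an $\alpha$ that decides how much to trust the pre-trained agent on a new design. A minimal Python sketch of that idea follows; the embedding function, the agent's move prior, and the linear blend are illustrative assumptions, not ABC-RL's actual implementation.

```python
import numpy as np

def retrieval_alpha(query_emb, train_embs):
    # Hypothetical: alpha from the best nearest-neighbor cosine similarity.
    # High similarity to a previously seen design -> trust the agent more.
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9)
    return float(np.clip(sims.max(), 0.0, 1.0))

def blended_prior(agent_probs, alpha):
    # Blend the pre-trained agent's recommendation with a uniform prior:
    # alpha ~ 1 follows the agent, alpha ~ 0 falls back to uninformed search.
    uniform = np.full_like(agent_probs, 1.0 / len(agent_probs))
    return alpha * agent_probs + (1.0 - alpha) * uniform

# Toy usage with three candidate synthesis moves (e.g., ABC's rewrite,
# refactor, balance); embeddings here are random stand-ins.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 16))  # embeddings of training designs
query_emb = rng.normal(size=16)          # embedding of the novel design
alpha = retrieval_alpha(query_emb, train_embs)
print(alpha, blended_prior(np.array([0.7, 0.2, 0.1]), alpha))
```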
Related papers
- The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation (2024-10-30)
We introduce VeriDistill, the first end-to-end machine learning model that directly processes raw Verilog code to predict circuit quality-of-result metrics.
Our model employs a novel knowledge distillation method, transferring low-level circuit insights via graphs into an LLM-based predictor.
Experiments show VeriDistill outperforms state-of-the-art baselines on large-scale Verilog datasets.
- Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers (2024-09-16)
We introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive SSL to predict the trajectory of Quality of Results (QoR).
LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics.
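
The cross-attention fusion described here lends itself to a small sketch. Below is a hypothetical PyTorch module in which recipe tokens attend over circuit-graph node embeddings to predict a per-step QoR trajectory; the dimensions, residual layout, and prediction head are assumptions, not LSOformer's published architecture.

```python
import torch
import torch.nn as nn

class RecipeGraphCrossAttention(nn.Module):
    # Hypothetical sketch: recipe tokens (queries) attend over circuit-graph
    # node embeddings (keys/values) to fuse the two sources of information.
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, 1)  # per-step QoR prediction

    def forward(self, recipe_tokens, graph_nodes):
        # recipe_tokens: (B, T, d), graph_nodes: (B, N, d)
        fused, _ = self.attn(recipe_tokens, graph_nodes, graph_nodes)
        fused = self.norm(recipe_tokens + fused)  # residual connection
        return self.head(fused).squeeze(-1)       # (B, T) QoR trajectory

# Toy usage: batch of 2 circuits, 10-step recipes, 50 graph nodes.
model = RecipeGraphCrossAttention()
qor_traj = model(torch.randn(2, 10, 64), torch.randn(2, 50, 64))
print(qor_traj.shape)  # torch.Size([2, 10])
```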
- Uncovering mesa-optimization algorithms in Transformers (2023-09-11)
Some autoregressive models can learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
We show that standard next-token prediction error minimization gives rise to a subsidiary learning algorithm that adjusts the model as new inputs are revealed.
Our findings explain in-context learning as a product of autoregressive loss minimization and inform the design of new optimization-based Transformer layers.
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes (2023-05-25)
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
- INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search (2023-05-22)
State-of-the-art logic synthesis algorithms have a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
- AISYN: AI-driven Reinforcement Learning-Based Logic Synthesis Framework (2023-02-08)
We believe that Artificial Intelligence (AI) and Reinforcement Learning (RL) algorithms can help in solving this problem.
Our experiments on both open source and industrial benchmark circuits show that significant improvements on important metrics such as area, delay, and power can be achieved by making logic synthesis optimization functions AI-driven.
- CorpusBrain: Pre-train a Generative Retrieval Model for Knowledge-Intensive Language Tasks (2022-08-16)
A single-step generative model can dramatically simplify the search process and be optimized in an end-to-end manner.
We name the pre-trained generative retrieval model CorpusBrain, as all information about the corpus is encoded in its parameters without the need to construct an additional index.
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning (2022-07-05)
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
- BOiLS: Bayesian Optimisation for Logic Synthesis (2021-11-11)
We propose BOiLS, the first algorithm adapting modern Bayesian optimisation to navigate the space of synthesis operations.
We demonstrate BOiLS's superior performance compared to state-of-the-art in terms of both sample efficiency and QoR values.
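
As a rough illustration of Bayesian optimisation over synthesis recipes, the sketch below one-hot encodes fixed-length recipes, fits a minimal RBF-kernel Gaussian process, and selects candidates by a UCB acquisition; the recipe encoding, kernel, and stub QoR objective are assumptions, not BOiLS's actual sequence kernel or evaluation flow.

```python
import numpy as np

rng = np.random.default_rng(1)
OPS = ["rewrite", "refactor", "balance", "resub"]  # ABC-style passes (illustrative)

def encode(recipe):
    # One-hot encode a recipe (sequence of op indices) into a flat vector.
    x = np.zeros((len(recipe), len(OPS)))
    x[np.arange(len(recipe)), recipe] = 1.0
    return x.ravel()

def qor(recipe):
    # Stub objective standing in for area/delay after running the recipe.
    return -np.sin(np.sum(recipe)) - 0.1 * np.var(recipe)

def gp_posterior(X, y, Xs, ls=2.0, noise=1e-4):
    # Minimal RBF-kernel GP posterior mean and standard deviation.
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * ls ** 2))
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mu = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mu, np.sqrt(np.maximum(var, 1e-12))

L = 5  # recipe length
recipes = [rng.integers(0, len(OPS), L) for _ in range(3)]  # initial design
X = np.stack([encode(r) for r in recipes])
y = np.array([qor(r) for r in recipes])
for _ in range(10):  # BO loop with UCB acquisition over random candidates
    cands = [rng.integers(0, len(OPS), L) for _ in range(64)]
    mu, sd = gp_posterior(X, y, np.stack([encode(c) for c in cands]))
    best = cands[int(np.argmax(mu + 1.0 * sd))]
    X = np.vstack([X, encode(best)])
    y = np.append(y, qor(best))
print("best QoR found:", y.max())
```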
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data (2020-12-11)
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, which takes boundary conditions and degree of filling as input.
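
A hypothetical sketch of such a predictor follows: a small PyTorch MLP mapping boundary conditions plus a fill degree to a flattened density grid. The dimensions and architecture are invented for illustration; the paper's actual network is not reproduced here.

```python
import torch
import torch.nn as nn

class TopologyPredictor(nn.Module):
    # Hypothetical predictor: boundary conditions + fill degree -> density grid.
    def __init__(self, bc_dim=8, grid=(16, 16)):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(bc_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, grid[0] * grid[1]), nn.Sigmoid())  # densities in [0, 1]

    def forward(self, bc, fill):
        x = torch.cat([bc, fill.unsqueeze(-1)], dim=-1)
        return self.net(x).view(-1, *self.grid)

# Toy usage: 4 samples with 8 boundary-condition features each.
model = TopologyPredictor()
design = model(torch.randn(4, 8), torch.tensor([0.3, 0.4, 0.5, 0.6]))
print(design.shape)  # torch.Size([4, 16, 16])
```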
- Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search (2020-06-29)
Retrosynthetic planning identifies a series of reactions that can lead to the synthesis of a target product.
Existing methods either require expensive return estimation by rollout with high variance, or optimize for search speed rather than route quality.
We propose Retro*, a neural-based A*-like algorithm that finds high-quality synthetic routes efficiently.
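
Retro*'s search is described as a neural-guided A*-like algorithm. The sketch below shows generic A* over a toy reaction graph with a plug-in heuristic; in the paper that heuristic is a learned value estimate, whereas here it is a zero stub (which reduces A* to Dijkstra's algorithm).

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    # Generic A*: `heuristic` plays the role of a learned cost-to-go estimate.
    frontier = [(heuristic(start), 0, start, [start])]
    seen = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if seen.get(node, float("inf")) <= g:
            continue
        seen[node] = g
        for nxt, cost in neighbors(node):
            ng = g + cost
            heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None

# Toy example: nodes are intermediates, edges are reactions with costs.
graph = {"target": [("a", 1), ("b", 4)], "a": [("building_block", 2)],
         "b": [("building_block", 1)], "building_block": []}
path = a_star("target", "building_block",
              lambda n: graph[n], lambda n: 0)  # zero heuristic stub
print(path)  # ['target', 'a', 'building_block']
```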
This list is automatically generated from the titles and abstracts of the papers on this site.