INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic
Learning and Search
- URL: http://arxiv.org/abs/2305.13164v3
- Date: Mon, 5 Jun 2023 05:00:25 GMT
- Title: INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic
Learning and Search
- Authors: Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri,
Siddharth Garg
- Abstract summary: State-of-the-art logic synthesis algorithms rely on a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimization heuristics (a "synthesis recipe") based on a training dataset of previously seen designs.
- Score: 18.558280701880136
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Logic synthesis is the first and most vital step in chip design. This step
converts a chip specification written in a hardware description language (such
as Verilog) into an optimized implementation using Boolean logic gates.
State-of-the-art logic synthesis algorithms have a large number of logic
minimization heuristics, typically applied sequentially based on human
experience and intuition. The choice of the order greatly impacts the quality
(e.g., area and delay) of the synthesized circuit. In this paper, we propose
INVICTUS, a model-based offline reinforcement learning (RL) solution that
automatically generates a sequence of logic minimization heuristics ("synthesis
recipe") based on a training dataset of previously seen designs. A key
challenge is that new designs can range from being very similar to past designs
(e.g., adders and multipliers) to being completely novel (e.g., new processor
instructions). Compared to prior work, INVICTUS is the first solution that
uses a mix of RL and search methods jointly with an online out-of-distribution
detector to generate synthesis recipes over a wide range of benchmarks. Our
results demonstrate significant improvement in area-delay product (ADP) of
synthesized circuits with up to 30% improvement over state-of-the-art
techniques. Moreover, INVICTUS achieves up to 6.3× runtime reduction
(iso-ADP) compared to the state-of-the-art.
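To make the "synthesis recipe" idea concrete, the sketch below applies an ordered list of minimization heuristics with the open-source ABC tool and scores the result by area-delay product (ADP). This is a minimal illustration under assumptions, not INVICTUS's pipeline: the `abc` binary name, the `print_stats` parsing, and the use of AND-gate count and logic depth as area/delay proxies are assumptions, and the RL, search, and out-of-distribution components are omitted.

```python
"""Minimal sketch: apply a synthesis recipe with ABC and score it by ADP."""
import re
import subprocess

# A synthesis recipe: an ordered sequence of ABC minimization heuristics.
# Choosing this ordering well is exactly what INVICTUS learns to do.
RECIPE = ["balance", "rewrite", "refactor", "rewrite -z", "balance"]

def run_recipe(aig_path, recipe):
    """Run `recipe` on an AIG and return its area-delay product."""
    script = f"read {aig_path}; strash; " + "; ".join(recipe) + "; print_stats"
    out = subprocess.run(["abc", "-c", script],
                         capture_output=True, text=True, check=True).stdout
    # print_stats reports lines like "... and = 1204  lev = 18"; here we take
    # the AND-gate count as an area proxy and the logic depth as a delay proxy.
    area = int(re.search(r"and\s*=\s*(\d+)", out).group(1))
    delay = int(re.search(r"lev\s*=\s*(\d+)", out).group(1))
    return area * delay  # lower ADP is better

if __name__ == "__main__":
    print("ADP:", run_recipe("design.aig", RECIPE))
```

Under this framing, "up to 30% ADP improvement" means the learned recipe ordering yields a circuit whose area × delay is up to 30% lower than the baseline recipe's.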
Related papers
- The Graph's Apprentice: Teaching an LLM Low Level Knowledge for Circuit Quality Estimation [34.37154877681809]
We introduce VeriDistill, the first end-to-end machine learning model that directly processes raw Verilog code to predict circuit quality-of-result metrics.
Our model employs a novel knowledge distillation method, transferring low-level circuit insights via graphs into an LLM-based predictor.
Experiments show VeriDistill outperforms state-of-the-art baselines on large-scale Verilog datasets.
arXiv Detail & Related papers (2024-10-30T04:20:10Z)
- Logic Synthesis Optimization with Predictive Self-Supervision via Causal Transformers [19.13500546022262]
We introduce LSOformer, a novel approach harnessing autoregressive transformer models and predictive self-supervised learning (SSL) to predict the trajectory of Quality of Results (QoR).
LSOformer integrates cross-attention modules to merge insights from circuit graphs and optimization sequences, thereby enhancing prediction accuracy for QoR metrics.
arXiv Detail & Related papers (2024-09-16T18:45:07Z)
- Benchmarking End-To-End Performance of AI-Based Chip Placement Algorithms [77.71341200638416]
ChiPBench is a benchmark designed to evaluate the effectiveness of AI-based chip placement algorithms.
We have gathered 20 circuits from various domains (e.g., CPU, GPU, and microcontrollers) for evaluation.
Results show that even when a single-point algorithm's intermediate metric is dominant, the final power-performance-area (PPA) results can be unsatisfactory.
arXiv Detail & Related papers (2024-07-03T03:29:23Z)
- Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization [23.075466444266528]
This study conducts a thorough examination of learning and search techniques for logic synthesis.
We present ABC-RL, which uses a meticulously tuned α parameter to adjust recommendations from pre-trained agents during the search process.
Our findings showcase substantial enhancements in the Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques.
arXiv Detail & Related papers (2024-01-22T18:46:30Z)
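The α-tuning idea summarized above lends itself to a short sketch: α decides how much a pre-trained agent's policy prior steers a tree search over recipe choices. The PUCT-style scoring formula below is an assumed illustrative variant, not ABC-RL's exact equation.

```python
"""Illustrative alpha-blended node scoring for a PUCT-style recipe search.
This is an assumed formulation, not ABC-RL's published equation."""
import math

def child_score(q_value, visits, parent_visits, prior, alpha,
                c=1.0, n_actions=10):
    # Blend the pre-trained agent's prior with a uniform prior:
    # alpha -> 1 trusts the agent (design resembles training data);
    # alpha -> 0 falls back on uninformed exploration (novel design).
    blended_prior = alpha * prior + (1 - alpha) / n_actions
    exploration = c * blended_prior * math.sqrt(parent_visits) / (1 + visits)
    return q_value + exploration
```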
- Uncovering mesa-optimization algorithms in Transformers [61.06055590704677]
Some autoregressive models can learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
We show that standard next-token prediction error minimization gives rise to a subsidiary learning algorithm that adjusts the model as new inputs are revealed.
Our findings explain in-context learning as a product of autoregressive loss minimization and inform the design of new optimization-based Transformer layers.
arXiv Detail & Related papers (2023-09-11T22:42:50Z)
- A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design [92.63517027087933]
A key task in Logic Synthesis (LS) is to transform circuits into simplified circuits with equivalent functionalities.
To tackle this task, many LS operators apply transformations to subgraphs -- rooted at each node on an input DAG -- sequentially.
We propose a novel data-driven LS operator paradigm, namely PruneX, to reduce ineffective transformations.
arXiv Detail & Related papers (2023-08-22T16:18:48Z)
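A toy sketch of the PruneX idea described above: a learned scorer predicts whether a node-rooted subgraph transformation is likely to be effective, and low-scoring subgraphs are skipped. The scorer and the mock score table are placeholders for the paper's trained model, not its architecture.

```python
"""Toy sketch of learned transformation pruning (PruneX-style idea)."""

def prune_and_transform(nodes, score_fn, transform_fn, threshold=0.5):
    """Apply `transform_fn` only where `score_fn` predicts it will help."""
    applied = 0
    for node in nodes:
        if score_fn(node) < threshold:
            continue  # prune: transformation predicted ineffective here
        transform_fn(node)
        applied += 1
    return applied

if __name__ == "__main__":
    # Mock effectiveness scores stand in for a trained classifier.
    dag = {"a": 0.9, "b": 0.2, "c": 0.7}
    touched = []
    n = prune_and_transform(dag, dag.get, touched.append)
    print(n, touched)  # 2 ['a', 'c']
```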
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- AISYN: AI-driven Reinforcement Learning-Based Logic Synthesis Framework [0.8356765961526955]
We believe that Artificial Intelligence (AI) and Reinforcement Learning (RL) algorithms can help in solving the logic synthesis optimization problem.
Our experiments on both open source and industrial benchmark circuits show that significant improvements on important metrics such as area, delay, and power can be achieved by making logic synthesis optimization functions AI-driven.
arXiv Detail & Related papers (2023-02-08T00:55:24Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
- Hybrid Graph Models for Logic Optimization via Spatio-Temporal Information [15.850413267830522]
Two major concerns that may impede production-ready ML applications in EDA are accuracy requirements and generalization capability.
We propose hybrid graph neural network (GNN) based approaches towards highly accurate quality-of-result (QoR) estimations.
Evaluation on 3.3 million data points shows that the testing mean absolute percentage error (MAPE) on designs seen and unseen during training is no more than 1.2% and 3.1%, respectively.
arXiv Detail & Related papers (2022-01-20T21:12:22Z)
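For reference, the MAPE figures quoted above follow the standard definition (assumed here), sketched below for predicted versus actual QoR values.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, e.g. predicted vs. actual QoR."""
    return 100.0 * sum(abs((t - p) / t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

print(mape([100, 200], [99, 204]))  # 1.5
```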
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)