INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic
Learning and Search
- URL: http://arxiv.org/abs/2305.13164v3
- Date: Mon, 5 Jun 2023 05:00:25 GMT
- Title: INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic
Learning and Search
- Authors: Animesh Basak Chowdhury, Marco Romanelli, Benjamin Tan, Ramesh Karri,
Siddharth Garg
- Abstract summary: State-of-the-art logic synthesis algorithms have a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
- Score: 18.558280701880136
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Logic synthesis is the first and most vital step in chip design. This step
converts a chip specification written in a hardware description language (such
as Verilog) into an optimized implementation using Boolean logic gates.
State-of-the-art logic synthesis algorithms have a large number of logic
minimization heuristics, typically applied sequentially based on human
experience and intuition. The choice of the order greatly impacts the quality
(e.g., area and delay) of the synthesized circuit. In this paper, we propose
INVICTUS, a model-based offline reinforcement learning (RL) solution that
automatically generates a sequence of logic minimization heuristics ("synthesis
recipe") based on a training dataset of previously seen designs. A key
challenge is that new designs can range from being very similar to past designs
(e.g., adders and multipliers) to being completely novel (e.g., new processor
instructions). Compared to prior work, INVICTUS is the first solution that
uses a mix of RL and search methods jointly with an online out-of-distribution
detector to generate synthesis recipes over a wide range of benchmarks. Our
results demonstrate significant improvement in area-delay product (ADP) of
synthesized circuits with up to 30% improvement over state-of-the-art
techniques. Moreover, INVICTUS achieves up to 6.3× runtime reduction
(iso-ADP) compared to the state-of-the-art.
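The abstract's combination of a learned policy, search, and an out-of-distribution check can be sketched as a toy beam search over recipes. Everything below is illustrative: the heuristic names are hypothetical (loosely modeled on common synthesis pass names), and the cost and policy functions are stand-ins for a real synthesis tool and a real pretrained agent.

```python
import random

# Hypothetical action space; not the paper's actual heuristic set.
HEURISTICS = ["rewrite", "refactor", "resub", "balance"]

def adp(recipe):
    """Stand-in for synthesizing with `recipe` and measuring the
    area-delay product; a real flow would invoke a synthesis tool.
    Here, recipes with more distinct passes score (artificially) better."""
    rng = random.Random("/".join(recipe))  # deterministic per recipe
    return 100.0 - 5.0 * len(set(recipe)) + rng.random()

def policy_prior(recipe, move):
    """Stand-in for a pretrained agent's preference for `move` given
    the partial recipe; as a toy prior, disfavor recent repeats."""
    return 1.0 if move not in recipe[-2:] else 0.2

def search_recipe(length=4, beam=2, in_distribution=True):
    """Beam search over recipes. If an (imagined) OOD detector flags a
    novel design, pass in_distribution=False to ignore the learned
    prior and rank candidates by measured cost alone."""
    frontier = [[]]
    for _ in range(length):
        candidates = []
        for recipe in frontier:
            for move in HEURISTICS:
                new = recipe + [move]
                score = adp(new)
                if in_distribution:
                    score -= policy_prior(recipe, move)  # prior bonus
                candidates.append((score, new))
        candidates.sort(key=lambda c: c[0])  # lower score = better
        frontier = [r for _, r in candidates[:beam]]
    return min(frontier, key=adp)
```

The `in_distribution` flag captures the design choice described in the abstract: trust the learned prior on familiar designs, fall back to pure search on novel ones.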
Related papers
- Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization [23.075466444266528]
This study conducts a thorough examination of learning and search techniques for logic synthesis.
We present ABC-RL, which uses a meticulously tuned α parameter to adeptly adjust recommendations from pre-trained agents during the search process.
Our findings showcase substantial enhancements in the Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques.
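The α parameter in the summary suggests a convex blend of agent and search scores; a minimal sketch of that idea (our formulation, not necessarily the paper's exact one):

```python
def blended_score(agent_score: float, search_score: float, alpha: float) -> float:
    """Convex combination of two candidate rankings: alpha=1.0 trusts
    the pretrained agent fully, alpha=0.0 falls back to pure search.
    Illustrative form only, not ABC-RL's actual scoring rule."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * agent_score + (1.0 - alpha) * search_score
```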
arXiv Detail & Related papers (2024-01-22T18:46:30Z)
- A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design [92.63517027087933]
A key task in Logic Synthesis (LS) is to transform circuits into simplified circuits with equivalent functionalities.
To tackle this task, many LS operators apply transformations to subgraphs -- rooted at each node on an input DAG -- sequentially.
We propose a novel data-driven LS operator paradigm, namely PruneX, to reduce ineffective transformations.
arXiv Detail & Related papers (2023-08-22T16:18:48Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- AISYN: AI-driven Reinforcement Learning-Based Logic Synthesis Framework [0.8356765961526955]
We believe that Artificial Intelligence (AI) and Reinforcement Learning (RL) algorithms can help in solving this problem.
Our experiments on both open source and industrial benchmark circuits show that significant improvements on important metrics such as area, delay, and power can be achieved by making logic synthesis optimization functions AI-driven.
arXiv Detail & Related papers (2023-02-08T00:55:24Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
- Hybrid Graph Models for Logic Optimization via Spatio-Temporal Information [15.850413267830522]
Two major concerns that may impede production-ready ML applications in EDA are accuracy requirements and generalization capability.
We propose hybrid graph neural network (GNN) based approaches towards highly accurate quality-of-result (QoR) estimations.
Evaluation on 3.3 million data points shows that the testing mean absolute percentage error (MAPE) on designs seen and unseen during training is no more than 1.2% and 3.1%, respectively.
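The MAPE metric quoted in that summary is straightforward to compute; a minimal sketch:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.
    Assumes no actual value is zero (division would be undefined)."""
    assert len(actual) == len(predicted) and actual
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)
```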
arXiv Detail & Related papers (2022-01-20T21:12:22Z)
- BOiLS: Bayesian Optimisation for Logic Synthesis [10.981155046738126]
We propose BOiLS, the first algorithm adapting modern Bayesian optimisation to navigate the space of synthesis operations.
We demonstrate BOiLS's superior performance compared to state-of-the-art in terms of both sample efficiency and QoR values.
arXiv Detail & Related papers (2021-11-11T12:44:38Z)
- Neural Circuit Synthesis from Specification Patterns [5.7923858184309385]
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications.
New approaches using machine learning might open a lot of possibilities in this area, but suffer from the lack of sufficient amounts of training data.
We show that hierarchical Transformers trained on this synthetic data solve a significant portion of problems from the synthesis competitions.
arXiv Detail & Related papers (2021-07-25T18:17:33Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.