AISYN: AI-driven Reinforcement Learning-Based Logic Synthesis Framework
- URL: http://arxiv.org/abs/2302.06415v1
- Date: Wed, 8 Feb 2023 00:55:24 GMT
- Title: AISYN: AI-driven Reinforcement Learning-Based Logic Synthesis Framework
- Authors: Ghasem Pasandi and Sreedhar Pratty and James Forsyth
- Abstract summary: We believe that Artificial Intelligence (AI) and Reinforcement Learning (RL) algorithms can help in solving this problem.
Our experiments on both open source and industrial benchmark circuits show that significant improvements on important metrics such as area, delay, and power can be achieved by making logic synthesis optimization functions AI-driven.
- Score: 0.8356765961526955
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Logic synthesis is one of the most important steps in design and
implementation of digital chips with a big impact on final Quality of Results
(QoR). For a general input circuit modeled by a Directed Acyclic Graph
(DAG), many logic synthesis problems such as delay or area minimization are
NP-Complete; hence, no polynomial-time algorithm is known that guarantees an
optimal solution. This is why many classical logic optimization functions tend
to follow greedy approaches that are easily trapped in local minima that do not
allow improving QoR as much as needed. We believe that Artificial Intelligence
(AI) and, more specifically, Reinforcement Learning (RL) algorithms can help in
solving this problem, because they can improve QoR further by escaping such
local minima. Our experiments on both open source and industrial benchmark circuits
show that significant improvements on important metrics such as area, delay,
and power can be achieved by making logic synthesis optimization functions
AI-driven. For example, our RL-based rewriting algorithm could improve total
cell area post-synthesis by up to 69.3% when compared to a classical rewriting
algorithm with no AI awareness.
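The paper does not reproduce its implementation here, but the core idea can be illustrated with a short sketch. The Python fragment below is a minimal, hypothetical rewriting loop in which a bandit-style value estimate plus epsilon-greedy exploration lets the rewriter occasionally accept moves a greedy algorithm would reject, which is how an RL formulation can escape local minima. The netlist interface (signature(), area(), apply()) and the candidates() generator are illustrative stand-ins, not the authors' API.

```python
import random

def rl_rewrite(netlist, candidates, episodes=100, horizon=50, epsilon=0.2, lr=0.1):
    """Hypothetical interface: netlist.area(), netlist.apply(move) -> netlist,
    netlist.signature() -> hashable state key; candidates(netlist) -> list of moves."""
    q = {}                      # running value estimate per (state, move) pair
    best = netlist
    for _ in range(episodes):
        state = netlist
        for _ in range(horizon):
            moves = candidates(state)
            if not moves:
                break
            value = lambda m: q.get((state.signature(), m), 0.0)
            # epsilon-greedy: sometimes explore a move that looks bad locally
            move = (random.choice(moves) if random.random() < epsilon
                    else max(moves, key=value))
            nxt = state.apply(move)
            reward = state.area() - nxt.area()      # > 0 when area shrinks
            q[(state.signature(), move)] = value(move) + lr * (reward - value(move))
            state = nxt
            if state.area() < best.area():
                best = state                        # track the best netlist seen
    return best
```

A production flow would use a richer state encoding and a reward that also accounts for delay and power, per the metrics the abstract names.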
Related papers
- Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization [23.075466444266528]
This study conducts a thorough examination of learning and search techniques for logic synthesis.
We present ABC-RL, which uses a meticulously tuned $\alpha$ parameter to adeptly adjust recommendations from pre-trained agents during the search process.
Our findings showcase substantial enhancements in the Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques.
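As a hedged reading of that summary (the exact formula lives in the paper and is not reproduced here), the $\alpha$ parameter can be thought of as a blending weight between the pre-trained agent's recommendation and the live search estimate. The function below is an illustrative sketch, not ABC-RL's actual interface.

```python
def blended_score(search_score: float, agent_score: float, alpha: float) -> float:
    """alpha near 0 trusts the live search; alpha near 1 trusts the prior agent."""
    return (1.0 - alpha) * search_score + alpha * agent_score

# e.g., a tuned alpha of 0.3 keeps the agent's prior as a soft bias
print(blended_score(search_score=0.62, agent_score=0.85, alpha=0.3))  # -> 0.689
```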
arXiv Detail & Related papers (2024-01-22T18:46:30Z)
- INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search [18.558280701880136]
State-of-the-art logic synthesis algorithms apply a large number of logic minimization steps.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
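INVICTUS itself couples learning and search; as a deliberately simpler illustration of the "recipes from previously seen designs" idea, the sketch below retrieves a minimization sequence by nearest-neighbor lookup over a toy design library. All feature vectors, labels, and pass names are hypothetical (the pass names only mimic ABC-style commands).

```python
import math

RECIPES = {
    "adder_like":   ["balance", "rewrite", "refactor", "rewrite -z", "balance"],
    "control_like": ["rewrite", "balance", "refactor -z", "rewrite", "balance"],
}
# toy training set: (num_nodes, logic_depth, arithmetic_ratio) -> recipe label
TRAIN = [((1200, 14, 0.8), "adder_like"), ((300, 22, 0.2), "control_like")]

def pick_recipe(features):
    """Return the recipe of the previously seen design closest to `features`."""
    _, label = min(TRAIN, key=lambda t: math.dist(t[0], features))
    return RECIPES[label]

print(pick_recipe((1100, 15, 0.7)))  # reuses the closest design's recipe
```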
arXiv Detail & Related papers (2023-05-22T15:50:42Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- PAC-learning gains of Turing machines over circuits and neural networks [1.4502611532302039]
We study the potential gains in sample efficiency that the principle of minimum description length can bring.
We use Turing machines to represent universal models and circuits.
We highlight close relationships between classical open problems in Circuit Complexity and the tightness of these gains.
arXiv Detail & Related papers (2021-03-23T17:03:10Z)
- Differentiable Logic Machines [38.21461039738474]
We propose a novel neural-logic architecture, called the differentiable logic machine (DLM).
DLM can solve both inductive logic programming (ILP) and reinforcement learning (RL) problems.
On RL problems, without requiring an interpretable solution, DLM outperforms other non-interpretable neural-logic RL approaches.
arXiv Detail & Related papers (2021-02-23T07:31:52Z)
- GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training [59.160154997555956]
We present GradInit, an automated and architecture-agnostic method for initializing neural networks.
It is based on a simple heuristic: the variance of each network layer is adjusted so that a single step of SGD or Adam results in the smallest possible loss value.
It also enables training the original Post-LN Transformer for machine translation without learning rate warmup.
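As a rough sketch of that heuristic (not the authors' implementation, which optimizes all layer scales jointly), the toy below grid-searches a scale for each linear layer of a PyTorch model so that a single SGD step on one batch yields the lowest post-step loss. All names are illustrative.

```python
import copy
import torch

def init_scales(model, loss_fn, batch, lr=0.1, grid=(0.25, 0.5, 1.0, 2.0, 4.0)):
    x, y = batch
    for layer in [m for m in model.modules() if isinstance(m, torch.nn.Linear)]:
        base = layer.weight.data.clone()
        best_scale, best_loss = 1.0, float("inf")
        for s in grid:
            layer.weight.data = base * s
            trial = copy.deepcopy(model)              # isolate the trial step
            opt = torch.optim.SGD(trial.parameters(), lr=lr)
            loss_fn(trial(x), y).backward()           # gradients at this init
            opt.step()                                # the single SGD step
            with torch.no_grad():
                post = loss_fn(trial(x), y).item()    # loss after that step
            if post < best_loss:
                best_scale, best_loss = s, post
        layer.weight.data = base * best_scale         # keep the best scale
```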
arXiv Detail & Related papers (2021-02-16T11:45:35Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
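That summary amounts to best-first search where the heuristic comes from a neural network. The sketch below shows that shape in plain Python; heuristic_net, expand, and goal_test are hypothetical placeholders rather than the paper's actual components, and the memory bounding of SMA* is omitted for brevity.

```python
import heapq
import itertools

def learned_a_star(start, goal_test, expand, heuristic_net):
    """expand(state) -> iterable of (next_state, step_cost) pairs."""
    tie = itertools.count()            # breaks ties without comparing states
    frontier = [(heuristic_net(start), next(tie), 0.0, start, [start])]
    seen = set()
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt, cost in expand(state):
            # f = g + h, with the learned model supplying h
            f = g + cost + heuristic_net(nxt)
            heapq.heappush(frontier, (f, next(tie), g + cost, nxt, path + [nxt]))
    return None
```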
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
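Concretely (and as an illustrative reconstruction, not the paper's exact equations), the equilibrium-point idea can be shown on a toy multi-layer perceptron: error units are repeatedly relaxed toward the standard backprop recursion until they settle at the true gradients. The tanh nonlinearity and all array shapes below are assumptions of this sketch.

```python
import numpy as np

def relax_errors(Ws, zs, top_error, steps=100, eta=0.5):
    """Ws[l]: weight matrix mapping activations a_l to pre-activations
    z_{l+1} = Ws[l] @ a_l; zs[l+1]: that pre-activation (zs[0] unused);
    top_error: the clamped output-layer gradient dL/da_L; tanh activations."""
    eps = [np.zeros(W.shape[1]) for W in Ws] + [top_error]
    for _ in range(steps):
        for l in range(len(Ws)):
            # fixed point of this update is the backprop chain rule:
            #   eps_l = Ws[l]^T (f'(z_{l+1}) * eps_{l+1})
            target = Ws[l].T @ ((1.0 - np.tanh(zs[l + 1]) ** 2) * eps[l + 1])
            eps[l] += eta * (target - eps[l])     # local relaxation dynamics
    return eps[:-1]                               # ~ dL/da_l for each layer
```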
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
- Towards Neural-Guided Program Synthesis for Linear Temporal Logic Specifications [26.547133495699093]
We use a neural network to learn a Q-function that is then used to guide search, and to construct programs that are subsequently verified for correctness.
Our method is unique in combining search with deep learning to realize synthesis.
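The division of labor in that summary (a Q-function to rank candidates, a verifier to certify them) can be sketched as a best-first search loop. Here q_net, expand, verify, and the partial-program interface are hypothetical placeholders, not the paper's components.

```python
import heapq
import itertools

def guided_synthesis(spec, empty_program, expand, q_net, verify, budget=10_000):
    tie = itertools.count()
    frontier = [(-q_net(spec, empty_program), next(tie), empty_program)]
    for _ in range(budget):
        if not frontier:
            break
        _, _, prog = heapq.heappop(frontier)
        if prog.is_complete():
            if verify(prog, spec):      # correctness is certified, not assumed
                return prog
            continue
        for child in expand(prog):      # fill one hole in the partial program
            heapq.heappush(frontier, (-q_net(spec, child), next(tie), child))
    return None
```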
arXiv Detail & Related papers (2019-12-31T17:09:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.