Rethinking Reinforcement Learning based Logic Synthesis
- URL: http://arxiv.org/abs/2205.07614v1
- Date: Mon, 16 May 2022 12:15:32 GMT
- Title: Rethinking Reinforcement Learning based Logic Synthesis
- Authors: Chao Wang, Chen Chen, Dong Li, Bin Wang
- Abstract summary: We develop a new RL-based method that can automatically recognize critical operators and generate common operator sequences generalizable to unseen circuits.
Our algorithm is verified on the EPFL benchmark, a private dataset, and a circuit at industrial scale.
- Score: 13.18408482571087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, reinforcement learning has been used to address logic synthesis by
formulating the operator sequence optimization problem as a Markov decision
process. However, through extensive experiments, we find that the learned
policy makes decisions independently of the circuit features (i.e., states) and
yields an operator sequence that is, to some extent, permutation-invariant with
respect to its operators. Based on these findings, we develop a new RL-based
method that can automatically recognize critical operators and generate common
operator sequences that generalize to unseen circuits. Our algorithm is
verified on the EPFL benchmark, a private dataset, and a circuit at industrial
scale. Experimental results demonstrate that it achieves a good balance among
delay, area, and runtime, and is practical for industrial use.
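To make the formulation above concrete, here is a minimal sketch of the operator-sequence optimization problem cast as an episodic MDP in a gym-style environment. The ABC-style operator names, the `evaluate_qor` stub, and the reward definition are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical action set in the style of ABC synthesis commands; the paper
# does not publish this exact environment, so names and rewards are stand-ins.
OPERATORS = ["rewrite", "refactor", "balance", "resub", "rewrite -z"]

def evaluate_qor(circuit, sequence):
    """Stub for running `sequence` on `circuit` with a synthesis tool and
    reading back quality of results; returns (delay, area)."""
    rng = random.Random(hash((circuit, tuple(sequence))))
    return rng.uniform(1.0, 2.0), rng.uniform(1.0, 2.0)

class SynthesisEnv:
    """Operator-sequence optimization as an episodic MDP: the state is the
    sequence applied so far, an action picks the next operator, and the
    reward is the improvement in a weighted delay/area cost."""

    def __init__(self, circuit, horizon=10, w_delay=0.5):
        self.circuit, self.horizon, self.w = circuit, horizon, w_delay

    def _cost(self):
        delay, area = evaluate_qor(self.circuit, self.sequence)
        return self.w * delay + (1.0 - self.w) * area

    def reset(self):
        self.sequence = []
        self.cost = self._cost()
        return tuple(self.sequence)

    def step(self, action):
        self.sequence.append(OPERATORS[action])
        new_cost = self._cost()
        reward = self.cost - new_cost          # positive if QoR improved
        self.cost = new_cost
        done = len(self.sequence) >= self.horizon
        return tuple(self.sequence), reward, done

env = SynthesisEnv("toy_adder")
state, total, done = env.reset(), 0.0, False
while not done:
    state, r, done = env.step(random.randrange(len(OPERATORS)))
    total += r
print("sequence:", list(state), "return:", round(total, 3))
```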
Related papers
- Parameterized Projected Bellman Operator [64.129598593852]
Approximate value iteration (AVI) is a family of algorithms for reinforcement learning (RL).
We propose a novel alternative approach based on learning an approximate version of the Bellman operator, the projected Bellman operator (PBO).
We formulate an optimization problem to learn the PBO for generic sequential decision-making problems.
arXiv Detail & Related papers (2023-12-20T09:33:16Z)
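As a generic illustration of learning a Bellman-operator approximation (a toy affine map on tabular Q-functions, not the paper's actual parameterization), consider:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

# Random tabular MDP: P[s, a] is a next-state distribution, R[s, a] a reward.
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.uniform(0.0, 1.0, size=(S, A))

def bellman(Q):
    """Exact Bellman optimality backup for the tabular MDP."""
    return R + gamma * P @ Q.max(axis=1)

# Toy parameterized operator: an affine map on flattened Q-tables, trained so
# that W @ [q, 1] approximates bellman(q). A linear map cannot be exact (the
# true backup contains a max), which keeps this strictly an illustration.
d = S * A
W = rng.normal(scale=0.01, size=(d, d + 1))
lr = 0.5
for _ in range(5000):
    q = rng.uniform(0.0, 10.0, size=d)         # sample Q-tables to regress on
    feat = np.append(q, 1.0)
    err = W @ feat - bellman(q.reshape(S, A)).ravel()
    W -= lr * np.outer(err, feat) / (feat @ feat)  # normalized LMS step

# At test time the learned operator is iterated instead of sampling backups.
q = np.zeros(d)
for _ in range(100):
    q = W @ np.append(q, 1.0)
print("state values:", q.reshape(S, A).max(axis=1).round(3))
```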
- A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design [92.63517027087933]
A key task in Logic Synthesis (LS) is to transform circuits into simplified circuits with equivalent functionalities.
To tackle this task, many LS operators sequentially apply transformations to the subgraphs rooted at each node of an input DAG.
We propose a novel data-driven LS operator paradigm, namely PruneX, to reduce ineffective transformations.
arXiv Detail & Related papers (2023-08-22T16:18:48Z)
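A minimal sketch of data-driven transformation pruning in the spirit of the entry above (not PruneX itself): a learned scorer predicts whether the rewrite rooted at a node will be effective, and low-scoring nodes are skipped. The node features, scorer, and rewrite stub are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    level: int
    fanins: list = field(default_factory=list)
    fanouts: int = 0

def node_features(node):
    # Hypothetical features; real systems use richer structural descriptors.
    return (len(node.fanins), node.fanouts, node.level)

def scorer(feats):
    # Stand-in for a trained classifier's estimate that the transformation
    # rooted at this node will actually change the circuit.
    fanin, fanout, level = feats
    return 0.2 * fanin + 0.1 * fanout - 0.05 * level + 0.3

def apply_rewrite(node):
    # Stub: a real operator would try to replace the subgraph rooted at
    # `node` with a smaller functionally equivalent one.
    return len(node.fanins) >= 2

def prune_and_transform(nodes, threshold=0.5):
    """Visit nodes in topological order but invoke the (expensive) rewrite
    only where the learned scorer predicts it is likely to be effective."""
    applied = skipped = 0
    for node in nodes:
        if scorer(node_features(node)) < threshold:
            skipped += 1                       # predicted ineffective
            continue
        applied += apply_rewrite(node)
    return applied, skipped

nodes = [Node(level=i, fanins=[0] * (i % 4), fanouts=i % 3) for i in range(10)]
print(prune_and_transform(nodes))
```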
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform in-context learning (ICL).
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
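The paper's claims are theoretical; the sketch below only illustrates the in-context learning protocol it studies, with ridge regression standing in for an algorithm that a trained transformer could implement in a single forward pass. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n_ctx=16, dim=4, noise=0.1):
    """Sample a linear-regression task and an in-context prompt for it."""
    w = rng.normal(size=dim)
    X = rng.normal(size=(n_ctx + 1, dim))
    y = X @ w + noise * rng.normal(size=n_ctx + 1)
    return X[:-1], y[:-1], X[-1], y[-1]        # context pairs + held-out query

def in_context_predict(X_ctx, y_ctx, x_query, lam=0.1):
    """Ridge regression computed from the prompt alone, standing in for the
    algorithm a trained transformer is shown to emulate in context."""
    dim = X_ctx.shape[1]
    w_hat = np.linalg.solve(X_ctx.T @ X_ctx + lam * np.eye(dim), X_ctx.T @ y_ctx)
    return x_query @ w_hat

errs = []
for _ in range(200):                           # a fresh task per prompt:
    X, y, xq, yq = sample_task()               # no weight updates in between
    errs.append((in_context_predict(X, y, xq) - yq) ** 2)
print("mean squared error over tasks:", np.mean(errs).round(4))
```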
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
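A minimal sketch of the idea of replacing a closed-form acquisition criterion with a learned model inside the Bayesian-optimisation loop. The `learned_acquisition` heuristic and toy objective are hypothetical stand-ins, not the paper's transformer neural process.

```python
import random

random.seed(0)

def objective(x):
    # Toy black-box function to maximize; unknown to the acquisition model.
    return -(x - 0.3) ** 2

def learned_acquisition(history, x):
    # Stand-in for a trained acquisition model (the paper trains a
    # transformer neural process with RL); here: score candidates near the
    # incumbent, with exploration noise that decays as evidence accumulates.
    if not history:
        return random.random()
    best_x, _ = max(history, key=lambda p: p[1])
    explore = 1.0 / (1 + len(history))
    return -abs(x - best_x) + explore * random.random()

history = []
for step in range(20):
    candidates = [random.random() for _ in range(64)]
    x = max(candidates, key=lambda c: learned_acquisition(history, c))
    history.append((x, objective(x)))          # one black-box evaluation
print("best found:", max(history, key=lambda p: p[1]))
```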
- INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search [18.558280701880136]
State-of-the-art logic synthesis algorithms involve a large number of logic minimization steps.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
arXiv Detail & Related papers (2023-05-22T15:50:42Z)
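A generic sketch of combining a learned prior with search over minimization sequences, in the spirit of the "learning + search" recipe named above (not the paper's actual algorithm). `policy_score` and `measure_qor` are hypothetical stubs.

```python
import random

OPERATORS = ["rewrite", "refactor", "balance", "resub"]

def policy_score(sequence, op):
    # Stand-in for a model trained on previously seen designs.
    return random.Random(hash((tuple(sequence), op))).random()

def measure_qor(sequence):
    # Stub for invoking a synthesis tool and reading back cost (lower is better).
    return random.Random(hash(tuple(sequence))).uniform(0.0, 1.0)

def beam_search(horizon=6, beam=3):
    """Beam search over operator sequences, expanding only the operators
    the learned policy ranks highest at each step."""
    frontier = [([], measure_qor([]))]
    for _ in range(horizon):
        expanded = []
        for seq, _ in frontier:
            ops = sorted(OPERATORS, key=lambda o: -policy_score(seq, o))[:2]
            expanded += [(seq + [op], measure_qor(seq + [op])) for op in ops]
        frontier = sorted(expanded, key=lambda p: p[1])[:beam]
    return frontier[0]

best_seq, best_cost = beam_search()
print("sequence:", best_seq, "cost:", round(best_cost, 3))
```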
- Self Optimisation and Automatic Code Generation by Evolutionary Algorithms in PLC based Controlling Processes [0.0]
A novel approach based on evolutionary algorithms is proposed to self-optimise the system logic of complex processes.
The presented approach is evaluated on an industrial liquid station process subject to a multi-objective problem.
arXiv Detail & Related papers (2023-04-12T06:36:54Z)
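A minimal evolutionary loop for a multi-objective process-tuning problem of the kind described above. The `simulate` stub, its two objectives, and the scalarization weight are invented for illustration; the paper's process model and encoding differ.

```python
import random

random.seed(1)

def simulate(params):
    """Stub for simulating the PLC-controlled process with candidate logic
    parameters; returns (throughput, energy) objectives to trade off."""
    fill_rate, threshold = params
    throughput = fill_rate * (1.0 - abs(threshold - 0.6))
    energy = 0.5 * fill_rate ** 2 + 0.1
    return throughput, energy

def fitness(params, w=0.7):
    # Scalarized multi-objective: reward throughput, penalize energy use.
    throughput, energy = simulate(params)
    return w * throughput - (1 - w) * energy

def evolve(pop_size=20, generations=30, sigma=0.1):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # truncation selection
        children = [
            [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
             for g in random.choice(parents)]  # Gaussian mutation
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best parameters:", [round(g, 3) for g in best],
      "fitness:", round(fitness(best), 3))
```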
- Neural Combinatorial Logic Circuit Synthesis from Input-Output Examples [10.482805367361818]
We propose a novel, fully explainable neural approach to the synthesis of logic circuits from input-output examples.
Our method can be employed for a virtually arbitrary choice of atoms.
arXiv Detail & Related papers (2022-10-29T14:06:42Z)
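To make the task itself concrete: find a circuit consistent with input-output examples. The paper solves this with an explainable neural method; the brute-force enumeration below over tiny two-gate circuits only illustrates the problem setup, not the approach.

```python
from itertools import product

GATES = {
    "AND": lambda a, b: a & b,
    "OR": lambda a, b: a | b,
    "XOR": lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

# Target examples: the full-adder sum bit s(a, b, c) = a XOR b XOR c.
examples = [((a, b, c), a ^ b ^ c) for a, b, c in product((0, 1), repeat=3)]

def synthesize():
    """Enumerate circuits of the form g2(g1(x_i, x_j), x_k) and return the
    first one consistent with every input-output example."""
    for g1, g2 in product(GATES, repeat=2):
        for i, j, k in product(range(3), repeat=3):
            if all(GATES[g2](GATES[g1](x[i], x[j]), x[k]) == y
                   for x, y in examples):
                return f"{g2}({g1}(x{i}, x{j}), x{k})"
    return None

print(synthesize())
```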
- Transformer for Partial Differential Equations' Operator Learning [0.0]
We present an attention-based framework for data-driven operator learning, which we term Operator Transformer (OFormer).
Our framework is built upon self-attention, cross-attention, and a set of point-wise multilayer perceptrons (MLPs).
arXiv Detail & Related papers (2022-05-26T23:17:53Z)
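A minimal sketch of the cross-attention mechanism at the core of such operator learners: each output query location attends to the sampled input function. The random projections below stand in for trained encoders, so the prediction is untrained and purely mechanical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention: each query (output)
    location attends over the encoded input-function samples."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

# Toy setup: the input function u(x) is sampled at 32 points; we query it at
# 8 arbitrary locations. Encodings are untrained random projections standing
# in for the learned self-attention encoder.
rng = np.random.default_rng(0)
x_in = np.linspace(0.0, 1.0, 32)[:, None]
u_in = np.sin(2 * np.pi * x_in)
x_out = rng.uniform(0.0, 1.0, size=(8, 1))

W_q, W_k = rng.normal(size=(1, 16)), rng.normal(size=(1, 16))
pred = cross_attention(x_out @ W_q, x_in @ W_k, u_in)
print(pred.ravel().round(3))
```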
- Gradient-Based Learning of Discrete Structured Measurement Operators for Signal Recovery [16.740247586153085]
We show how to leverage gradient-based learning to solve discrete optimization problems.
Our approach is formalized by GLODISMO (Gradient-based Learning of DIscrete Structured Measurement Operators).
We empirically demonstrate the performance and flexibility of GLODISMO in several signal recovery applications.
arXiv Detail & Related papers (2022-02-07T18:27:08Z)
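A generic sketch of learning a discrete measurement operator with gradients (not GLODISMO itself): a row-selection mask is relaxed through a sigmoid, trained against a D-optimal design objective, and rounded to the top-k rows afterwards. The objective and penalty are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, d = 16, 4, 3                             # candidate rows, budget, signal dim
B = rng.normal(size=(n, d))                    # signals of interest: x = B @ z

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
theta = np.zeros(n)                            # logits of the relaxed row mask
lam, lr = 1.0, 0.05

for _ in range(500):
    m = sigmoid(theta)
    A = (B * m[:, None]).T @ B + 1e-3 * np.eye(d)
    Ainv = np.linalg.inv(A)
    # d/dm_i of -logdet(A) is -b_i^T A^{-1} b_i; a penalty keeps sum(m) near k.
    grad_m = -np.einsum("ij,jk,ik->i", B, Ainv, B) + lam * np.sign(m.sum() - k)
    theta -= lr * grad_m * m * (1.0 - m)       # chain rule through the sigmoid

mask = np.argsort(-theta)[:k]                  # round: keep the top-k rows
print("selected measurement rows:", sorted(mask.tolist()))
```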
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNNs).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
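A minimal weighted soft-AND neuron illustrating why such learned rules stay readable: near-zero literal weights drop out of the conjunction. This uses a generic product t-norm, not IBM's LNN formulation, and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_and(X, w):
    """Weighted product-t-norm conjunction: AND_w(x) = prod_i x_i^{w_i}.
    A weight near 0 makes the corresponding literal irrelevant."""
    return np.prod(np.clip(X, 1e-6, 1.0) ** w, axis=1)

# Fuzzy truth-value inputs; the ground-truth rule to recover is x0 AND x2.
X = rng.uniform(0.0, 1.0, size=(512, 4))
y = X[:, 0] * X[:, 2]

w = np.full(4, 0.5)
lr = 1.0
for _ in range(2000):
    p = soft_and(X, w)
    # Mean-squared-error gradient; dp/dw_i = p * log(x_i).
    grad = ((p - y) * p)[:, None] * np.log(np.clip(X, 1e-6, 1.0))
    w = np.clip(w - lr * grad.mean(axis=0), 0.0, 5.0)

# Weights should grow toward 1 on x0 and x2 and shrink on the distractors,
# which is what makes the learned rule directly readable.
print("learned literal weights:", w.round(2))
```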
- Certified Reinforcement Learning with Logic Guidance [78.2286146954051]
We propose a model-free RL algorithm that enables the use of Linear Temporal Logic (LTL) to formulate a goal for unknown continuous-state/action Markov Decision Processes (MDPs).
The algorithm is guaranteed to synthesise a control policy whose traces satisfy the specification with maximal probability.
arXiv Detail & Related papers (2019-02-02T20:09:32Z)
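The general recipe behind LTL-guided RL is to run learning on the product of the MDP with an automaton that monitors the formula and pays reward on accepting transitions. Below is a minimal tabular illustration for the property "eventually goal" on a toy chain MDP; it is a generic construction, not the paper's certified algorithm for continuous MDPs.

```python
import random

random.seed(0)
N_S, GOAL = 5, 4                               # chain MDP; label "goal" at state 4
ACTIONS = (-1, +1)

def automaton_step(q, labels):
    """Two-state monitor for the LTL property F goal ('eventually goal'):
    q=0 waiting, q=1 accepted; reward is paid on the accepting transition."""
    if q == 0 and "goal" in labels:
        return 1, 1.0
    return q, 0.0

Q = {}                                         # Q-values over the product state
def greedy(key):
    vals = [Q.get((key, a), 0.0) for a in range(2)]
    top = max(vals)
    return random.choice([a for a, v in enumerate(vals) if v == top])

alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(500):
    s, q = 0, 0                                # product state: (MDP, automaton)
    for _ in range(30):
        key = (s, q)
        a = random.randrange(2) if random.random() < eps else greedy(key)
        s2 = min(max(s + ACTIONS[a], 0), N_S - 1)
        q2, r = automaton_step(q, {"goal"} if s2 == GOAL else set())
        best = max(Q.get(((s2, q2), b), 0.0) for b in range(2))
        Q[(key, a)] = (1 - alpha) * Q.get((key, a), 0.0) + alpha * (r + gamma * best)
        s, q = s2, q2
        if q == 1:
            break                              # property satisfied
print("greedy actions from q=0:", [greedy((s, 0)) for s in range(N_S)])
```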
This list is automatically generated from the titles and abstracts of the papers listed on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.