A Circuit Domain Generalization Framework for Efficient Logic Synthesis
in Chip Design
- URL: http://arxiv.org/abs/2309.03208v1
- Date: Tue, 22 Aug 2023 16:18:48 GMT
- Title: A Circuit Domain Generalization Framework for Efficient Logic Synthesis
in Chip Design
- Authors: Zhihai Wang, Lei Chen, Jie Wang, Xing Li, Yinqi Bai, Xijun Li,
Mingxuan Yuan, Jianye Hao, Yongdong Zhang, Feng Wu
- Abstract summary: A key task in Logic Synthesis (LS) is to transform circuits into simplified circuits with equivalent functionalities.
To tackle this task, many LS operators apply transformations to subgraphs -- rooted at each node on an input DAG -- sequentially.
We propose a novel data-driven LS operator paradigm, namely PruneX, to reduce ineffective transformations.
- Score: 92.63517027087933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logic Synthesis (LS) plays a vital role in chip design -- a cornerstone of
the semiconductor industry. A key task in LS is to transform circuits --
modeled by directed acyclic graphs (DAGs) -- into simplified circuits with
equivalent functionalities. To tackle this task, many LS operators apply
transformations to subgraphs -- rooted at each node on an input DAG --
sequentially. However, we found that a large number of transformations are
ineffective, which makes applying these operators highly time-consuming. In
particular, we notice that the runtime of the Resub and Mfs2 operators often
dominates the overall runtime of LS optimization processes. To address this
challenge, we propose a novel data-driven LS operator paradigm, namely PruneX,
to reduce ineffective transformations. The major challenge of developing PruneX
is to learn models that generalize well to unseen circuits, i.e., the
out-of-distribution (OOD) generalization problem. Thus, the major technical
contribution of PruneX is a novel circuit domain generalization framework,
which learns domain-invariant representations based on
transformation-invariant domain knowledge. To the best of our knowledge, PruneX
is the first approach to tackle the OOD problem in LS operators. We integrate
PruneX with the aforementioned Resub and Mfs2 operators. Experiments
demonstrate that PruneX significantly improves their efficiency while keeping
comparable optimization performance on industrial and very large-scale
circuits, achieving up to $3.1\times$ faster runtime.
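The abstract describes the core mechanism at a level a short sketch can make concrete: score the subgraph rooted at each DAG node with a learned model and only run the expensive transformation where an effective change is predicted. The DAG representation, features, and `score_effectiveness` predictor below are hypothetical stand-ins for illustration, not the actual PruneX model or its circuit domain generalization framework.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    node_id: int
    fanins: list = field(default_factory=list)  # ids of the node's fanin nodes


def subgraph_features(dag, root_id, depth=2):
    """Tiny structural features of the subgraph rooted at root_id (size, root fanin)."""
    frontier, size = [root_id], 0
    for _ in range(depth):
        next_frontier = []
        for nid in frontier:
            size += 1
            next_frontier.extend(dag[nid].fanins)
        frontier = next_frontier
    return [size, len(dag[root_id].fanins)]


def score_effectiveness(features):
    """Stand-in for a learned classifier (e.g. an MLP/GNN trained offline on
    transformation outcomes); returns the predicted probability that applying
    the operator at this node actually simplifies the circuit."""
    size, root_fanin = features
    return 0.9 if root_fanin >= 2 and size >= 3 else 0.1


def pruned_operator(dag, apply_transformation, threshold=0.5):
    """Run the operator only on nodes predicted to yield an effective transformation."""
    applied = skipped = 0
    for nid in dag:
        if score_effectiveness(subgraph_features(dag, nid)) >= threshold:
            apply_transformation(dag, nid)  # the expensive Resub/Mfs2-style step
            applied += 1
        else:
            skipped += 1  # ineffective transformation pruned away
    return applied, skipped


if __name__ == "__main__":
    # Toy AIG-like DAG: nodes 0-2 are inputs, 3 and 4 are internal nodes.
    dag = {0: Node(0), 1: Node(1), 2: Node(2), 3: Node(3, [0, 1]), 4: Node(4, [3, 2])}
    print("applied/skipped:", pruned_operator(dag, lambda d, n: None))
```

In the paper's setting, such a predictor would be trained offline on transformation outcomes from previously seen circuits and queried before Resub or Mfs2 attempts each node-rooted transformation; the OOD challenge is making that prediction reliable on unseen circuits.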
Related papers
- On the Optimization and Generalization of Two-layer Transformers with Sign Gradient Descent [51.50999191584981]
Sign Gradient Descent (SignGD) serves as an effective surrogate for Adam.
We study how SignGD optimizes a two-layer transformer on a noisy dataset.
We find that the poor generalization of SignGD is not solely due to data noise, suggesting that both SignGD and Adam require high-quality data for real-world tasks (a minimal sketch of the SignGD update appears after this list).
arXiv Detail & Related papers (2024-10-07T09:36:43Z) - Logic Synthesis with Generative Deep Neural Networks [20.8279111910994]
We introduce a logic synthesis rewriting operator based on the Circuit Transformer model, named "ctrw" (Circuit Transformer Rewriting).
We propose a two-stage training scheme for the Circuit Transformer tailored for logic synthesis, with iterative improvement of optimality through self-improvement training.
We also integrate the Circuit Transformer with state-of-the-art rewriting techniques to address scalability issues, allowing for guided DAG-aware rewriting.
arXiv Detail & Related papers (2024-06-07T07:16:40Z) - Quarl: A Learning-Based Quantum Circuit Optimizer [8.994999903946848]
This paper presents Quarl, a learning-based quantum circuit optimizer.
Applying reinforcement learning to quantum circuit optimization raises two main challenges: the large and varying action space and the non-uniform state representation.
arXiv Detail & Related papers (2023-07-17T19:21:22Z) - Transformers as Statisticians: Provable In-Context Learning with
In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic
Learning and Search [18.558280701880136]
State-of-the-art logic synthesis algorithms rely on a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
arXiv Detail & Related papers (2023-05-22T15:50:42Z) - TAPIR: Learning Adaptive Revision for Incremental Natural Language
Understanding with a Two-Pass Model [14.846377138993645]
Recent neural network-based approaches for incremental processing mainly use RNNs or Transformers.
A restart-incremental interface that repeatedly passes longer input prefixes can be used to obtain partial outputs, while providing the ability to revise.
We propose the Two-pass model for AdaPtIve Revision (TAPIR) and introduce a method to obtain an incremental supervision signal for learning an adaptive revision policy.
arXiv Detail & Related papers (2023-05-18T09:58:19Z) - Graph Neural Network Autoencoders for Efficient Quantum Circuit
Optimisation [69.43216268165402]
We show for the first time how graph neural network (GNN) autoencoders can be used for the optimisation of quantum circuits.
We construct directed acyclic graphs from the quantum circuits, encode the graphs and use the encodings to represent RL states.
Our method is a realistic first step towards very large-scale RL quantum circuit optimisation.
arXiv Detail & Related papers (2023-03-06T16:51:30Z) - Advancing Model Pruning via Bi-level Optimization [89.88761425199598]
Iterative magnitude pruning (IMP) is the predominant pruning method for successfully finding 'winning tickets'.
One-shot pruning methods have been developed, but these schemes are usually unable to find winning tickets as good as IMP.
We show that the pruning problem underlying the proposed bi-level optimization-oriented method (termed BiP) is a special class of BLO problems with a bi-linear problem structure.
arXiv Detail & Related papers (2022-10-08T19:19:29Z) - Rethinking Reinforcement Learning based Logic Synthesis [13.18408482571087]
We develop a new RL-based method that can automatically recognize critical operators and generate common operator sequences generalizable to unseen circuits.
Our algorithm is verified on the EPFL benchmark, a private dataset, and a circuit at industrial scale.
arXiv Detail & Related papers (2022-05-16T12:15:32Z)
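As a side note on the first related paper above, SignGD updates each parameter with the sign of its gradient rather than the gradient itself. The toy sketch below assumes only the standard SignGD update rule on a simple quadratic objective, not the paper's two-layer transformer setup.

```python
import numpy as np


def signgd_step(params, grads, lr=0.02):
    """One SignGD update: move each parameter by -lr * sign(its gradient)."""
    return {name: p - lr * np.sign(grads[name]) for name, p in params.items()}


# Toy usage: minimise f(w) = ||w||^2, whose gradient is 2w; each coordinate
# shrinks toward 0 and then oscillates within +/- lr of it.
w = {"w": np.array([0.5, -1.0, 2.0])}
for _ in range(200):
    grads = {"w": 2.0 * w["w"]}
    w = signgd_step(w, grads)
print(w["w"])
```

Because every coordinate moves by exactly lr per step regardless of gradient magnitude, the update is normalized in a way reminiscent of Adam, which is why SignGD is studied as a surrogate for it.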