Logic Synthesis with Generative Deep Neural Networks
- URL: http://arxiv.org/abs/2406.04699v1
- Date: Fri, 7 Jun 2024 07:16:40 GMT
- Title: Logic Synthesis with Generative Deep Neural Networks
- Authors: Xihan Li, Xing Li, Lei Chen, Xing Zhang, Mingxuan Yuan, Jun Wang
- Abstract summary: We introduce a logic synthesis rewriting operator based on the Circuit Transformer model, named "ctrw" (Circuit Transformer Rewriting).
We propose a two-stage training scheme for the Circuit Transformer tailored for logic synthesis, with iterative improvement of optimality through self-improvement training.
We also integrate the Circuit Transformer with state-of-the-art rewriting techniques to address scalability issues, allowing for guided DAG-aware rewriting.
- Score: 20.8279111910994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning has achieved significant success in various domains, its application to logic circuit design has been limited due to complex constraints and strict feasibility requirements. However, a recent generative deep neural model, "Circuit Transformer", has shown promise in this area by enabling equivalence-preserving circuit transformation on a small scale. In this paper, we introduce a logic synthesis rewriting operator based on the Circuit Transformer model, named "ctrw" (Circuit Transformer Rewriting), which incorporates the following techniques: (1) a two-stage training scheme for the Circuit Transformer tailored for logic synthesis, with iterative improvement of optimality through self-improvement training; (2) integration of the Circuit Transformer with state-of-the-art rewriting techniques to address scalability issues, allowing for guided DAG-aware rewriting. Experimental results on the IWLS 2023 contest benchmark demonstrate the effectiveness of our proposed rewriting methods.
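As a rough illustration of how a generative model could drive an equivalence-preserving, DAG-aware rewriting pass of this kind, the Python sketch below assumes a hypothetical `Circuit`/cut interface and a `model.resynthesize` call; it is a simplified mock-up, not the authors' implementation of "ctrw".

```python
"""Illustrative sketch of a model-guided, DAG-aware rewriting pass.

Assumptions (not taken from the paper): the circuit/cut interface
(`topological_order`, `extract_cut`, `replace_cut`, `evaluate`,
`num_gates`, `num_inputs`) and `model.resynthesize` are hypothetical.
"""
from itertools import product


def equivalent_by_simulation(cut_a, cut_b, num_inputs):
    """Check functional equivalence of two small single-output cuts by
    exhaustive simulation over all input assignments (feasible for small cuts)."""
    for bits in product((0, 1), repeat=num_inputs):
        if cut_a.evaluate(bits) != cut_b.evaluate(bits):
            return False
    return True


def guided_rewrite(circuit, model, max_cut_inputs=8):
    """One rewriting pass: for each node, let the generative model propose a
    replacement for the cut rooted there, and accept it only if it is verified
    equivalent and strictly smaller."""
    for node in list(circuit.topological_order()):
        cut = circuit.extract_cut(node, max_inputs=max_cut_inputs)
        if cut is None:
            continue
        candidate = model.resynthesize(cut)            # generative proposal
        if candidate is None:
            continue
        if (candidate.num_gates() < cut.num_gates()
                and equivalent_by_simulation(cut, candidate, cut.num_inputs())):
            circuit.replace_cut(node, candidate)       # commit the smaller cut
    return circuit
```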
Related papers
- CIRCUITSYNTH: Leveraging Large Language Models for Circuit Topology Synthesis [7.131266114437393]
We introduce CIRCUITSYNTH, a novel approach that harnesses LLMs to facilitate the automated synthesis of valid circuit topologies.
Our approach lays the foundation for future research aimed at enhancing circuit efficiency and specifying output voltage.
arXiv Detail & Related papers (2024-06-06T01:59:59Z)
- Circuit Transformer: End-to-end Circuit Design by Predicting the Next Gate [20.8279111910994]
Language, a prominent human ability to express through sequential symbols, has been computationally mastered by recent advances in large language models (LLMs).
LLMs have shown unprecedented capabilities in understanding and reasoning.
Can circuits also be mastered by a sufficiently large "circuit model", which can conquer electronic design tasks by simply predicting the next logic gate?
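Framing circuit design as next-token prediction over gates can be sketched as a greedy decoding loop; the `model.next_gate_logits` interface and the token vocabulary below are illustrative assumptions, not the Circuit Transformer's actual API.

```python
"""Minimal sketch of generating a circuit one gate token at a time, mirroring
next-token prediction in language models. `model.next_gate_logits` (returning
a dict mapping gate tokens to scores) and the token vocabulary ("AND", "NOT",
input references, "EOS") are illustrative assumptions."""


def generate_circuit(model, primary_inputs, max_gates=256):
    """Greedily decode a gate sequence conditioned on the primary inputs."""
    sequence = list(primary_inputs)                    # prompt: the circuit's inputs
    for _ in range(max_gates):
        logits = model.next_gate_logits(sequence)      # dict: token -> score
        token = max(logits, key=logits.get)            # greedy next-gate choice
        if token == "EOS":                             # model signals completion
            break
        sequence.append(token)
    return sequence
```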
arXiv Detail & Related papers (2024-03-14T03:24:14Z)
- Uncovering mesa-optimization algorithms in Transformers [61.06055590704677]
Some autoregressive models can learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
We show that standard next-token prediction error minimization gives rise to a subsidiary learning algorithm that adjusts the model as new inputs are revealed.
Our findings explain in-context learning as a product of autoregressive loss minimization and inform the design of new optimization-based Transformer layers.
arXiv Detail & Related papers (2023-09-11T22:42:50Z)
- A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design [92.63517027087933]
A key task in Logic Synthesis (LS) is to transform circuits into simplified circuits with equivalent functionalities.
To tackle this task, many LS operators sequentially apply transformations to subgraphs rooted at each node of an input DAG.
We propose a novel data-driven LS operator paradigm, namely PruneX, to reduce ineffective transformations.
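The idea of filtering node-rooted transformations with a learned predictor can be sketched as a simple wrapper; the `classifier.prob_effective`, `circuit.node_features`, and `circuit.apply_operator_at` interfaces below are hypothetical, not PruneX's actual design.

```python
"""Sketch of a data-driven pruning wrapper around a conventional LS operator.
The classifier and circuit interfaces are hypothetical assumptions."""


def pruned_rewriting_pass(circuit, classifier, threshold=0.5):
    """Apply the node-rooted transformation only where a learned classifier
    predicts it is likely to improve the circuit, skipping it elsewhere."""
    applied, skipped = 0, 0
    for node in list(circuit.topological_order()):
        features = circuit.node_features(node)         # e.g. cut size, level, fanout
        if classifier.prob_effective(features) >= threshold:
            circuit.apply_operator_at(node)            # run the standard operator
            applied += 1
        else:
            skipped += 1                               # transformation predicted ineffective
    return applied, skipped
```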
arXiv Detail & Related papers (2023-08-22T16:18:48Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search [18.558280701880136]
State-of-the-art logic synthesis algorithms rely on a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
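As a loose illustration of generating a sequence of minimization passes with a learned policy, the sketch below uses ABC-style pass names; the `policy.choose`, `circuit.summary_features`, and `circuit.apply_pass` interfaces and the stopping rule are assumptions, not INVICTUS's actual method.

```python
"""Sketch of learned synthesis-recipe generation: a policy proposes the next
minimization pass, the pass is applied, and the loop stops when area stops
improving. The policy and circuit interfaces are illustrative assumptions."""

PASSES = ["rewrite", "refactor", "resub", "balance", "rewrite -z"]


def synthesize_with_policy(circuit, policy, max_steps=20, patience=3):
    best_area, stale, recipe = circuit.num_gates(), 0, []
    for _ in range(max_steps):
        state = circuit.summary_features()             # e.g. gate count, levels, recent passes
        pass_name = policy.choose(state, PASSES)       # learned choice of the next pass
        circuit = circuit.apply_pass(pass_name)
        recipe.append(pass_name)
        area = circuit.num_gates()
        if area < best_area:
            best_area, stale = area, 0
        else:
            stale += 1
            if stale >= patience:                      # no improvement for a while: stop
                break
    return recipe, circuit
```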
arXiv Detail & Related papers (2023-05-22T15:50:42Z)
- Full Stack Optimization of Transformer Inference: a Survey [58.55475772110702]
Transformer models achieve superior accuracy across a wide range of applications.
The amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate.
There has been an increased focus on making Transformer models more efficient.
arXiv Detail & Related papers (2023-02-27T18:18:13Z)
- Exploiting Transformer in Sparse Reward Reinforcement Learning for Interpretable Temporal Logic Motion Planning [9.801466218905604]
Automaton-based algorithms rely on a manually customized representation of states for the considered task.
We develop a Double-Transformer-guided Temporal Logic framework (T2TL) that exploits the structural feature of Transformer twice.
Progression is exploited as a semantics to decompose the complex task into learnable sub-goals.
arXiv Detail & Related papers (2022-09-27T07:41:11Z)
- The prediction of the quality of results in Logic Synthesis using Transformer and Graph Neural Networks [7.194397166633194]
We propose a deep learning method to predict the quality of results (QoR) of circuit and optimization-sequence pairs.
A Transformer and three typical GNNs are used as a joint learning policy for QoR prediction on unseen circuit-optimization sequences.
The experimental results show that the joint learning of Transformer and GraphSage gives the best results.
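A minimal sketch of such a joint model is given below, assuming PyTorch, toy dimensions, a dense row-normalized adjacency matrix, and a handful of hand-crafted node features; it is not the paper's exact architecture.

```python
"""Minimal PyTorch sketch of a joint QoR predictor: a Transformer encodes the
optimization-pass sequence, a GraphSAGE-style layer encodes the circuit graph,
and an MLP regresses the quality of results. Dimensions and input encodings
are illustrative assumptions."""
import torch
import torch.nn as nn


class SAGELayer(nn.Module):
    """One GraphSAGE-style layer: combine each node with the mean of its
    neighbors (adj is a dense row-normalized adjacency for simplicity)."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        neigh = adj @ x                                  # mean-aggregated neighbor features
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))


class QoRPredictor(nn.Module):
    def __init__(self, num_passes, dim=64):
        super().__init__()
        self.pass_emb = nn.Embedding(num_passes, dim)
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                               batch_first=True)
        self.seq_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.gnn = SAGELayer(dim)
        self.node_proj = nn.Linear(4, dim)               # 4 hand-crafted node features (assumed)
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))

    def forward(self, pass_ids, node_feats, adj):
        seq = self.seq_enc(self.pass_emb(pass_ids))      # (B, L, dim)
        seq = seq.mean(dim=1)                            # pooled recipe embedding
        g = self.gnn(self.node_proj(node_feats), adj)    # (N, dim) node embeddings
        g = g.mean(dim=0, keepdim=True).expand(seq.size(0), -1)
        return self.head(torch.cat([seq, g], dim=-1)).squeeze(-1)
```

In this toy setup, `pass_ids` has shape (batch, recipe_length), `node_feats` has shape (num_nodes, 4) for a single shared circuit graph, and `adj` is its dense row-normalized adjacency.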
arXiv Detail & Related papers (2022-07-23T06:53:57Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that combines the detailed spatial information captured by CNNs with the global context provided by the Transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
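As a rough sketch of the convolution-plus-attention idea only (shapes, layer sizes, and the omission of the adaptive sampling stage are simplifying assumptions, not CSformer's actual design):

```python
"""Toy hybrid block: a CNN stem captures local detail, a Transformer encoder
adds global context over the resulting feature-map tokens. Input is assumed to
be an initial reconstruction; the sampling stage is omitted."""
import torch
import torch.nn as nn


class HybridEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU())
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, x):                                # x: (B, 1, H, W), H and W divisible by 4
        f = self.stem(x)                                 # (B, dim, H/4, W/4) local features
        b, c, h, w = f.shape
        t = self.transformer(f.flatten(2).transpose(1, 2))   # global attention over tokens
        f = t.transpose(1, 2).reshape(b, c, h, w)
        out = self.head(f)                               # coarse refinement map
        return nn.functional.interpolate(out, scale_factor=4, mode="bilinear",
                                         align_corners=False)
```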
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.