The prediction of the quality of results in Logic Synthesis using
Transformer and Graph Neural Networks
- URL: http://arxiv.org/abs/2207.11437v2
- Date: Mon, 21 Aug 2023 15:00:03 GMT
- Title: The prediction of the quality of results in Logic Synthesis using
Transformer and Graph Neural Networks
- Authors: Chenghao Yang, Zhongda Wang, Yinshui Xia, Zhufei Chu
- Abstract summary: We propose a deep learning method to predict the quality of results (QoR) of circuit-optimization sequence pairs.
The Transformer and three typical GNNs are used as a joint learning policy for QoR prediction on unseen circuit-optimization sequence pairs.
The experimental results show that the joint learning of the Transformer and GraphSAGE gives the best results.
- Score: 7.194397166633194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the logic synthesis stage, structure transformations in the synthesis tool
need to be combined into optimization sequences and applied to the circuit to meet
the specified circuit area and delay. However, logic synthesis optimization
sequences are time-consuming to run, and predicting the quality of results (QoR)
of a synthesis optimization sequence for a circuit can help engineers find a
better optimization sequence faster. In this work, we propose a deep learning
method to predict the QoR of unseen circuit-optimization sequence pairs.
Specifically, the structure transformations are translated into vectors by
embedding methods, and an advanced natural language processing (NLP) model, the
Transformer, is used to extract the features of the optimization sequences. In
addition, to enable the model's predictions to generalize from circuit to
circuit, each circuit is represented as a graph encoded by an adjacency matrix
and a feature matrix. Graph neural networks (GNNs) are used to extract the
structural features of the circuits. For this problem, the Transformer and three
typical GNNs are used. Furthermore, the Transformer and GNNs are adopted as a
joint learning policy for QoR prediction on unseen circuit-optimization sequence
pairs. The methods resulting from the combination of the Transformer and the
GNNs are benchmarked. The experimental results show that the joint learning of
the Transformer and GraphSAGE gives the best results. The Mean Absolute Error
(MAE) of the predicted result is 0.412.
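As an illustration only, the following is a minimal sketch (plain PyTorch, not the authors' code) of how such a joint model could be assembled: an embedding layer plus Transformer encoder summarizes the optimization sequence, a GraphSAGE-style mean-aggregation layer summarizes the circuit from its adjacency and node-feature matrices, and an MLP head regresses the QoR under an L1 (MAE) loss. All class names, dimensions, and layer counts are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn


class SAGELayer(nn.Module):
    """One GraphSAGE-style layer: mean-aggregate neighbours via the adjacency matrix."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)   # transform of the node itself
        self.w_neigh = nn.Linear(in_dim, out_dim)  # transform of the neighbour mean

    def forward(self, adj, x):
        # adj: [B, N, N] adjacency matrices, x: [B, N, F] node-feature matrices
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neigh_mean = adj @ x / deg
        return torch.relu(self.w_self(x) + self.w_neigh(neigh_mean))


class QoRPredictor(nn.Module):
    """Joint Transformer (sequence branch) + GraphSAGE-style (circuit branch) regressor."""

    def __init__(self, num_transforms, d_model=64, n_node_feats=4):
        super().__init__()
        # Sequence branch: embed synthesis transformations, encode with a Transformer.
        self.embed = nn.Embedding(num_transforms, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.seq_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Circuit branch: two GraphSAGE-style layers over adjacency + feature matrices.
        self.sage1 = SAGELayer(n_node_feats, d_model)
        self.sage2 = SAGELayer(d_model, d_model)
        # Joint regression head producing a scalar QoR estimate.
        self.head = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1)
        )

    def forward(self, seq_tokens, adj, node_feats):
        seq_repr = self.seq_encoder(self.embed(seq_tokens)).mean(dim=1)  # [B, d_model]
        h = self.sage2(adj, self.sage1(adj, node_feats))
        graph_repr = h.mean(dim=1)                                       # [B, d_model]
        return self.head(torch.cat([seq_repr, graph_repr], dim=-1)).squeeze(-1)


# Toy usage: a batch of 2 circuits with 5 nodes each and optimization sequences of
# 10 transformations drawn from a vocabulary of 20; trained with an L1 (MAE) loss.
model = QoRPredictor(num_transforms=20)
seq = torch.randint(0, 20, (2, 10))
adj = torch.randint(0, 2, (2, 5, 5)).float()
feats = torch.randn(2, 5, 4)
loss = nn.L1Loss()(model(seq, adj, feats), torch.randn(2))
loss.backward()
```

In practice, the sequence vocabulary would correspond to the synthesis tool's structure transformations and the node features to circuit-graph attributes; the pooling scheme and hidden sizes shown here are placeholders.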
Related papers
- Variable-size Symmetry-based Graph Fourier Transforms for image compression [65.7352685872625]
We propose a new family of Symmetry-based Graph Fourier Transforms (SBGFTs) of variable sizes within a coding framework.
Our proposed algorithm generates symmetric graphs on the grid by adding specific symmetrical connections between nodes.
Experiments show that SBGFTs outperform the primary transforms integrated in the explicit Multiple Transform Selection.
arXiv Detail & Related papers (2024-11-24T13:00:44Z)
- Transformers meet Neural Algorithmic Reasoners [16.5785372289558]
We propose a novel approach that combines the Transformer's language understanding with the robustness of graph neural network (GNN)-based neural algorithmic reasoners (NARs).
We evaluate our resulting TransNAR model on CLRS-Text, the text-based version of the CLRS-30 benchmark, and demonstrate significant gains over Transformer-only models for algorithmic reasoning.
arXiv Detail & Related papers (2024-06-13T16:42:06Z)
- Logic Synthesis with Generative Deep Neural Networks [20.8279111910994]
We introduce a logic synthesis rewriting operator based on the Circuit Transformer model, named "ctrw" (Circuit Transformer Rewriting).
We propose a two-stage training scheme for the Circuit Transformer tailored to logic synthesis, with iterative improvement of optimality through self-improvement training.
We also integrate the Circuit Transformer with state-of-the-art rewriting techniques to address scalability issues, allowing for guided DAG-aware rewriting.
arXiv Detail & Related papers (2024-06-07T07:16:40Z)
- Enhancing Quantum Optimization with Parity Network Synthesis [0.0]
We propose a pair of algorithms for parity network synthesis and linear circuit inversion.
Together, these algorithms can build the diagonal component of the QAOA circuit, generally the most expensive in terms of two-qubit gates.
arXiv Detail & Related papers (2024-02-16T22:11:52Z)
- Reducing measurement costs by recycling the Hessian in adaptive variational quantum algorithms [0.0]
We propose an improved quasi-Newton optimization protocol specifically tailored to adaptive VQAs.
We implement a quasi-Newton algorithm where an approximation to the inverse Hessian matrix is continuously built and grown across the iterations of an adaptive VQA.
arXiv Detail & Related papers (2024-01-10T14:08:04Z)
- Uncovering mesa-optimization algorithms in Transformers [61.06055590704677]
Some autoregressive models can learn as an input sequence is processed, without undergoing any parameter changes, and without being explicitly trained to do so.
We show that standard next-token prediction error minimization gives rise to a subsidiary learning algorithm that adjusts the model as new inputs are revealed.
Our findings explain in-context learning as a product of autoregressive loss minimization and inform the design of new optimization-based Transformer layers.
arXiv Detail & Related papers (2023-09-11T22:42:50Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z)
- Full Stack Optimization of Transformer Inference: a Survey [58.55475772110702]
Transformer models achieve superior accuracy across a wide range of applications.
The amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate.
There has been an increased focus on making Transformer models more efficient.
arXiv Detail & Related papers (2023-02-27T18:18:13Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)