ShortCircuit: AlphaZero-Driven Circuit Design
- URL: http://arxiv.org/abs/2408.09858v2
- Date: Wed, 2 Oct 2024 12:22:10 GMT
- Title: ShortCircuit: AlphaZero-Driven Circuit Design
- Authors: Dimitrios Tsaras, Antoine Grosnit, Lei Chen, Zhiyao Xie, Haitham Bou-Ammar, Mingxuan Yuan
- Abstract summary: Chip design relies heavily on generating circuits, such as AND-Inverter Graphs (AIGs), from functional descriptions like truth tables.
Recent advances in deep learning have aimed to accelerate circuit design, but these efforts have mostly focused on tasks other than synthesis.
We introduce ShortCircuit, a novel transformer-based architecture that leverages the structural properties of AIGs and performs efficient space exploration.
- Score: 12.3162550019215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chip design relies heavily on generating Boolean circuits, such as AND-Inverter Graphs (AIGs), from functional descriptions like truth tables. This generation operation is a key process in logic synthesis, a primary chip design stage. While recent advances in deep learning have aimed to accelerate circuit design, these efforts have mostly focused on tasks other than synthesis, and traditional heuristic methods have plateaued. In this paper, we introduce ShortCircuit, a novel transformer-based architecture that leverages the structural properties of AIGs and performs efficient space exploration. Contrary to prior approaches attempting end-to-end generation of logic circuits using deep networks, ShortCircuit employs a two-phase process combining supervised and reinforcement learning to enhance generalization to unseen truth tables. We also propose an AlphaZero variant to handle the doubly exponentially large state space and the reward sparsity, enabling the discovery of near-optimal designs. To evaluate the generative performance of our model, we extract 500 truth tables from a set of 20 real-world circuits. ShortCircuit successfully generates AIGs for $98\%$ of the 8-input test truth tables, and outperforms the state-of-the-art logic synthesis tool, ABC, by $18.62\%$ in terms of circuit size.
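To make the search problem concrete, below is a minimal sketch (all details assumed rather than taken from the paper) of the AIG-construction game an AlphaZero-style agent would play: a state is the set of truth tables computable by the nodes built so far, an action ANDs two optionally inverted existing nodes, and an episode ends when the target function is realized up to output inversion. The state space is doubly exponential because there are $2^{2^n}$ distinct $n$-input truth tables. A greedy chooser stands in for the paper's transformer policy, value network, and MCTS.

```python
# Toy AIG-construction "game"; a greedy rollout stands in for learned search.
from itertools import product

N_INPUTS = 3                  # tiny example; the paper evaluates 8-input tables
N_ROWS = 2 ** N_INPUTS
MASK = (1 << N_ROWS) - 1      # truth tables stored as N_ROWS-bit integers

def input_table(i: int) -> int:
    """Truth table (bitmask over all input assignments) of input variable i."""
    bits = 0
    for row, assignment in enumerate(product([0, 1], repeat=N_INPUTS)):
        if assignment[i]:
            bits |= 1 << row
    return bits

def apply_and(t1: int, inv1: bool, t2: int, inv2: bool) -> int:
    """Semantics of one AIG node: AND of two fanins with optional inversion."""
    a = (~t1 & MASK) if inv1 else t1
    b = (~t2 & MASK) if inv2 else t2
    return a & b

def legal_actions(n: int):
    """Every (fanin1, invert1, fanin2, invert2) choice over n existing nodes."""
    for i in range(n):
        for j in range(i, n):
            for inv1, inv2 in product([False, True], repeat=2):
                yield (i, inv1, j, inv2)

target = 0b11100000           # example target: f(x0, x1, x2) = x0 AND (x1 OR x2)
tables = [input_table(i) for i in range(N_INPUTS)]

def dist(t: int) -> int:
    """Bits still wrong; AIG outputs may be complemented, so try both polarities."""
    return min(bin(t ^ target).count("1"), bin((~t & MASK) ^ target).count("1"))

for step in range(20):
    if any(dist(t) == 0 for t in tables):
        print(f"target realized with {len(tables) - N_INPUTS} AND nodes")
        break
    # A real agent scores actions with a learned policy/value network plus MCTS;
    # this greedy stand-in just takes the action that best matches the target.
    candidates = [apply_and(tables[i], v1, tables[j], v2)
                  for (i, v1, j, v2) in legal_actions(len(tables))]
    candidates = [t for t in candidates if t not in tables]  # skip duplicate nodes
    tables.append(min(candidates, key=dist))
else:
    print("greedy rollout failed; this is where learned search pays off")
```

The greedy run realizes the target but not necessarily with a minimal node count; ShortCircuit's reinforcement-learning phase is what pushes toward size-optimal AIGs.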
Related papers
- Benchmarking End-To-End Performance of AI-Based Chip Placement Algorithms [77.71341200638416]
ChiPBench is a benchmark designed to evaluate the effectiveness of AI-based chip placement algorithms.
We have gathered 20 circuits from various domains (e.g., CPU, GPU, and microcontrollers) for evaluation.
Results show that even when a single-point algorithm dominates on its intermediate metric, the final PPA (power, performance, and area) results can be unsatisfactory.
arXiv Detail & Related papers (2024-07-03T03:29:23Z)
- Finding Transformer Circuits with Edge Pruning [71.12127707678961]
We propose Edge Pruning as an effective and scalable solution to automated circuit discovery.
Our method finds circuits in GPT-2 that use less than half the number of edges compared to circuits found by previous methods.
Thanks to its efficiency, we scale Edge Pruning to CodeLlama-13B, a model over 100x the scale that prior methods operate on.
arXiv Detail & Related papers (2024-06-24T16:40:54Z)
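As a rough stand-in for the edge-pruning idea above (my own toy setup, not the paper's method, which prunes edges between components of a real transformer): each candidate edge gets a learnable gate, a behavior-matching loss keeps the pruned model faithful to the full one, and a sparsity penalty drives most gates to zero.

```python
# Toy mask-based circuit discovery: learnable gates on edges plus a sparsity loss.
import torch
import torch.nn as nn

class MaskedEdgeMLP(nn.Module):
    """Two components whose interconnecting edges carry learnable masks."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.f1 = nn.Linear(dim, dim)
        self.f2 = nn.Linear(dim, dim)
        self.edge_logits = nn.Parameter(torch.zeros(dim))  # one gate per edge

    def forward(self, x):
        gate = torch.sigmoid(self.edge_logits)        # soft edge mask in [0, 1]
        return self.f2(gate * self.f1(x))             # pruned edges contribute nothing

    def sparsity_loss(self):
        return torch.sigmoid(self.edge_logits).mean() # encourage few active edges

model = MaskedEdgeMLP()
x, target = torch.randn(8, 16), torch.randn(8, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    # faithfulness term (match the full model's behavior) + sparsity term
    loss = nn.functional.mse_loss(model(x), target) + 0.1 * model.sparsity_loss()
    loss.backward()
    opt.step()
print("active edges:", int((torch.sigmoid(model.edge_logits) > 0.5).sum()))
```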
- CIRCUITSYNTH: Leveraging Large Language Models for Circuit Topology Synthesis [7.131266114437393]
We introduce CIRCUITSYNTH, a novel approach that harnesses LLMs to facilitate the automated synthesis of valid circuit topologies.
Our approach lays the foundation for future research aimed at enhancing circuit efficiency and specifying output voltage.
arXiv Detail & Related papers (2024-06-06T01:59:59Z)
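The summary suggests a generate-then-validate loop. A hypothetical sketch, with `propose_netlist` standing in for an actual LLM call and a deliberately simple connectivity check as the validity filter:

```python
# Hypothetical generate-then-validate loop for circuit topology synthesis.
from collections import defaultdict

def propose_netlist(spec: str) -> list[tuple[str, str, str]]:
    """Stub standing in for an LLM call: (component, node_a, node_b) edges."""
    return [("V1", "in", "gnd"), ("R1", "in", "out"), ("C1", "out", "gnd")]

def is_valid(netlist) -> bool:
    """Minimal structural check: the proposed topology is one connected graph."""
    adj = defaultdict(set)
    for _, a, b in netlist:
        adj[a].add(b)
        adj[b].add(a)
    nodes = set(adj)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == nodes

candidates = [propose_netlist("low-pass RC filter")]
valid = [n for n in candidates if is_valid(n)]
print(f"{len(valid)}/{len(candidates)} proposed topologies pass the validity check")
```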
- Circuit Transformer: End-to-end Circuit Design by Predicting the Next Gate [20.8279111910994]
Language, a prominent human ability to express through sequential symbols, has been computationally mastered by recent advances in large language models (LLMs).
LLMs have shown unprecedented capabilities in understanding and reasoning.
Can circuits also be mastered by a sufficiently large "circuit model", which could conquer electronic design tasks simply by predicting the next logic gate?
arXiv Detail & Related papers (2024-03-14T03:24:14Z)
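Taken literally, "predicting the next gate" is ordinary autoregressive sequence modeling over a gate vocabulary. A minimal sketch under that assumption (toy vocabulary and model, not the paper's architecture):

```python
# Next-gate prediction as next-token prediction over a toy gate vocabulary.
import torch
import torch.nn as nn

VOCAB = ["<pad>", "<start>", "INPUT", "AND", "NOT", "<end>"]

class NextGateModel(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, len(VOCAB))

    def forward(self, tokens):                        # tokens: (batch, seq)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.decoder(self.embed(tokens), mask=causal)
        return self.head(h)                           # logits over the next gate

# Teacher forcing on one toy "circuit": <start> INPUT INPUT AND NOT <end>
seq = torch.tensor([[1, 2, 2, 3, 4, 5]])
model = NextGateModel()
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, len(VOCAB)),
                                   seq[:, 1:].reshape(-1))
print("next-gate prediction loss:", float(loss))
```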
- Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization [23.075466444266528]
This study conducts a thorough examination of learning and search techniques for logic synthesis.
We present ABC-RL, which meticulously tunes an $\alpha$ parameter to adeptly adjust recommendations from pre-trained agents during the search process.
Our findings showcase substantial enhancements in the Quality-of-Result (QoR) of synthesized circuits, with improvements of up to 24.8% compared to state-of-the-art techniques.
arXiv Detail & Related papers (2024-01-22T18:46:30Z)
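One way to read the $\alpha$ mechanism (function names and numbers here are illustrative, not from the paper): retrieval similarity between the new design and previously seen designs sets $\alpha$, which decides how much the pre-trained agent's prior is trusted against raw search estimates.

```python
# Illustrative retrieval-guided blending of a learned prior with search values.
import numpy as np

def alpha_from_retrieval(query_emb, train_embs) -> float:
    """High similarity to previously seen designs -> trust the prior more."""
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-9)
    return float(np.clip(sims.max(), 0.0, 1.0))

def blended_scores(prior_probs, search_values, alpha):
    """Blend the agent's recommendations with search estimates per action."""
    return alpha * prior_probs + (1 - alpha) * search_values

rng = np.random.default_rng(0)
train_embs = rng.normal(size=(100, 8))    # embeddings of previously seen designs
query_emb = train_embs[17] + 0.05 * rng.normal(size=8)  # a near-duplicate design
alpha = alpha_from_retrieval(query_emb, train_embs)
scores = blended_scores(prior_probs=np.array([0.7, 0.2, 0.1]),
                        search_values=np.array([0.3, 0.4, 0.3]), alpha=alpha)
print(f"alpha={alpha:.2f}, chosen synthesis move={int(scores.argmax())}")
```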
- CktGNN: Circuit Graph Neural Network for Electronic Design Automation [67.29634073660239]
This paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates circuit topology generation and device sizing.
We introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains $10$K distinct operational amplifiers.
Our work paves the way toward a learning-based open-sourced design automation for analog circuits.
arXiv Detail & Related papers (2023-08-31T02:20:25Z)
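A toy illustration of coupling a circuit graph with a per-device sizing prediction (my simplification, not CktGNN's actual encoder):

```python
# Tiny message-passing network: embed a device graph, regress a size per device.
import torch
import torch.nn as nn

class TinyCircuitGNN(nn.Module):
    def __init__(self, dim: int = 16, n_device_types: int = 4):
        super().__init__()
        self.embed = nn.Embedding(n_device_types, dim)
        self.msg = nn.Linear(dim, dim)
        self.sizing_head = nn.Linear(dim, 1)    # one sizing value per device

    def forward(self, device_types, adj):
        h = self.embed(device_types)            # (n_nodes, dim)
        for _ in range(3):                      # 3 rounds of message passing
            h = torch.relu(h + adj @ self.msg(h))
        return self.sizing_head(h).squeeze(-1)  # predicted size per device node

# Toy 4-device topology (imaginary amplifier fragment, labels invented)
adj = torch.tensor([[0, 1, 0, 1], [1, 0, 1, 0],
                    [0, 1, 0, 1], [1, 0, 1, 0]], dtype=torch.float)
types = torch.tensor([0, 1, 1, 2])              # e.g. 0=nmos, 1=pmos, 2=cap
print("predicted device sizes:", TinyCircuitGNN()(types, adj))
```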
- A Circuit Domain Generalization Framework for Efficient Logic Synthesis in Chip Design [92.63517027087933]
A key task in Logic Synthesis (LS) is to transform circuits into simplified circuits with equivalent functionalities.
To tackle this task, many LS operators sequentially apply transformations to subgraphs rooted at each node of an input DAG.
We propose a novel data-driven LS operator paradigm, namely PruneX, to reduce ineffective transformations.
arXiv Detail & Related papers (2023-08-22T16:18:48Z)
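The paradigm above reduces to: learn which root nodes are worth transforming, and skip the rest. A hedged sketch with placeholder features and a placeholder classifier (PruneX learns these from past synthesis runs; everything here is invented for illustration):

```python
# Skip subgraph transformations at roots a scorer predicts to be ineffective.
import random

def node_features(dag, node):
    """Toy features: fanin count and fanin subtree size (stand-ins for real ones)."""
    fanins = dag.get(node, [])
    return [len(fanins), sum(len(dag.get(f, [])) for f in fanins)]

def predicted_effective(features) -> bool:
    """Placeholder classifier: only multi-fanin roots are deemed worth trying."""
    return features[0] >= 2

def try_rewrite(dag, node) -> bool:
    """Stand-in for an expensive LS transformation attempt at this root."""
    return random.random() < 0.5   # pretend some attempts simplify the circuit

random.seed(1)
dag = {"n1": [], "n2": [], "n3": ["n1", "n2"], "n4": ["n3"], "n5": ["n3", "n4"]}
attempted = succeeded = 0
for node in dag:                   # sequential pass over roots, as in classic LS
    if not predicted_effective(node_features(dag, node)):
        continue                   # pruned: transformation predicted ineffective
    attempted += 1
    succeeded += try_rewrite(dag, node)
print(f"attempted {attempted}/{len(dag)} roots; successful rewrites: {succeeded}")
```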
- INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search [18.558280701880136]
State-of-the-art logic synthesis algorithms apply a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
arXiv Detail & Related papers (2023-05-22T15:50:42Z)
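In other words, a synthesis recipe is a sequence over minimization passes, and a learned model proposes the next pass. A schematic sketch with a stubbed scorer (the pass names are typical ABC-style operations; the scoring model is invented, and INVICTUS additionally combines this with search):

```python
# Roll out a fixed-length synthesis recipe by scoring the next minimization pass.
import random

PASSES = ["balance", "rewrite", "refactor", "resub"]   # typical ABC-style passes

def policy_scores(history):
    """Stub for a learned model mapping the recipe-so-far to next-pass scores."""
    random.seed(len(history))                 # deterministic toy scores
    return {p: random.random() for p in PASSES}

def generate_recipe(length: int = 6):
    recipe = []
    for _ in range(length):
        scores = policy_scores(recipe)
        recipe.append(max(scores, key=scores.get))  # greedy; search could branch here
    return recipe

print("; ".join(generate_recipe()))
```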
- Graph Neural Network Autoencoders for Efficient Quantum Circuit Optimisation [69.43216268165402]
We show for the first time how to use graph neural network (GNN) autoencoders for the optimisation of quantum circuits.
We construct directed acyclic graphs from the quantum circuits, encode the graphs, and use the encodings to represent RL states.
Our method is a realistic first step towards very large scale RL quantum circuit optimisation.
arXiv Detail & Related papers (2023-03-06T16:51:30Z)
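A condensed sketch of that pipeline, with a toy encoder in place of the paper's GNN autoencoder: gates become DAG nodes, qubit wires induce edges, and a pooled graph encoding serves as the state an RL optimiser would act on.

```python
# Quantum circuit -> DAG -> fixed-size encoding usable as an RL state.
import numpy as np

GATE_IDS = {"H": 0, "CX": 1, "T": 2}

def circuit_to_dag(gates):
    """Edge i -> j whenever gate j is the next operation on one of i's qubits."""
    last_on_qubit, edges = {}, []
    for i, (_, qubits) in enumerate(gates):
        for q in qubits:
            if q in last_on_qubit:
                edges.append((last_on_qubit[q], i))
            last_on_qubit[q] = i
    return edges

def encode_dag(gates, edges, dim: int = 8):
    """Toy encoding: random gate embeddings mixed along DAG edges, then pooled."""
    rng = np.random.default_rng(42)
    table = rng.normal(size=(len(GATE_IDS), dim))
    h = np.stack([table[GATE_IDS[g]] for g, _ in gates])
    for src, dst in edges:              # one round of message passing
        h[dst] += 0.5 * h[src]
    return h.mean(axis=0)               # fixed-size RL state vector

circuit = [("H", [0]), ("CX", [0, 1]), ("T", [1]), ("CX", [0, 1])]
edges = circuit_to_dag(circuit)
print("DAG edges:", edges)
print("RL state:", np.round(encode_dag(circuit, edges), 2))
```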
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit-level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
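A compressed sketch of that pretrain-then-adapt recipe (toy model and data, assumed rather than taken from the paper): a GNN is pretrained to regress node voltages, then its frozen embeddings feed a new head for an unseen circuit-level property.

```python
# Pretrain a node-level GNN on voltage prediction, then reuse its embeddings.
import torch
import torch.nn as nn

class NodeGNN(nn.Module):
    def __init__(self, n_types: int = 3, dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(n_types, dim)
        self.mix = nn.Linear(dim, dim)

    def forward(self, types, adj):
        h = self.embed(types)
        for _ in range(2):                        # 2 rounds of message passing
            h = torch.relu(h + adj @ self.mix(h))
        return h                                  # per-node embeddings

gnn, volt_head = NodeGNN(), nn.Linear(16, 1)
adj = torch.tensor([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=torch.float)
types = torch.tensor([0, 1, 2])
voltages = torch.tensor([[1.0], [0.5], [0.0]])    # toy supervision targets

# Pretraining: predict node voltages from the circuit graph alone
opt = torch.optim.Adam(list(gnn.parameters()) + list(volt_head.parameters()), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(volt_head(gnn(types, adj)), voltages)
    loss.backward()
    opt.step()

# Adaptation: reuse frozen embeddings for a new circuit-level property
with torch.no_grad():
    state = gnn(types, adj).mean(dim=0)           # circuit-level representation
gain_head = nn.Linear(16, 1)                      # would be trained on the new task
print("illustrative prediction from the adapted head:", float(gain_head(state)))
```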
This list is automatically generated from the titles and abstracts of the papers on this site.