DeepSeq: Deep Sequential Circuit Learning
- URL: http://arxiv.org/abs/2302.13608v2
- Date: Sun, 12 Nov 2023 15:49:17 GMT
- Title: DeepSeq: Deep Sequential Circuit Learning
- Authors: Sadaf Khan, Zhengyuan Shi, Min Li, Qiang Xu
- Abstract summary: Circuit representation learning is a promising research direction in the electronic design automation (EDA) field.
Existing solutions only target combinational circuits, significantly limiting their applications.
We propose DeepSeq, a novel representation learning framework for sequential netlists.
- Score: 10.402436619244911
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Circuit representation learning is a promising research direction in the
electronic design automation (EDA) field. With sufficient data for
pre-training, the learned general yet effective representation can help to
solve multiple downstream EDA tasks by fine-tuning it on a small set of
task-related data. However, existing solutions only target combinational
circuits, significantly limiting their applications. In this work, we propose
DeepSeq, a novel representation learning framework for sequential netlists.
Specifically, we introduce a dedicated graph neural network (GNN) with a
customized propagation scheme to exploit the temporal correlations between
gates in sequential circuits. To ensure effective learning, we propose to use a
multi-task training objective with two sets of strongly related supervision:
logic probability and transition probability at each node. A novel dual
attention aggregation mechanism is introduced to facilitate learning both tasks
efficiently. Experimental results on various benchmark circuits show that
DeepSeq outperforms other GNN models for sequential circuit learning. We
evaluate the generalization capability of DeepSeq on a downstream power
estimation task. After fine-tuning, DeepSeq can accurately estimate power
across various circuits under different workloads.
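As a minimal sketch of the idea described in the abstract (and not the authors' implementation), the snippet below shows a tiny PyTorch GNN with two attention-based aggregators, one per supervision signal, and two per-node output heads for logic probability and transition probability. The class name DualAttnGNN, the GRU-based state update, the hidden sizes, and the toy netlist and targets are all illustrative assumptions; DeepSeq's actual propagation scheme and dual attention aggregation are defined in the paper.

```python
# Minimal illustrative sketch (not the authors' code): a message-passing GNN
# with two attention-based aggregators ("dual attention") over fan-in
# neighbours and two per-node heads for logic and transition probability.
import torch
import torch.nn as nn


class DualAttnGNN(nn.Module):
    def __init__(self, in_dim: int = 8, hid: int = 64, rounds: int = 3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid)       # gate-type features -> node state
        self.attn_logic = nn.Linear(2 * hid, 1)   # attention for the logic-probability task
        self.attn_trans = nn.Linear(2 * hid, 1)   # attention for the transition-probability task
        self.update = nn.GRUCell(2 * hid, hid)    # fuse both aggregated messages
        self.rounds = rounds
        self.head_logic = nn.Linear(hid, 1)
        self.head_trans = nn.Linear(hid, 1)

    def _aggregate(self, h, fanin, attn):
        # Attention-weighted sum of predecessor states for every node.
        msgs = []
        for v, preds in enumerate(fanin):
            if not preds:                          # primary inputs: no fan-in
                msgs.append(torch.zeros_like(h[v]))
                continue
            p = h[preds]                           # (k, hid)
            scores = attn(torch.cat([p, h[v].expand_as(p)], dim=-1))
            weights = torch.softmax(scores, dim=0)
            msgs.append((weights * p).sum(dim=0))
        return torch.stack(msgs)

    def forward(self, x, fanin):
        h = torch.relu(self.embed(x))
        for _ in range(self.rounds):               # iterative propagation
            m_logic = self._aggregate(h, fanin, self.attn_logic)
            m_trans = self._aggregate(h, fanin, self.attn_trans)
            h = self.update(torch.cat([m_logic, m_trans], dim=-1), h)
        logic_p = torch.sigmoid(self.head_logic(h)).squeeze(-1)
        trans_p = torch.sigmoid(self.head_trans(h)).squeeze(-1)
        return logic_p, trans_p


# Toy usage: 4 gates with one-hot type features; the fan-in lists and the
# probability targets below are made up for illustration only.
x = torch.eye(4, 8)
fanin = [[], [], [0, 1], [1, 2]]
model = DualAttnGNN()
logic_p, trans_p = model(x, fanin)
target_logic = torch.tensor([0.5, 0.5, 0.25, 0.4])
target_trans = torch.tensor([0.5, 0.5, 0.4, 0.3])
loss = nn.functional.binary_cross_entropy(logic_p, target_logic) + \
       nn.functional.binary_cross_entropy(trans_p, target_trans)
loss.backward()
```

The two BCE terms mirror the multi-task objective of supervising logic probability and transition probability at every node; everything else (feature encoding, number of propagation rounds, state update) is a placeholder, not the paper's design.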
Related papers
- DeepSeq2: Enhanced Sequential Circuit Learning with Disentangled Representations [9.79382991471473]
We introduce DeepSeq2, a novel framework that enhances the learning of sequential circuits.
By employing an efficient Directed Acyclic Graph Neural Network (DAG-GNN), DeepSeq2 significantly reduces execution times and improves model scalability.
DeepSeq2 sets a new benchmark in sequential circuit representation learning, outperforming prior works in power estimation and reliability analysis.
arXiv Detail & Related papers (2024-11-01T11:57:42Z)
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations which align points sharing the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
- DeepGate2: Functionality-Aware Circuit Representation Learning [10.75166513491573]
Circuit representation learning aims to obtain neural representations of circuit elements.
Existing solutions, such as DeepGate, have the potential to embed both circuit structural information and functional behavior.
We introduce DeepGate2, a novel functionality-aware learning framework.
arXiv Detail & Related papers (2023-05-25T13:51:12Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching for efficient architectures for devices with different resource constraints.
We go one step further in the search for efficiency by explicitly framing the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It finds architectures in the top 0.16% and 0.29% on average across two search spaces under a budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- Representation Learning of Logic Circuits [7.614021815435811]
We propose a novel representation learning solution that embeds both logic function and structural information of a circuit as vectors on each gate.
Specifically, we propose transforming circuits into unified and-inverter graph format for learning.
We then introduce a novel graph neural network that uses strong inductive biases in practical circuits as learning priors for signal probability prediction (a toy example of this signal-probability label appears after this list).
arXiv Detail & Related papers (2021-11-26T05:57:05Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It is also capable of cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)
- Auto-MAP: A DQN Framework for Exploring Distributed Execution Plans for DNN Workloads [11.646744408920764]
Auto-MAP is a framework for exploring distributed execution plans for DNN workloads.
It automatically discovers fast parallelization strategies through reinforcement learning at the IR level of deep learning models.
Our evaluation shows that Auto-MAP can find the optimal solution in two hours, while achieving better throughput on several NLP and convolution models.
arXiv Detail & Related papers (2020-07-08T12:38:03Z)
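Both the DeepSeq abstract above (logic probability at each node) and the Representation Learning of Logic Circuits entry (signal probability prediction) supervise models with the probability that a gate evaluates to 1 under random inputs. As a toy illustration under that standard definition (the tiny and-inverter netlist below is invented, not taken from either paper), this label can be computed exactly on a small circuit by enumerating all input assignments:

```python
# Toy illustration (not from either paper) of the logic / signal probability
# label: the fraction of uniformly random primary-input assignments for which
# each node evaluates to 1, computed by exhaustive enumeration on a tiny
# and-inverter graph (AIG).
from itertools import product

# Made-up netlist: nodes 0 and 1 are primary inputs; each AND gate is
# (input_a, invert_a, input_b, invert_b), listed in topological order.
PRIMARY_INPUTS = [0, 1]
AND_GATES = {
    2: (0, False, 1, False),   # n2 = a AND b
    3: (2, True, 1, False),    # n3 = (NOT n2) AND b
}

def evaluate(assignment):
    """Evaluate every node for one assignment of the primary inputs."""
    values = dict(zip(PRIMARY_INPUTS, assignment))
    for gate, (a, inv_a, b, inv_b) in AND_GATES.items():
        va = values[a] ^ inv_a
        vb = values[b] ^ inv_b
        values[gate] = va & vb
    return values

counts = {n: 0 for n in PRIMARY_INPUTS + list(AND_GATES)}
assignments = list(product([0, 1], repeat=len(PRIMARY_INPUTS)))
for bits in assignments:
    for node, val in evaluate(bits).items():
        counts[node] += val

signal_prob = {n: counts[n] / len(assignments) for n in counts}
print(signal_prob)   # {0: 0.5, 1: 0.5, 2: 0.25, 3: 0.25}
```

On realistic circuits exhaustive enumeration is infeasible, which is why such labels are typically derived from logic simulation and then predicted by a learned model.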
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.