DeepGate2: Functionality-Aware Circuit Representation Learning
- URL: http://arxiv.org/abs/2305.16373v1
- Date: Thu, 25 May 2023 13:51:12 GMT
- Title: DeepGate2: Functionality-Aware Circuit Representation Learning
- Authors: Zhengyuan Shi, Hongyang Pan, Sadaf Khan, Min Li, Yi Liu, Junhua Huang,
Hui-Ling Zhen, Mingxuan Yuan, Zhufei Chu and Qiang Xu
- Abstract summary: Circuit representation learning aims to obtain neural representations of circuit elements.
Existing solutions, such as DeepGate, have the potential to embed both circuit structural information and functional behavior.
We introduce DeepGate2, a novel functionality-aware learning framework.
- Score: 10.75166513491573
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Circuit representation learning aims to obtain neural representations of
circuit elements and has emerged as a promising research direction that can be
applied to various EDA and logic reasoning tasks. Existing solutions, such as
DeepGate, have the potential to embed both circuit structural information and
functional behavior. However, their capabilities are limited due to weak
supervision or flawed model design, resulting in unsatisfactory performance in
downstream tasks. In this paper, we introduce DeepGate2, a novel
functionality-aware learning framework that significantly improves upon the
original DeepGate solution in terms of both learning effectiveness and
efficiency. Our approach involves using pairwise truth table differences
between sampled logic gates as training supervision, along with a well-designed
and scalable loss function that explicitly considers circuit functionality.
Additionally, we consider inherent circuit characteristics and design an
efficient one-round graph neural network (GNN), resulting in an order of
magnitude faster learning speed than the original DeepGate solution.
Experimental results demonstrate significant improvements in two practical
downstream tasks: logic synthesis and Boolean satisfiability solving. The code
is available at https://github.com/cure-lab/DeepGate2
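
As a concrete illustration of the supervision scheme the abstract describes, here is a minimal sketch, not the authors' released code, of deriving pairwise truth-table differences between gates by random simulation; the gate-list format and function names are illustrative assumptions.

```python
import numpy as np

def simulate(num_inputs, gates, num_patterns=1024, seed=0):
    """Simulate an and-inverter graph under random input patterns.
    gates: list of (op, fanin_a, fanin_b); indices below num_inputs refer to
    primary inputs. This format is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    values = list(rng.integers(0, 2, size=(num_inputs, num_patterns), dtype=np.uint8))
    for op, a, b in gates:
        if op == "AND":
            values.append(values[a] & values[b])
        elif op == "NOT":
            values.append(1 - values[a])
    return np.stack(values)  # (num_nodes, num_patterns) sampled responses

def pairwise_tt_distance(values, i, j):
    """Normalized Hamming distance between the sampled truth tables of gates
    i and j -- the kind of functional supervision target the paper describes."""
    return float(np.mean(values[i] != values[j]))

# Usage: 2 inputs, node 2 = AND(0, 1), node 3 = NOT(node 2).
vals = simulate(2, [("AND", 0, 1), ("NOT", 2, 2)])
print(pairwise_tt_distance(vals, 2, 3))  # 1.0: complementary functions
```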
Related papers
- DeepSeq2: Enhanced Sequential Circuit Learning with Disentangled Representations [9.79382991471473]
We introduce DeepSeq2, a novel framework that enhances the learning of sequential circuits.
By employing an efficient Directed Acyclic Graph Neural Network (DAG-GNN), DeepSeq2 significantly reduces execution times and improves model scalability.
DeepSeq2 sets a new benchmark in sequential circuit representation learning, outperforming prior works in power estimation and reliability analysis.
arXiv Detail & Related papers (2024-11-01T11:57:42Z)
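
As a rough illustration of why the DAG-GNN mentioned in the DeepSeq2 entry can be fast, here is a minimal sketch, not DeepSeq2's implementation, of one-pass message passing in topological order; the aggregate and update callables are placeholders.

```python
import numpy as np

def dag_gnn_forward(num_nodes, edges, init, aggregate, update):
    """edges: (src, dst) pairs with nodes numbered in topological order.
    init: (num_nodes, d) initial features; aggregate/update are placeholders."""
    fanin = {v: [] for v in range(num_nodes)}
    for s, d in edges:
        fanin[d].append(s)
    h = init.copy()
    for v in range(num_nodes):            # each node is visited exactly once
        if fanin[v]:
            msg = aggregate(h[fanin[v]])  # combine predecessor states
            h[v] = update(h[v], msg)      # one-shot state refresh
    return h

# Usage with simple averaging placeholders: node 2 receives from nodes 0 and 1.
h = dag_gnn_forward(3, [(0, 2), (1, 2)], np.ones((3, 4)),
                    aggregate=lambda m: m.mean(axis=0),
                    update=lambda x, m: 0.5 * (x + m))
```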
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective at solving forward and inverse differential equation problems.
However, PINNs can fail to train when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
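
For intuition about the ISGD idea in the entry above, here is a minimal sketch of a generic implicit SGD step, not the paper's training loop; the fixed-point solver and step sizes are illustrative assumptions.

```python
import numpy as np

def isgd_step(theta, grad_fn, lr, inner_iters=50):
    """One implicit step: solve theta_next = theta - lr * grad_fn(theta_next)
    by fixed-point iteration (converges when lr * smoothness < 1)."""
    theta_next = theta.copy()
    for _ in range(inner_iters):
        theta_next = theta - lr * grad_fn(theta_next)
    return theta_next

# Usage on f(x) = x^2 (gradient 2x) with lr = 0.4: the explicit update scales
# x by 1 - 0.8 = 0.2 per step, the implicit one by 1 / (1 + 0.8) ~ 0.56,
# i.e. smaller, more stable steps.
x = np.array([1.0])
for _ in range(5):
    x = isgd_step(x, lambda t: 2.0 * t, lr=0.4)
print(x)  # ~0.05
```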
- DeepSeq: Deep Sequential Circuit Learning [10.402436619244911]
Circuit representation learning is a promising research direction in the electronic design automation (EDA) field.
Existing solutions only target combinational circuits, significantly limiting their applications.
We propose DeepSeq, a novel representation learning framework for sequential netlists.
arXiv Detail & Related papers (2023-02-27T09:17:35Z)
- Biologically Plausible Learning on Neuromorphic Hardware Architectures [27.138481022472]
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware, and vice versa.
arXiv Detail & Related papers (2022-12-29T15:10:59Z)
- Planning for Sample Efficient Imitation Learning [52.44953015011569]
Current imitation algorithms struggle to achieve high performance and high in-environment sample efficiency simultaneously.
We propose EfficientImitate (EI), a planning-based imitation learning method that can achieve high in-environment sample efficiency and performance simultaneously.
Experimental results show that EI achieves state-of-the-art results in performance and sample efficiency.
arXiv Detail & Related papers (2022-10-18T05:19:26Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, derived by minimizing the population loss, that are better suited to active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Towards Scaling Difference Target Propagation by Learning Backprop Targets [64.90165892557776]
Difference Target Propagation (DTP) is a biologically plausible learning algorithm closely related to Gauss-Newton (GN) optimization.
We propose a novel feedback weight training scheme that ensures both that DTP approximates BP and that layer-wise feedback weight training can be restored.
We report the best performance ever achieved by DTP on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2022-01-31T18:20:43Z)
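
For reference, here is a minimal sketch of the standard DTP target rule the entry above refers to; the feedback mappings g_l are placeholders, and the paper's contribution concerns how those feedback weights are trained.

```python
import numpy as np

def dtp_targets(activations, top_target, feedbacks):
    """activations: [h_0, ..., h_L]; feedbacks[l] maps layer l+1 back to
    layer l. Target rule: t_l = h_l + g_l(t_{l+1}) - g_l(h_{l+1})."""
    L = len(activations) - 1
    targets = [None] * (L + 1)
    targets[L] = top_target
    for l in range(L - 1, -1, -1):
        g = feedbacks[l]
        targets[l] = activations[l] + g(targets[l + 1]) - g(activations[l + 1])
    return targets

# Usage with identity feedbacks: the difference correction simply carries the
# top-layer error (-0.5 here) down to every layer.
acts = [np.zeros(3), np.ones(3), np.full(3, 2.0)]
t = dtp_targets(acts, top_target=np.full(3, 1.5), feedbacks=[lambda x: x] * 2)
```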
- Representation Learning of Logic Circuits [7.614021815435811]
We propose a novel representation learning solution that embeds both logic function and structural information of a circuit as vectors on each gate.
Specifically, we propose transforming circuits into a unified and-inverter graph (AIG) format for learning.
We then introduce a novel graph neural network that uses strong inductive biases in practical circuits as learning priors for signal probability prediction.
arXiv Detail & Related papers (2021-11-26T05:57:05Z)
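
As background for the AIG format mentioned in the entry above, here is a minimal sketch of rewriting common gate types using only 2-input ANDs and inverters; the node allocators and opcode names are illustrative assumptions, not the paper's data format.

```python
def to_aig(op, a, b, new_and, new_not):
    """Rewrite a gate over inputs a, b using only AND/NOT AIG nodes.
    new_and(x, y) / new_not(x) allocate nodes and return their ids."""
    if op == "AND":
        return new_and(a, b)
    if op == "OR":   # a | b == ~(~a & ~b)  (De Morgan)
        return new_not(new_and(new_not(a), new_not(b)))
    if op == "XOR":  # a ^ b == ~(a & b) & ~(~a & ~b)
        return new_and(new_not(new_and(a, b)),
                       new_not(new_and(new_not(a), new_not(b))))
    raise ValueError(f"unsupported gate: {op}")

# Usage with trivial allocators that just record structure:
nodes = []
def new_and(x, y):
    nodes.append(("AND", x, y)); return len(nodes) - 1
def new_not(x):
    nodes.append(("NOT", x)); return len(nodes) - 1
out = to_aig("XOR", "a", "b", new_and, new_not)  # 7 nodes for one XOR
```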
- Suppress and Balance: A Simple Gated Network for Salient Object Detection [89.88222217065858]
We propose a simple gated network (GateNet) to solve both issues at once.
With the help of multilevel gate units, the valuable context information from the encoder can be optimally transmitted to the decoder.
In addition, we adopt the atrous spatial pyramid pooling based on the proposed "Fold" operation (Fold-ASPP) to accurately localize salient objects of various scales.
arXiv Detail & Related papers (2020-07-16T02:00:53Z)
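
A minimal sketch of the gating idea in the GateNet entry above, assuming a simple per-pixel sigmoid gate rather than GateNet's exact unit: a learned gate filters how much encoder context reaches the decoder, instead of forwarding skip connections unfiltered.

```python
import torch
import torch.nn as nn

class GateUnit(nn.Module):
    """Per-pixel sigmoid gate over an encoder skip connection (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat, dec_feat):
        g = self.gate(torch.cat([enc_feat, dec_feat], dim=1))
        return dec_feat + g * enc_feat  # only gated encoder context passes

# Usage: gate a 64-channel skip connection at 32x32 resolution.
unit = GateUnit(64)
out = unit(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```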
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high-quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
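
A minimal sketch, in the spirit of the hashing entry above rather than its exact objective, of jointly handling continuous embeddings and discrete codes: binarize with sign() in the forward pass and pass gradients straight through.

```python
import torch

def hash_codes(embedding):
    """Binarize to {-1, +1} in the forward pass; keep gradients flowing to
    the continuous embedding via the straight-through trick."""
    codes = torch.sign(embedding)
    return embedding + (codes - embedding).detach()

# Usage: gradients reach `emb` even though sign() has zero derivative a.e.
emb = torch.randn(4, 16, requires_grad=True)
codes = hash_codes(emb)
codes.sum().backward()
print(emb.grad.shape)  # torch.Size([4, 16])
```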
- Evolving Neural Networks through a Reverse Encoding Tree [9.235550900581764]
This paper advances a method that incorporates a type of topological edge coding, named Reverse Encoding Tree (RET), for efficiently evolving scalable neural networks.
Using RET, two types of approaches -- NEAT with Binary search encoding (Bi-NEAT) and NEAT with Golden-Section search encoding (GS-NEAT) -- have been designed to solve problems in benchmark continuous learning environments.
arXiv Detail & Related papers (2020-02-03T02:29:51Z)