Iterative Circuit Repair Against Formal Specifications
- URL: http://arxiv.org/abs/2303.01158v1
- Date: Thu, 2 Mar 2023 11:05:10 GMT
- Title: Iterative Circuit Repair Against Formal Specifications
- Authors: Matthias Cosler, Frederik Schmitt, Christopher Hahn, Bernd Finkbeiner
- Abstract summary: We present a deep learning approach for repairing sequential circuits against formal specifications given in linear-time temporal logic (LTL).
We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit.
Our proposed repair mechanism significantly improves the automated synthesis of circuits from specifications with Transformers.
- Score: 3.7277730514654555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deep learning approach for repairing sequential circuits against
formal specifications given in linear-time temporal logic (LTL). Given a
defective circuit and its formal specification, we train Transformer models to
output circuits that satisfy the corresponding specification. We propose a
separated hierarchical Transformer for multimodal representation learning of
the formal specification and the circuit. We introduce a data generation
algorithm that enables generalization to more complex specifications and
out-of-distribution datasets. In addition, our proposed repair mechanism
significantly improves the automated synthesis of circuits from LTL
specifications with Transformers. It improves the state-of-the-art by $6.8$
percentage points on held-out instances and $11.8$ percentage points on an
out-of-distribution dataset from the annual reactive synthesis competition.
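The repair mechanism described above can be pictured as a propose-and-verify loop. The following Python sketch illustrates that idea under stated assumptions: `RepairModel`, `propose`, and `model_check` are hypothetical placeholders standing in for the trained separated hierarchical Transformer and an external LTL model checker; this is not the authors' actual implementation.

```python
# Minimal sketch of an iterative repair loop, assuming hypothetical
# `RepairModel` and `model_check` interfaces (not the paper's code).
from typing import Optional


class RepairModel:
    """Hypothetical wrapper around a trained specification+circuit Transformer."""

    def propose(self, spec_ltl: str, circuit_aiger: str) -> str:
        """Return a candidate repaired circuit (e.g. AIGER text) for the given spec."""
        raise NotImplementedError  # stands in for the trained model's decoder


def model_check(spec_ltl: str, circuit_aiger: str) -> bool:
    """Hypothetical oracle: does the circuit satisfy the LTL specification?"""
    raise NotImplementedError  # an external model checker would be called here


def iterative_repair(model: RepairModel,
                     spec_ltl: str,
                     defective_circuit: str,
                     max_rounds: int = 5) -> Optional[str]:
    """Repeatedly ask the model for a repair candidate and verify it against the spec."""
    candidate = defective_circuit
    for _ in range(max_rounds):
        candidate = model.propose(spec_ltl, candidate)
        if model_check(spec_ltl, candidate):
            return candidate  # verified repair found
        # otherwise the failing candidate becomes the next defective input
    return None  # no verified repair within the round budget
```

Read this way, the abstract's claim that repair improves synthesis is natural: a synthesized circuit that violates its specification can be treated as the defective input and fed back through the same loop.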
Related papers
- CktGen: Specification-Conditioned Analog Circuit Generation [28.780603785886242]
We introduce a task that directly generates analog circuits from given specifications.
Specifically, we propose CktGen, a simple yet effective variational autoencoder (VAE) model.
We conduct comprehensive experiments on the Open Circuit Benchmark (OCB) and introduce new evaluation metrics for cross-model consistency.
arXiv Detail & Related papers (2024-10-01T18:35:44Z)
- LaMAGIC: Language-Model-based Topology Generation for Analog Integrated Circuits [17.002169206594793]
We introduce LaMAGIC, a pioneering language model-based topology generation model.
LaMAGIC can efficiently generate an optimized circuit design from a custom specification in a single pass.
LaMAGIC achieves a success rate of up to 96% under a strict tolerance of 0.01.
arXiv Detail & Related papers (2024-07-19T22:51:41Z)
- Logic Synthesis with Generative Deep Neural Networks [20.8279111910994]
We introduce a logic synthesis rewriting operator based on the Circuit Transformer model, named "ctrw" (Circuit Transformer Rewriting).
We propose a two-stage training scheme for the Circuit Transformer tailored for logic synthesis, with iterative improvement of optimality through self-improvement training.
We also integrate the Circuit Transformer with state-of-the-art rewriting techniques to address scalability issues, allowing for guided DAG-aware rewriting.
arXiv Detail & Related papers (2024-06-07T07:16:40Z)
- Circuit Transformer: End-to-end Circuit Design by Predicting the Next Gate [20.8279111910994]
Language, a prominent human ability to express through sequential symbols, has been computationally mastered by recent advances in large language models (LLMs).
LLMs have shown unprecedented capabilities in understanding and reasoning.
Can circuits also be mastered by a sufficiently large "circuit model", which can conquer electronic design tasks by simply predicting the next logic gate?
arXiv Detail & Related papers (2024-03-14T03:24:14Z)
- How Do Transformers Learn In-Context Beyond Simple Functions? A Case Study on Learning with Representations [98.7450564309923]
This paper takes initial steps toward understanding in-context learning (ICL) in more complex scenarios by studying learning with representations.
We construct synthetic in-context learning problems with a compositional structure, where the label depends on the input through a possibly complex but fixed representation function.
We show theoretically the existence of transformers that approximately implement such algorithms with mild depth and size.
arXiv Detail & Related papers (2023-10-16T17:40:49Z)
- CktGNN: Circuit Graph Neural Network for Electronic Design Automation [67.29634073660239]
This paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates circuit topology generation and device sizing.
We introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains $10$K distinct operational amplifiers.
Our work paves the way toward a learning-based open-sourced design automation for analog circuits.
arXiv Detail & Related papers (2023-08-31T02:20:25Z)
- Adaptive Planning Search Algorithm for Analog Circuit Verification [53.97809573610992]
We propose a machine learning (ML) approach that uses fewer simulations.
We show that the proposed approach is able to provide OCCs closer to the specifications for all circuits.
arXiv Detail & Related papers (2023-06-23T12:57:46Z)
- Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection [88.23337313766353]
This work first provides a comprehensive statistical theory for transformers to perform ICL.
We show that transformers can implement a broad class of standard machine learning algorithms in context.
A single transformer can adaptively select different base ICL algorithms.
arXiv Detail & Related papers (2023-06-07T17:59:31Z)
- Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers [71.32827362323205]
We propose a new class of linear Transformers called Learner-Transformers (Learners).
They incorporate a wide range of relative positional encoding mechanisms (RPEs).
These include regular RPE techniques applied for sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces.
arXiv Detail & Related papers (2023-02-03T18:57:17Z)
- Learning Bounded Context-Free-Grammar via LSTM and the Transformer: Difference and Explanations [51.77000472945441]
Long Short-Term Memory (LSTM) and Transformers are two popular neural architectures used for natural language processing tasks.
In practice, it is often observed that Transformer models have better representation power than LSTM.
We study such practical differences between LSTM and Transformer and propose an explanation based on their latent space decomposition patterns.
arXiv Detail & Related papers (2021-12-16T19:56:44Z)
- Neural Circuit Synthesis from Specification Patterns [5.7923858184309385]
We train hierarchical Transformers on the task of synthesizing hardware circuits directly out of high-level logical specifications.
New approaches using machine learning might open a lot of possibilities in this area, but suffer from the lack of sufficient amounts of training data.
We show that hierarchical Transformers trained on synthetic data generated from specification patterns solve a significant portion of problems from the synthesis competitions.
arXiv Detail & Related papers (2021-07-25T18:17:33Z)