Inverse Design in Distributed Circuits Using Single-Step Reinforcement Learning
- URL: http://arxiv.org/abs/2506.08029v1
- Date: Mon, 02 Jun 2025 02:31:52 GMT
- Title: Inverse Design in Distributed Circuits Using Single-Step Reinforcement Learning
- Authors: Jiayu Li, Masood Mortazavi, Ning Yan, Yihong Ma, Reza Zafarani
- Abstract summary: DCIDA is a design exploration framework that learns a near-optimal design sampling policy for a target transfer function. Our experiments demonstrate DCIDA's Transformer-based policy network achieves significant reductions in design error.
- Score: 10.495642893440351
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The goal of inverse design in distributed circuits is to generate near-optimal designs that meet a desirable transfer function specification. Existing design exploration methods use some combination of strategies involving artificial grids, differentiable evaluation procedures, and specific template topologies. However, real-world design practices often require non-differentiable evaluation procedures, varying topologies, and near-continuous placement spaces. In this paper, we propose DCIDA, a design exploration framework that learns a near-optimal design sampling policy for a target transfer function. DCIDA decides all design factors in a compound single-step action by sampling from a set of jointly-trained conditional distributions generated by the policy. Utilizing an injective interdependent "map", DCIDA transforms raw sampled design "actions" into uniquely equivalent physical representations, enabling the framework to learn the conditional dependencies among joint "raw" design decisions. Our experiments demonstrate DCIDA's Transformer-based policy network achieves significant reductions in design error compared to state-of-the-art approaches, with significantly better fit in cases involving more complex transfer functions.
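The abstract describes sampling all design factors in one compound single-step action and updating the policy against the resulting design error. A minimal sketch of that idea, using independent per-factor categorical distributions and a plain REINFORCE update in place of the paper's Transformer-based conditional policy; the toy design space, the error function standing in for a transfer-function mismatch, and all hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "design space": 3 design factors, each with 4 discrete choices.
N_FACTORS, N_CHOICES = 3, 4

# Hypothetical optimum; design_error stands in for the (possibly
# non-differentiable) transfer-function evaluation in the paper.
TARGET = np.array([1, 3, 0])

def design_error(action):
    return int(np.abs(action - TARGET).sum())

# Policy: per-factor logits. (DCIDA conditions factors on one another;
# treating them as independent is a deliberate simplification here.)
logits = np.zeros((N_FACTORS, N_CHOICES))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

LR, BATCH = 0.5, 32
for _ in range(300):
    probs = softmax(logits)
    # One compound single-step action per sample: all factors at once.
    actions = np.stack([
        np.array([rng.choice(N_CHOICES, p=p) for p in probs])
        for _ in range(BATCH)
    ])
    rewards = -np.array([design_error(a) for a in actions], dtype=float)
    adv = rewards - rewards.mean()  # mean baseline for variance reduction
    grad = np.zeros_like(logits)
    for a, w in zip(actions, adv):
        for f in range(N_FACTORS):
            # Score function of a categorical: (onehot - probs)
            grad[f] += w * (np.eye(N_CHOICES)[a[f]] - probs[f])
    logits += LR * grad / BATCH

best = softmax(logits).argmax(axis=1)
print(best)  # expected to converge toward TARGET
```

With a mean-reward baseline and a modest batch, the per-factor logits concentrate on the choices that minimize the toy design error; the paper's contribution is, in part, replacing the independent distributions here with jointly-trained conditional ones.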
Related papers
- Unified modality separation: A vision-language framework for unsupervised domain adaptation [60.8391821117794]
Unsupervised domain adaptation (UDA) enables models trained on a labeled source domain to handle new unlabeled domains. We propose a unified modality separation framework that accommodates both modality-specific and modality-invariant components. Our methods achieve up to 9% performance gain with 9x computational efficiency.
arXiv Detail & Related papers (2025-08-07T02:51:10Z) - Equivariant Goal Conditioned Contrastive Reinforcement Learning [5.019456977535218]
Contrastive Reinforcement Learning (CRL) provides a promising framework for extracting useful structured representations from unlabeled interactions. We propose Equivariant CRL, which further structures the latent space using equivariant constraints. Our approach consistently outperforms strong baselines across a range of simulated tasks in both state-based and image-based settings.
arXiv Detail & Related papers (2025-07-22T01:13:45Z) - CORE: Constraint-Aware One-Step Reinforcement Learning for Simulation-Guided Neural Network Accelerator Design [3.549422886703227]
CORE is a constraint-aware, one-step reinforcement learning method for simulation-guided DSE. We instantiate CORE for hardware-mapping co-design of neural network accelerators.
arXiv Detail & Related papers (2025-06-04T01:08:34Z) - Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z) - HCVP: Leveraging Hierarchical Contrastive Visual Prompt for Domain Generalization [69.33162366130887]
Domain Generalization (DG) endeavors to create machine learning models that excel in unseen scenarios by learning invariant features.
We introduce a novel method designed to supplement the model with domain-level and task-specific characteristics.
This approach aims to guide the model in more effectively separating invariant features from specific characteristics, thereby boosting the generalization.
arXiv Detail & Related papers (2024-01-18T04:23:21Z) - Generative Inverse Design of Metamaterials with Functional Responses by Interpretable Learning [3.931881794708454]
We propose the Random-forest-based Interpretable Generative Inverse Design (RIGID).
RIGID is a single-shot inverse design method for fast generation of metamaterial designs with on-demand functional behaviors.
We validate RIGID on acoustic and optical metamaterial design problems, each with fewer than 250 training samples.
arXiv Detail & Related papers (2023-12-08T04:24:03Z) - Dual Adaptive Representation Alignment for Cross-domain Few-shot Learning [58.837146720228226]
Few-shot learning aims to recognize novel queries with limited support samples by learning from base knowledge.
Recent progress in this setting assumes that the base knowledge and novel query samples are distributed in the same domains.
We propose to address the cross-domain few-shot learning problem where only extremely few samples are available in target domains.
arXiv Detail & Related papers (2023-06-18T09:52:16Z) - Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z) - Theta-Resonance: A Single-Step Reinforcement Learning Method for Design Space Exploration [10.184056098238766]
We use Theta-Resonance to train an intelligent agent producing progressively more optimal samples.
We specialize existing policy gradient algorithms in deep reinforcement learning (D-RL) to update our policy network.
Although we present only categorical design spaces, we also outline how to use Theta-Resonance to explore continuous and mixed continuous-discrete design spaces.
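The Theta-Resonance summary mentions extending single-step policy gradients from categorical to mixed continuous-discrete design spaces. A hypothetical sketch of one way that extension can look, pairing one categorical factor with one Gaussian-parameterized continuous factor under a shared REINFORCE update; the reward function, targets, and hyperparameters are all invented, and this is not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mixed design space: one categorical factor (4 options) and one
# continuous factor. Targets below are arbitrary stand-ins for a
# design-quality evaluator.
N_CHOICES = 4
TARGET_CAT, TARGET_X = 2, 0.7

def reward(cat, x):
    return -abs(cat - TARGET_CAT) - (x - TARGET_X) ** 2

# Policy parameters: categorical logits plus Gaussian mean / log-std.
logits = np.zeros(N_CHOICES)
mu, log_std = 0.0, 0.0

LR, BATCH = 0.2, 64
for _ in range(500):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    cats = rng.choice(N_CHOICES, size=BATCH, p=p)
    std = np.exp(log_std)
    xs = rng.normal(mu, std, size=BATCH)
    r = np.array([reward(c, x) for c, x in zip(cats, xs)])
    adv = r - r.mean()
    # Score function of the categorical: (onehot - p)
    g_logits = ((np.eye(N_CHOICES)[cats] - p) * adv[:, None]).mean(axis=0)
    # Score functions of the Gaussian w.r.t. mean and log-std
    z = (xs - mu) / std
    g_mu = (z / std * adv).mean()
    g_log_std = ((z ** 2 - 1) * adv).mean()
    logits += LR * g_logits
    mu += LR * g_mu
    log_std = max(log_std + LR * g_log_std, -2.0)  # keep std from collapsing

print(int(np.argmax(p)), round(mu, 2))
```

Both factors are sampled in a single step and share one advantage signal; the log-std floor is a pragmatic guard so exploration noise on the continuous factor does not vanish before the mean settles.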
arXiv Detail & Related papers (2022-11-03T16:08:40Z) - Proximal Policy Optimization-based Transmit Beamforming and Phase-shift Design in an IRS-aided ISAC System for the THz Band [90.45915557253385]
An IRS-aided integrated sensing and communications (ISAC) system operating in the terahertz (THz) band is proposed to maximize the system capacity.
Transmit beamforming and phase-shift design are transformed into a universal optimization problem with ergodic constraints.
arXiv Detail & Related papers (2022-03-21T09:15:18Z) - Robust Topology Optimization Using Multi-Fidelity Variational Autoencoders [1.0124625066746595]
A robust topology optimization (RTO) problem identifies a design with the best average performance.
A neural network method is proposed that offers computational efficiency.
Numerical application of the method is shown on the robust design of an L-bracket structure with a single point load as well as multiple point loads.
arXiv Detail & Related papers (2021-07-19T20:40:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.