AC-Refiner: Efficient Arithmetic Circuit Optimization Using Conditional Diffusion Models
- URL: http://arxiv.org/abs/2507.02598v2
- Date: Tue, 16 Sep 2025 12:23:53 GMT
- Title: AC-Refiner: Efficient Arithmetic Circuit Optimization Using Conditional Diffusion Models
- Authors: Chenhao Xue, Kezhi Li, Jiaxing Zhang, Yi Ren, Zhengyuan Shi, Chen Zhang, Yibo Lin, Lining Zhang, Qiang Xu, Guangyu Sun,
- Abstract summary: AC-Refiner is a novel arithmetic circuit optimization framework leveraging conditional diffusion models. Our key insight is to reframe arithmetic circuit synthesis as a conditional image generation task. By carefully conditioning the denoising diffusion process on target quality-of-results (QoRs), AC-Refiner consistently produces high-quality circuit designs.
- Score: 22.37560730551188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Arithmetic circuits, such as adders and multipliers, are fundamental components of digital systems, directly impacting the performance, power efficiency, and area footprint of the overall design. However, optimizing these circuits remains challenging due to the vast design space and complex physical constraints. While recent deep learning-based approaches have shown promise, they struggle to consistently explore high-potential design variants, limiting their optimization efficiency. To address this challenge, we propose AC-Refiner, a novel arithmetic circuit optimization framework leveraging conditional diffusion models. Our key insight is to reframe arithmetic circuit synthesis as a conditional image generation task. By carefully conditioning the denoising diffusion process on target quality-of-results (QoRs), AC-Refiner consistently produces high-quality circuit designs. Furthermore, the explored designs are used to fine-tune the diffusion model, which focuses the exploration near the Pareto frontier. Experimental results demonstrate that AC-Refiner generates designs with superior Pareto optimality, outperforming state-of-the-art baselines. The performance gain is further validated by integrating AC-Refiner into practical applications.
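The abstract's core idea, conditioning the denoising diffusion process on a target QoR value, can be illustrated with a toy sketch. This is a hypothetical minimal example, not the authors' code: the linear beta schedule, the random-linear-map "denoiser", and the flat design vector standing in for the circuit image are all assumptions made for illustration.

```python
import numpy as np

def make_schedule(T=50):
    """Linear beta schedule and cumulative alpha products (a common
    DDPM default; the paper's actual schedule may differ)."""
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def toy_denoiser(x_t, t_frac, qor_target):
    """Stand-in for a trained noise-prediction network eps_theta(x_t, t, c).
    The QoR condition is simply concatenated to the input; here the
    'network' is a fixed random linear map (illustration only)."""
    inp = np.concatenate([x_t, [t_frac, qor_target]])
    rng = np.random.default_rng(0)  # fixed weights for repeatability
    W = rng.standard_normal((x_t.size, inp.size)) * 0.01
    return W @ inp

def sample(dim=8, qor_target=0.9, T=50, seed=1):
    """Ancestral DDPM sampling, conditioned on a target QoR value."""
    betas, alphas, alpha_bars = make_schedule(T)
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)  # start from pure noise
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t / T, qor_target)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(dim) if t > 0 else np.zeros(dim)
        x = mean + np.sqrt(betas[t]) * noise
    return x

design = sample()
print(design.shape)  # (8,)
```

In the real framework the denoiser would be a trained network and `x` a 2D encoding of the circuit structure; the sketch only shows where the QoR condition enters the reverse process.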
Related papers
- AstRL: Analog and Mixed-Signal Circuit Synthesis with Deep Reinforcement Learning [0.8633481456093186]
This work casts circuit design as a graph generation problem and introduces a novel method of AMS synthesis driven by deep reinforcement learning (AstRL). AstRL generates circuits directly optimized for user-specified targets within a simulator-embedded environment that provides ground-truth feedback during training. Experimental results for three realistic design tasks illustrate substantial improvements in conventional design metrics over state-of-the-art baselines.
arXiv Detail & Related papers (2026-02-12T20:52:39Z) - Prescriptive tool for zero-emissions building fenestration design using hybrid metaheuristic algorithms [0.29022435221103443]
This paper presents a novel simulation-based optimization method for fenestration designed for practical application. It uses a hybrid metaheuristic algorithm and relies on rules and an updatable catalog to fully automate the design process. Nineteen fenestration variables, over which architects have design flexibility, were optimized to reduce heating demand, cooling demand, and thermal discomfort in residential buildings.
arXiv Detail & Related papers (2025-11-26T08:10:23Z) - Rethinking Autoregressive Models for Lossless Image Compression via Hierarchical Parallelism and Progressive Adaptation [75.58269386927076]
Autoregressive (AR) models are often dismissed as impractical due to prohibitive computational cost. This work rethinks this paradigm, introducing a framework built on hierarchical parallelism and progressive adaptation. Experiments on diverse datasets (natural, satellite, medical) validate that our method achieves new state-of-the-art compression.
arXiv Detail & Related papers (2025-11-14T06:27:58Z) - Stiff Circuit System Modeling via Transformer [0.7816640928428988]
We propose a new approach using Crossformer, which is a current state-of-the-art Transformer model for time-series prediction tasks. By leveraging Crossformer's temporal representation capabilities and the enhanced feature extraction of KANs, our method achieves improved fidelity in predicting circuit responses to a wide range of input conditions.
arXiv Detail & Related papers (2025-10-06T01:13:45Z) - AUTOCIRCUIT-RL: Reinforcement Learning-Driven LLM for Automated Circuit Topology Generation [6.2730802180534155]
AUTOCIRCUIT-RL is a novel reinforcement learning-based framework for automated analog circuit synthesis. It generates 12% more valid circuits and improves efficiency by 14% compared to the best baselines. It achieves over 60% success in valid circuits with limited training data, demonstrating strong generalization.
arXiv Detail & Related papers (2025-06-03T17:54:30Z) - InvFussion: Bridging Supervised and Zero-shot Diffusion for Inverse Problems [76.39776789410088]
This work introduces a framework that combines the strong performance of supervised approaches with the flexibility of zero-shot methods. A novel architectural design seamlessly integrates the degradation operator directly into the denoiser. Experimental results on the FFHQ and ImageNet datasets demonstrate state-of-the-art posterior-sampling performance.
arXiv Detail & Related papers (2025-04-02T12:40:57Z) - Architect of the Bits World: Masked Autoregressive Modeling for Circuit Generation Guided by Truth Table [5.300504429005315]
We propose a novel approach integrating conditional generative models with differentiable architecture search (DAS) for circuit generation. Our approach first introduces CircuitVQ, a circuit tokenizer trained based on our Circuit AutoEncoder. We then develop CircuitAR, a masked autoregressive model leveraging CircuitVQ as the tokenizer.
arXiv Detail & Related papers (2025-02-18T11:13:03Z) - Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE).
arXiv Detail & Related papers (2024-10-16T08:07:18Z) - Improved Noise Schedule for Diffusion Training [51.849746576387375]
We propose a novel approach to designing the noise schedule for enhancing the training of diffusion models. We empirically demonstrate the superiority of our noise schedule over the standard cosine schedule.
arXiv Detail & Related papers (2024-07-03T17:34:55Z) - The Missing U for Efficient Diffusion Models [3.712196074875643]
Diffusion Probabilistic Models yield record-breaking performance in tasks such as image synthesis, video generation, and molecule design.
Despite their capabilities, their efficiency, especially in the reverse process, remains a challenge due to slow convergence rates and high computational costs.
We introduce an approach that leverages continuous dynamical systems to design a novel denoising network for diffusion models.
arXiv Detail & Related papers (2023-10-31T00:12:14Z) - Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation [17.164961143132473]
We introduce a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods.
Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data.
Our results demonstrate that Trajectory Alignment (TA) outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost.
arXiv Detail & Related papers (2023-05-29T09:16:07Z) - Conditional Denoising Diffusion for Sequential Recommendation [62.127862728308045]
Two prominent generative models, Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs), have known limitations: GANs suffer from unstable optimization, while VAEs are prone to posterior collapse and over-smoothed generations.
We present a conditional denoising diffusion model, which includes a sequence encoder, a cross-attentive denoising decoder, and a step-wise diffuser.
arXiv Detail & Related papers (2023-04-22T15:32:59Z) - Data-Driven Chance Constrained AC-OPF using Hybrid Sparse Gaussian Processes [57.70237375696411]
The paper proposes a fast data-driven setup that uses the sparse and hybrid Gaussian processes (GP) framework to model the power flow equations with input uncertainty.
We demonstrate the efficiency of the proposed approach with a numerical study over multiple IEEE test cases, showing up to two times faster and more accurate solutions.
arXiv Detail & Related papers (2022-08-30T09:27:59Z) - Fourier Space Losses for Efficient Perceptual Image Super-Resolution [131.50099891772598]
We show that it is possible to improve the performance of a recently introduced efficient generator architecture solely with the application of our proposed loss functions.
We show that our losses' direct emphasis on the frequencies in Fourier-space significantly boosts the perceptual image quality.
The trained generator achieves comparable results with and is 2.4x and 48x faster than state-of-the-art perceptual SR methods RankSRGAN and SRFlow respectively.
arXiv Detail & Related papers (2021-06-01T20:34:52Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices. We propose a strategy for such ansätze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
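Several of the diffusion entries above (e.g. "Improved Noise Schedule for Diffusion Training") benchmark against the standard cosine schedule. A minimal sketch of that baseline schedule, based on the well-known formulation of Nichol & Dhariwal (the step count `T` and offset `s` here are the commonly used defaults, not values taken from any paper above):

```python
import numpy as np

def cosine_alpha_bar(T=1000, s=0.008):
    """Cumulative signal level alpha_bar(t) for t = 0..T under the
    standard cosine schedule: f(t) = cos^2(((t/T + s)/(1 + s)) * pi/2),
    normalized so alpha_bar(0) = 1."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]

def betas_from_alpha_bar(alpha_bar):
    """Recover per-step betas from the cumulative product, clipped to
    avoid degenerate steps near t = T."""
    betas = 1.0 - alpha_bar[1:] / alpha_bar[:-1]
    return np.clip(betas, 0.0, 0.999)

ab = cosine_alpha_bar()
print(ab[0], ab[-1])  # starts at 1.0, decays toward 0
```

The schedule decays `alpha_bar` smoothly and monotonically from 1 to (nearly) 0, which is the property the improved schedules in the literature aim to tune further.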
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.