Causal-Guided Dimension Reduction for Efficient Pareto Optimization
- URL: http://arxiv.org/abs/2510.09941v1
- Date: Sat, 11 Oct 2025 00:41:04 GMT
- Title: Causal-Guided Dimension Reduction for Efficient Pareto Optimization
- Authors: Dinithi Jayasuriya, Divake Kumar, Sureshkumar Senthilkumar, Devashri Naik, Nastaran Darabi, Amit Ranjan Trivedi
- Abstract summary: CaDRO builds a causal map through a hybrid observational-interventional process, ranking parameters by their causal effect on the objectives. Low-impact parameters are fixed to values from high-quality solutions, while critical drivers remain active in the search. Across amplifiers, regulators, and RF circuits, CaDRO converges up to 10$\times$ faster than NSGA-II.
- Score: 2.9013001432962255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective optimization of analog circuits is hindered by high-dimensional parameter spaces, strong feedback couplings, and expensive transistor-level simulations. Evolutionary algorithms such as Non-dominated Sorting Genetic Algorithm II (NSGA-II) are widely used but treat all parameters equally, thereby wasting effort on variables with little impact on performance, which limits their scalability. We introduce CaDRO, a causal-guided dimensionality reduction framework that embeds causal discovery into the optimization pipeline. CaDRO builds a quantitative causal map through a hybrid observational-interventional process, ranking parameters by their causal effect on the objectives. Low-impact parameters are fixed to values from high-quality solutions, while critical drivers remain active in the search. The reduced design space enables focused evolutionary optimization without modifying the underlying algorithm. Across amplifiers, regulators, and RF circuits, CaDRO converges up to 10$\times$ faster than NSGA-II while preserving or improving Pareto quality. For instance, on the Folded-Cascode Amplifier, hypervolume improves from 0.56 to 0.94, and on the LDO regulator from 0.65 to 0.81, with large gains in non-dominated solutions.
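The abstract describes a two-stage recipe: rank parameters by their interventional effect on the objectives, freeze the weak ones to values taken from good solutions, and run the unchanged multi-objective search over the survivors. Below is a minimal, self-contained sketch of that recipe, not the authors' implementation: the `simulate` objective, the sample budgets, and the top-k cutoff are illustrative placeholders, and a toy random search stands in for NSGA-II. A 2-D hypervolume sweep is included since the abstract reports hypervolume.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x):
    """Placeholder for a transistor-level simulation returning two
    objectives to minimize; a real flow would invoke SPICE here."""
    f1 = np.sum((x - 0.3) ** 2) + 0.1 * x[0] * x[1]
    f2 = np.sum(np.abs(x - 0.7)) + 0.1 * x[2]
    return np.array([f1, f2])

D, N_OBS, N_INT = 12, 64, 16   # dims, observational/interventional budgets

# 1) Observational pass: random designs and their objectives.
X = rng.uniform(0, 1, size=(N_OBS, D))
F = np.array([simulate(x) for x in X])
best = X[np.argsort(F.sum(axis=1))[:5]].mean(axis=0)  # high-quality anchor

# 2) Interventional pass: overwrite one parameter at a time (a do()-style
#    intervention) and record the average shift in the objectives.
effect = np.zeros(D)
for j in range(D):
    for _ in range(N_INT):
        x = rng.uniform(0, 1, size=D)
        x_do = x.copy()
        x_do[j] = rng.uniform(0, 1)
        effect[j] += np.abs(simulate(x_do) - simulate(x)).sum()
effect /= N_INT

# 3) Keep the top-k causal drivers active; freeze the rest to the anchor.
k = 4
active = np.argsort(effect)[-k:]

def expand(z):
    x = best.copy()
    x[active] = z
    return x

# 4) Any multi-objective optimizer now searches only k dimensions.
#    A naive random search with non-dominated filtering stands in here.
Z = rng.uniform(0, 1, size=(256, k))
FZ = np.array([simulate(expand(z)) for z in Z])
nd = [i for i, f in enumerate(FZ)
      if not np.any(np.all(FZ <= f, axis=1) & np.any(FZ < f, axis=1))]

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. a reference point."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front, key=lambda p: p[0]):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

ref = FZ.max(axis=0) + 1e-3
print("active dims:", sorted(active.tolist()))
print("Pareto points:", len(nd), " hypervolume:", hypervolume_2d(FZ[nd], ref))
```

In CaDRO's terms, step 2 plays the role of the hybrid observational-interventional causal map, and `expand` embeds the reduced search vector back into the full design space with the low-impact parameters frozen.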
Related papers
- Efficient Differentiable Causal Discovery via Reliable Super-Structure Learning [51.20606796019663]
We propose ALVGL, a novel and general enhancement to the differentiable causal discovery pipeline. ALVGL employs a sparse and low-rank decomposition to learn the precision matrix of the data (see the background sketch after this entry). We show that ALVGL not only achieves state-of-the-art accuracy but also significantly improves optimization efficiency.
arXiv Detail & Related papers (2026-01-09T02:18:59Z)
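ALVGL's estimator is not detailed in this teaser. As background for "learning the precision matrix", the sparse component of such sparse-plus-low-rank decompositions is closely related to the graphical lasso, where zeros of the estimated precision matrix encode conditional independencies. A minimal scikit-learn sketch on synthetic chain data (the data-generating process and `alpha` are illustrative, and this is not ALVGL itself):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Synthetic chain x0 -> x1 -> x2: x0 and x2 are conditionally
# independent given x1, so the (0, 2) precision entry should vanish.
n = 2000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(scale=0.5, size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x0, x1, x2])

# L1-penalized maximum-likelihood estimate of the precision matrix.
P = GraphicalLasso(alpha=0.05).fit(X).precision_
print(np.round(P, 2))   # off-chain entry (0, 2) is driven toward 0
```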
- Problem-Parameter-Free Decentralized Bilevel Optimization [31.15538292038612]
Decentralized bilevel optimization has garnered significant attention due to its critical role in solving large-scale machine learning problems. In this paper, we propose AdaSDBO, a fully problem-parameter-free algorithm for decentralized bilevel optimization with a single-loop structure. Through rigorous theoretical analysis, we establish that AdaSDBO achieves a convergence rate of $\widetilde{\mathcal{O}}\left(\frac{1}{T}\right)$, matching the performance of well-tuned state-of-the-art methods up to polylogarithmic factors.
arXiv Detail & Related papers (2025-10-28T10:50:04Z)
- Every Attention Matters: An Efficient Hybrid Architecture for Long-Context Reasoning [73.10669391954801]
We present the Ring-linear model series, specifically including Ring-mini-linear-2.0 and Ring-flash-linear-2.0. Both models adopt a hybrid architecture that effectively integrates linear attention and softmax attention (a background sketch of the two attention forms follows this entry). Compared to a 32 billion parameter dense model, this series reduces inference cost to 1/10, and compared to the original Ring series, the cost is also reduced by over 50%.
arXiv Detail & Related papers (2025-10-22T07:59:38Z)
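The summary does not spell out what "linear attention" means here. As general background (not the Ring-linear implementation), linear attention replaces the softmax kernel with a feature map so key-value statistics can be accumulated once, trading the O(n²) score matrix for O(n) updates. A single-head, unmasked numpy sketch, using the elu(x)+1 feature map popularized by Katharopoulos et al. (2020):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                      # sequence length, head dimension
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))

def softmax_attention(Q, K, V):
    """Standard attention: materializes an O(n^2) score matrix."""
    S = Q @ K.T / np.sqrt(Q.shape[1])
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    return A @ V

def linear_attention(Q, K, V):
    """Kernelized attention: phi(Q) (phi(K)^T V), linear in length."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                  # (d, d) summary, built once
    Z = Qp @ Kp.sum(axis=0)        # per-query normalizer
    return (Qp @ KV) / Z[:, None]

print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

The two functions are not numerically equivalent by design; hybrids like the one this entry describes interleave layers of each to balance quality and cost.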
- Memory Enhanced Fractional-Order Dung Beetle Optimization for Photovoltaic Parameter Identification [8.924286864388922]
This paper proposes a Memory Enhanced Fractional-Order Dung Beetle Optimization (MFO-DBO) algorithm that integrates three coordinated strategies. It consistently outperforms advanced DBO variants, FO-based algorithms, enhanced classical algorithms, and recent metaheuristics in terms of accuracy, robustness, and convergence speed.
arXiv Detail & Related papers (2025-08-09T05:48:30Z)
- A Gradient Meta-Learning Joint Optimization for Beamforming and Antenna Position in Pinching-Antenna Systems [63.213207442368294]
We consider a novel optimization design for multi-waveguide pinching-antenna systems. The proposed GML-JO algorithm is robust to different choices and achieves better performance compared with existing optimization methods.
arXiv Detail & Related papers (2025-06-14T17:35:27Z)
- KerZOO: Kernel Function Informed Zeroth-Order Optimization for Accurate and Accelerated LLM Fine-Tuning [15.81250204481401]
We introduce a kernel-function-based ZO framework aimed at mitigating gradient estimation bias. KerZOO achieves comparable or superior performance to existing ZO baselines. We show that the kernel function is an effective avenue for reducing estimation bias in ZO methods.
arXiv Detail & Related papers (2025-05-24T21:56:03Z)
- Joint Transmit and Pinching Beamforming for Pinching Antenna Systems (PASS): Optimization-Based or Learning-Based? [89.05848771674773]
A novel pinching antenna system (PASS)-enabled downlink multi-user multiple-input single-output (MISO) framework is proposed. It consists of multiple waveguides, which are equipped with numerous low-cost antennas, named pinching antennas (PAs). The positions of the PAs can be reconfigured to adjust both the large-scale path losses and the signal phases.
arXiv Detail & Related papers (2025-02-12T18:54:10Z)
- DAPO-QAOA: An algorithm for solving combinatorial optimization problems by dynamically constructing phase operators [3.146466735747661]
We introduce a Dynamic Adaptive Phase Operator (DAPO) algorithm, which constructs phase operators based on the output of previous layers and a neighborhood search approach. DAPO achieves higher approximation ratios and significantly reduces the number of two-qubit RZZ gates, especially in dense graphs. Compared to vanilla QAOA, DAPO uses only 66% of the RZZ gates at the same depth while delivering better results.
arXiv Detail & Related papers (2025-02-06T14:24:05Z)
- Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing [53.77822620185878]
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs. We develop "BayesMulti", a training strategy utilizing BO-guided noise injection to improve the resistance of analog DNNs to memristor imperfections. Our integrated approach enables the use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
arXiv Detail & Related papers (2024-12-03T19:20:08Z)
- HELENE: Hessian Layer-wise Clipping and Gradient Annealing for Accelerating Fine-tuning LLM with Zeroth-order Optimization [18.00873866263434]
Fine-tuning large language models (LLMs) poses significant memory challenges.
Recent work, MeZO, addresses this issue using a zeroth-order (ZO) optimization method.
We introduce HELENE, a novel, scalable, and memory-efficient pre-conditioner.
arXiv Detail & Related papers (2024-11-16T04:27:22Z)
- Optimizing Container Loading and Unloading through Dual-Cycling and Dockyard Rehandle Reduction Using a Hybrid Genetic Algorithm [1.350722707557839]
This paper addresses the NP-hard problem of optimizing container handling at ports by integrating Quay Crane Dual-Cycling (QCDC) and dockyard rehandle minimization. We propose the Quay Crane Dual Cycle - Dockyard Rehandle Genetic Algorithm (QCDC-DR-GA), a hybrid Genetic Algorithm (GA) that holistically optimizes both aspects. Experiments on various ship sizes demonstrate that QCDC-DR-GA reduces total operation time by 15-20% for large ships compared to existing methods.
arXiv Detail & Related papers (2024-06-12T16:47:45Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters (a minimal sketch of such a zeroth-order estimate follows this entry).
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
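Several entries above (KerZOO, HELENE/MeZO, HOZOG) rest on zeroth-order gradient estimation. As background rather than any one paper's method, the classical two-point estimator probes the black-box loss along random directions and averages finite differences; the quadratic `loss`, the dimension, the smoothing radius `mu`, and the step size below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    """Placeholder black-box objective (no analytic gradient used)."""
    return np.sum((theta - 1.0) ** 2)

def zo_gradient(theta, n_dirs=32, mu=1e-3):
    """Average two-point zeroth-order estimate of grad loss(theta)."""
    g = np.zeros(theta.size)
    for _ in range(n_dirs):
        u = rng.normal(size=theta.size)
        g += (loss(theta + mu * u) - loss(theta - mu * u)) / (2 * mu) * u
    return g / n_dirs

theta = np.zeros(8)
for step in range(200):
    theta -= 0.05 * zo_gradient(theta)
print("final loss:", loss(theta))   # approaches 0 as theta -> 1
```

Averaging over more directions reduces the variance of the estimate, which is the estimation-quality bottleneck that KerZOO's kernel weighting and HELENE's preconditioning aim to improve.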
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)