Floorplanning of VLSI by Mixed-Variable Optimization
- URL: http://arxiv.org/abs/2401.15317v1
- Date: Sat, 27 Jan 2024 06:34:16 GMT
- Title: Floorplanning of VLSI by Mixed-Variable Optimization
- Authors: Jian Sun and Huabin Cheng and Jian Wu and Zhanyang Zhu and Yu Chen
- Abstract summary: This paper proposes memetic algorithms to solve mixed-variable floorplanning problems.
The proposed algorithms are superior to some celebrated B*-tree based floorplanning algorithms.
- Score: 42.82770651937298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: By formulating the floorplanning of VLSI as a mixed-variable optimization
problem, this paper proposes to solve it by memetic algorithms, where the
discrete orientation variables are addressed by the distribution evolutionary
algorithm based on a population of probability model (DEA-PPM), and the
continuous coordinate variables are optimized by the conjugate sub-gradient
algorithm (CSA). Accordingly, the fixed-outline floorplanning algorithm based
on CSA and DEA-PPM (FFA-CD) and the floorplanning algorithm with golden section
strategy (FA-GSS) are proposed for the floorplanning problems with and without
fixed-outline constraint. FFA-CD is committed to optimizing wirelength targets
within a fixed outline. FA-GSS uses the golden section strategy to optimize
both wirelength and area targets. The CSA is used to solve the proposed
non-smooth optimization model, and the DEA-PPM is used to explore the module
rotation scheme to enhance the flexibility of the algorithm. Numerical
experiments on GSRC test circuits show that the proposed algorithms are
superior to some celebrated B*-tree based floorplanning algorithms, and are
expected to be applied to large-scale floorplanning problems due to their low
time complexity.
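FA-GSS relies on the golden section strategy to trade off wirelength and area targets. The paper's exact bracketing scheme is not reproduced here, but the classical golden-section search it builds on can be sketched as follows (the function names and the example objective are illustrative, not from the paper):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function f on [a, b] by golden-section search.

    Each iteration shrinks the bracket by the inverse golden ratio,
    reusing one interior evaluation point per step."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            # Minimum lies in [a, d]; old c becomes the new d.
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # Minimum lies in [c, b]; old d becomes the new c.
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: minimize (x - 2)^2 on [0, 5]; the minimizer is x = 2.
x_star = golden_section_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

In FA-GSS the bracketed quantity would be an outline parameter rather than a scalar coordinate, but the contraction logic is the same.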
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC, and AML.
This paper proposes algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback [106.63518036538163]
We present a novel unified bilevel optimization-based framework, PARL, formulated to address the recently highlighted critical issue of policy alignment in reinforcement learning.
Our framework addresses these concerns by explicitly parameterizing the distribution of the upper alignment objective (reward design) by the lower optimal variable.
Our empirical results substantiate that the proposed PARL can address the alignment concerns in RL by showing significant improvements.
arXiv Detail & Related papers (2023-08-03T18:03:44Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
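The baseline being accelerated is the classical cutting-plane loop: repeatedly minimize a piecewise-linear lower model built from subgradient cuts, then add a new cut at the model's minimizer. A textbook sketch of Kelley's method on a one-dimensional convex problem (not the paper's RL-surrogate variant, and with an illustrative grid-based model minimization) might look like:

```python
def kelley_cutting_plane(f, grad, lo, hi, iters=50):
    """Kelley's cutting-plane method: minimize a convex f on [lo, hi]
    by iteratively tightening a piecewise-linear lower model.

    Each cut is the tangent line f(x_k) + g_k * (x - x_k); the model is
    the pointwise max of all cuts, minimized here over a fine grid for
    simplicity (a real solver would use an LP)."""
    cuts = []  # list of (f(x_k), g_k, x_k)
    x = (lo + hi) / 2
    for _ in range(iters):
        cuts.append((f(x), grad(x), x))
        best_x, best_v = lo, float("inf")
        n = 1000
        for i in range(n + 1):
            cand = lo + (hi - lo) * i / n
            v = max(fk + gk * (cand - xk) for fk, gk, xk in cuts)
            if v < best_v:
                best_x, best_v = cand, v
        x = best_x  # next query point: minimizer of the current model
    return x

# Example: minimize x^2 on [-2, 3]; the iterates close in on x = 0.
x_min = kelley_cutting_plane(lambda x: x * x, lambda x: 2 * x, -2.0, 3.0)
```

The expense of generating and re-solving over many cuts is exactly what an RL surrogate would aim to reduce on large discrete instances.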
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Adaptive Stochastic Optimisation of Nonconvex Composite Objectives [2.1700203922407493]
We propose and analyse a family of generalised composite mirror descent algorithms.
With adaptive step sizes, the proposed algorithms converge without requiring prior knowledge of the problem.
We exploit the low-dimensional structure of the decision sets for high-dimensional problems.
arXiv Detail & Related papers (2022-11-21T18:31:43Z) - Asynchronous Distributed Reinforcement Learning for LQR Control via Zeroth-Order Block Coordinate Descent [7.6860514640178]
We propose a novel zeroth-order optimization algorithm for distributed reinforcement learning.
It allows each agent to estimate its local gradient by cost evaluation independently, without use of any consensus protocol.
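Estimating a gradient purely from cost evaluations is typically done with randomized finite differences. A standard two-point Gaussian-smoothing estimator, sketched here for a single agent (the paper's distributed block coordinate scheme is not reproduced, and the function names are illustrative), looks like:

```python
import random

def zeroth_order_grad(cost, x, mu=1e-3, samples=2000):
    """Two-point zeroth-order gradient estimate.

    Averages (cost(x + mu*u) - cost(x)) / mu * u over random Gaussian
    directions u, which approaches the true gradient as mu -> 0 and the
    sample count grows. Only cost evaluations are needed, no analytic
    derivatives."""
    n = len(x)
    g = [0.0] * n
    base = cost(x)
    for _ in range(samples):
        u = [random.gauss(0, 1) for _ in range(n)]
        x_pert = [xi + mu * ui for xi, ui in zip(x, u)]
        scale = (cost(x_pert) - base) / mu
        for i in range(n):
            g[i] += scale * u[i] / samples
    return g

# Example: the gradient of f(x) = x1^2 + x2^2 at (1, -2) is (2, -4).
random.seed(0)
g = zeroth_order_grad(lambda v: sum(xi * xi for xi in v), [1.0, -2.0])
```

Because each estimate needs only local cost evaluations, agents can run such updates asynchronously without a consensus protocol, which is the setting the paper targets.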
arXiv Detail & Related papers (2021-07-26T18:11:07Z) - Deep Reinforcement Learning for Field Development Optimization [0.0]
In this work, the goal is to apply convolutional neural network-based (CNN) deep reinforcement learning (DRL) algorithms to the field development optimization problem.
The proximal policy optimization (PPO) algorithm is considered with two CNN architectures of varying number of layers and composition.
Both networks obtained policies that provide satisfactory results when compared to a hybrid particle swarm optimization - mesh adaptive direct search (PSO-MADS) algorithm.
arXiv Detail & Related papers (2020-08-05T06:26:13Z) - A Gradient-Aware Search Algorithm for Constrained Markov Decision
Processes [9.728259735794987]
We first prove that the optimization objective in the dual linear program of a finite CMDP is a piece-wise linear convex function with respect to the Lagrange penalty multipliers.
We propose a novel two-level Gradient-Aware Search (GAS) algorithm which exploits the PWLC structure to find the optimal state-value function and Lagrange penalty multipliers of a finite CMDP.
arXiv Detail & Related papers (2020-05-07T19:38:09Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.