Novelty-Driven Binary Particle Swarm Optimisation for Truss Optimisation Problems
- URL: http://arxiv.org/abs/2112.07875v1
- Date: Wed, 15 Dec 2021 04:30:30 GMT
- Title: Novelty-Driven Binary Particle Swarm Optimisation for Truss Optimisation Problems
- Authors: Hirad Assimi, Frank Neumann, Markus Wagner, Xiaodong Li
- Abstract summary: Bilevel optimisation has been successfully applied to truss optimisation.
We introduce exact enumeration to rigorously analyse the topology search space.
We also propose novelty-driven binary particle swarm optimisation for bigger problems.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Topology optimisation of trusses can be formulated as a combinatorial and
multi-modal problem in which locating distinct optimal designs allows
practitioners to choose the best design based on their preferences. Bilevel
optimisation has been successfully applied to truss optimisation to consider
topology and sizing in upper and lower levels, respectively. We introduce exact
enumeration to rigorously analyse the topology search space and remove
randomness for small problems. We also propose novelty-driven binary particle
swarm optimisation for bigger problems to discover new designs at the upper
level by maximising novelty. For the lower level, we employ a reliable
evolutionary optimiser to tackle the layout configuration aspect of the
problem. We consider truss optimisation problem instances where designers need
to select the size of bars from a discrete set with respect to practice code
constraints. Our experimental investigations show that our approach outperforms
the current state-of-the-art methods and obtains multiple high-quality
solutions.
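As a minimal, illustrative sketch of the upper-level idea (not the authors' exact algorithm), a binary PSO can be driven by novelty rather than objective value: each particle encodes a topology as a bit vector over candidate bars, and a particle's score is its average Hamming distance to its nearest neighbours in an archive of visited designs. The lower-level sizing optimisation is omitted here, and all parameter values are hypothetical.

```python
import math
import random

def novelty(design, archive, k=3):
    """Novelty = mean Hamming distance to the k nearest archived designs."""
    if not archive:
        return float(len(design))
    dists = sorted(sum(a != b for a, b in zip(design, other)) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def binary_pso_novelty(n_bits=10, n_particles=8, iters=30, seed=0):
    """Novelty-driven binary PSO sketch: maximise novelty, archive all designs."""
    rng = random.Random(seed)
    swarm = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in swarm]
    pbest_nov = [0.0] * n_particles
    archive = []
    for _ in range(iters):
        for i, p in enumerate(swarm):
            nov = novelty(p, archive)
            if nov >= pbest_nov[i]:
                pbest_nov[i], pbest[i] = nov, p[:]
            archive.append(p[:])
        # Guide the swarm towards the most novel personal best.
        gbest = max(pbest, key=lambda d: novelty(d, archive))
        for i in range(n_particles):
            for j in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] += 2.0 * r1 * (pbest[i][j] - swarm[i][j]) \
                           + 2.0 * r2 * (gbest[j] - swarm[i][j])
                # Sigmoid transfer function maps velocity to bit probability.
                prob = 1.0 / (1.0 + math.exp(-vel[i][j]))
                swarm[i][j] = 1 if rng.random() < prob else 0
    return archive
```

In a full bilevel setup, each archived topology would additionally be passed to a lower-level sizing optimiser before being accepted as a design.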
Related papers
- Optimizing Optimizers for Fast Gradient-Based Learning [53.81268610971847]
We lay the theoretical foundation for automating design in gradient-based learning. By treating gradient loss signals as a function that translates to parameter motions, the problem reduces to a family of convex optimization problems.
arXiv Detail & Related papers (2025-12-06T09:50:41Z)
- Zeroth-Order Optimization Finds Flat Minima [51.41529512093436]
We show that zeroth-order optimization with the standard two-point estimator favors solutions with a small trace of the Hessian. We further provide convergence rates of zeroth-order optimization to approximate flat minima for convex and sufficiently smooth functions.
arXiv Detail & Related papers (2025-06-05T17:59:09Z)
- Improved Approximation Algorithms for Low-Rank Problems Using Semidefinite Optimization [2.1485350418225244]
We construct an analogous relax-then-sample strategy for low-rank optimization problems.
We derive a semidefinite relaxation and a randomized rounding scheme, which obtains near-optimal solutions.
We numerically illustrate the effectiveness and scalability of our relaxation and our sampling scheme.
arXiv Detail & Related papers (2025-01-06T11:31:41Z)
- Bayesian Optimization of Bilevel Problems [0.0]
This paper focuses on bilevel optimization where both upper-level and lower-level functions are black boxes and expensive to evaluate.
We propose a Bayesian Optimization framework that models the upper and lower-level functions as Gaussian processes over the combined space of upper and lower-level decisions.
Our experimental results demonstrate that the proposed algorithm is highly sample-efficient and outperforms existing methods in finding high-quality solutions.
arXiv Detail & Related papers (2024-12-24T15:55:30Z)
- Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity [59.75300530380427]
We consider the problem of optimizing second-order smooth and strongly convex functions where the algorithm is only accessible to noisy evaluations of the objective function it queries.
We provide the first tight characterization for the rate of the minimax simple regret by developing matching upper and lower bounds.
arXiv Detail & Related papers (2024-06-28T02:56:22Z)
- Rapid quantum approaches for combinatorial optimisation inspired by optimal state-transfer [3.591122855617648]
We propose a new design to tackle optimisation problems, inspired by Hamiltonians for optimal state-transfer.
We provide numerical evidence of the success of this new design.
arXiv Detail & Related papers (2023-01-17T12:45:09Z)
- Nesterov Meets Optimism: Rate-Optimal Separable Minimax Optimization [108.35402316802765]
We propose a new first-order optimization algorithm, Accelerated Gradient-Optimistic Gradient (AG-OG) Ascent.
We show that AG-OG achieves the optimal convergence rate (up to a constant) for a variety of settings.
We further extend our algorithm and achieve the optimal convergence rate in both the bi-SC-SC and bi-C-SC settings.
arXiv Detail & Related papers (2022-10-31T17:59:29Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Automated Circuit Sizing with Multi-objective Optimization based on Differential Evolution and Bayesian Inference [1.1579778934294358]
We introduce a design optimization method based on Generalized Differential Evolution 3 (GDE3) and Gaussian Processes (GPs).
The proposed method is able to perform sizing for complex circuits with a large number of design variables and many conflicting objectives to be optimized.
We evaluate the introduced method on two voltage regulators showing different levels of complexity.
arXiv Detail & Related papers (2022-06-06T06:48:45Z)
- On Constraints in First-Order Optimization: A View from Non-Smooth Dynamical Systems [99.59934203759754]
We introduce a class of first-order methods for smooth constrained optimization.
A distinctive feature of our approach is that projections onto, and optimizations over, the entire feasible set are avoided.
The resulting algorithmic procedure is simple to implement even when constraints are nonlinear.
arXiv Detail & Related papers (2021-07-17T11:45:13Z)
- Constraint Programming to Discover One-Flip Local Optima of Quadratic Unconstrained Binary Optimization Problems [0.5439020425819]
QUBO annealers, as well as other solution approaches, benefit from starting with a diverse set of locally optimal solutions.
This paper presents a new method for generating a set of one-flip local optima leveraging constraint programming.
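For context, a one-flip local optimum of a QUBO instance is a bit vector whose objective value $x^\top Q x$ cannot be lowered by flipping any single bit. A minimal check (illustrative only, not the paper's constraint-programming method):

```python
def qubo_value(x, Q):
    """Objective x^T Q x for a bit vector x and square matrix Q (list of lists)."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def is_one_flip_local_optimum(x, Q):
    """True if no single-bit flip of x lowers the QUBO objective."""
    base = qubo_value(x, Q)
    for i in range(len(x)):
        y = list(x)
        y[i] = 1 - y[i]  # flip bit i
        if qubo_value(y, Q) < base:
            return False
    return True
```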
arXiv Detail & Related papers (2021-04-04T22:55:25Z)
- Intermediate Layer Optimization for Inverse Problems using Deep Generative Models [86.29330440222199]
Intermediate Layer Optimization (ILO) is a novel optimization algorithm for solving inverse problems with deep generative models.
We empirically show that our approach outperforms state-of-the-art methods introduced in StyleGAN-2 and PULSE for a wide range of inverse problems.
arXiv Detail & Related papers (2021-02-15T06:52:22Z)
- Mixed-Projection Conic Optimization: A New Paradigm for Modeling Rank Constraints [3.179831861897336]
We provide a framework for solving low-rank optimization problems to certifiable optimality.
Our framework also provides near-optimal solutions through rounding and local search techniques.
arXiv Detail & Related papers (2020-09-22T08:59:06Z)
- Adaptive Sampling of Pareto Frontiers with Binary Constraints Using Regression and Classification [0.0]
We present a novel adaptive optimization algorithm for black-box multi-objective optimization problems with binary constraints.
Our method is based on probabilistic regression and classification models, which act as a surrogate for the optimization goals.
We also present a novel ellipsoid truncation method to speed up the expected hypervolume calculation.
arXiv Detail & Related papers (2020-08-27T09:15:02Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.