A Lattice-based Method for Optimization in Continuous Spaces with Genetic Algorithms
- URL: http://arxiv.org/abs/2410.12188v1
- Date: Wed, 16 Oct 2024 03:14:09 GMT
- Title: A Lattice-based Method for Optimization in Continuous Spaces with Genetic Algorithms
- Authors: Cameron D. Harris, Kevin B. Schroeder, Jonathan Black
- Abstract summary: This work presents a novel lattice-based methodology for incorporating multidimensional constraints into continuous decision variables.
The proposed approach consolidates established transcription techniques for crossover of continuous decision variables.
It aims to leverage domain knowledge and guide the search process towards feasible regions of the design space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents a novel lattice-based methodology for incorporating multidimensional constraints into continuous decision variables within a genetic algorithm (GA) framework. The proposed approach consolidates established transcription techniques for crossover of continuous decision variables, aiming to leverage domain knowledge and guide the search process towards feasible regions of the design space. This work offers a robust and general purpose lattice-based GA that is applicable to a broad range of optimization problems. Monte Carlo analysis demonstrates that lattice-based methods find solutions two orders of magnitude closer to optima in fewer generations. The effectiveness of the lattice-based approach is showcased through two illustrative multi-objective design problems: (1) optimal telescope placement for astrophotography and (2) optimal design of a satellite constellation for maximizing ground station access. The optimal telescope placement example shows that lattice-based methods converge to the Pareto front in 15% fewer generations than traditional methods. The orbit design example shows that lattice-based methods discover an order of magnitude more Pareto-optimal solutions than traditional methods in a highly constrained design space. Overall, the results show that the lattice-based method exhibits enhanced exploration capabilities, traversing the solution space more comprehensively and achieving faster convergence compared to conventional GAs.
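The abstract does not include reference code; the following is a minimal, hypothetical sketch of the core idea of lattice-based crossover, in which continuous parent variables are snapped onto a user-defined lattice before recombination so that offspring remain on feasible grid points. Function and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def snap_to_lattice(x, lower, step):
    """Snap a continuous vector onto a regular lattice with given lower bound and spacing."""
    return lower + np.round((x - lower) / step) * step

def lattice_crossover(parent_a, parent_b, lower, step, rng):
    """Hypothetical uniform crossover performed on lattice-aligned copies of two continuous parents."""
    a = snap_to_lattice(parent_a, lower, step)
    b = snap_to_lattice(parent_b, lower, step)
    mask = rng.random(a.shape) < 0.5          # inherit each gene from either parent
    return np.where(mask, a, b)

rng = np.random.default_rng(0)
lower = np.zeros(3)                            # lower bound of each decision variable
step = np.array([0.5, 0.25, 1.0])              # lattice spacing per dimension
p1, p2 = rng.uniform(0, 10, 3), rng.uniform(0, 10, 3)
print(lattice_crossover(p1, p2, lower, step, rng))
```

Because every offspring gene lies on the lattice by construction, domain knowledge about feasible values can be encoded directly in the lattice spacing.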
Related papers
- A Gradient Meta-Learning Joint Optimization for Beamforming and Antenna Position in Pinching-Antenna Systems [63.213207442368294]
We consider a novel optimization design for multi-waveguide pinching-antenna systems. The proposed GML-JO algorithm is robust to different choices and achieves better performance compared with existing optimization methods.
arXiv Detail & Related papers (2025-06-14T17:35:27Z) - Gradient Methods with Online Scaling Part I. Theoretical Foundations [19.218484733179356]
This paper establishes the theoretical foundations of online scaled gradient methods (OSGM). OSGM quantifies the effectiveness of a stepsize by a feedback function motivated by a convergence measure and uses the feedback to adjust the stepsize through an online learning algorithm. OSGM yields desirable convergence guarantees on smooth convex problems, including 1) trajectory-dependent global convergence on smooth convex objectives; 2) an improved complexity result on smooth strongly convex problems; and 3) local superlinear convergence.
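OSGM's actual feedback function and online learner are not reproduced in this summary; the sketch below only illustrates the general pattern of adjusting a stepsize online from per-iteration feedback, here assumed to be a hypergradient-style inner product of consecutive gradients, which may differ from the paper's construction.

```python
import numpy as np

# Illustrative only: adapt the stepsize online using the inner product of consecutive
# gradients as a feedback signal, on a toy quadratic objective 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
grad = lambda z: A @ z - b

x = np.zeros(2)
eta, lr_eta = 0.05, 0.005
g_prev = grad(x)
for _ in range(200):
    x = x - eta * g_prev                                               # gradient step with current stepsize
    g = grad(x)
    eta = float(np.clip(eta + lr_eta * (g @ g_prev), 1e-4, 0.19))      # feedback update, clipped for stability
    g_prev = g
print(x, eta)                                                          # x approaches A^{-1} b = [1.0, 0.1]
```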
arXiv Detail & Related papers (2025-05-29T04:35:21Z) - Differential Evolution for Grassmann Manifold Optimization: A Projection Approach [0.0]
We propose a novel evolutionary algorithm for real-valued objective functions defined on the Grassmann manifold Gr(k,n).
Our approach incorporates a projection that maps vectors onto the manifold via decomposition.
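The summary does not specify which decomposition is used; a common choice, assumed here purely for illustration, is to retract an ambient-space trial vector onto Gr(k,n) with a QR decomposition inside a differential-evolution step.

```python
import numpy as np

def project_to_grassmann(M):
    """Return an orthonormal n x k basis representing the subspace spanned by M's columns."""
    Q, _ = np.linalg.qr(M)        # reduced QR gives an orthonormal basis
    return Q

def de_trial(pop, i, F, rng):
    """One DE/rand/1 trial vector for individual i, projected back onto Gr(k, n)."""
    a, b, c = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
    mutant = pop[a] + F * (pop[b] - pop[c])   # mutation in the ambient matrix space
    return project_to_grassmann(mutant)        # retraction onto the manifold

rng = np.random.default_rng(1)
n, k, pop_size = 6, 2, 8
pop = [project_to_grassmann(rng.standard_normal((n, k))) for _ in range(pop_size)]
trial = de_trial(pop, 0, F=0.7, rng=rng)
print(trial.T @ trial)                         # ~ identity, so the trial point lies on the manifold
```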
arXiv Detail & Related papers (2025-03-27T21:04:19Z) - A Quantum Genetic Algorithm Framework for the MaxCut Problem [49.59986385400411]
The proposed method introduces a Quantum Genetic Algorithm (QGA) using a Grover-based evolutionary framework and divide-and-conquer principles.
On complete graphs, the proposed method consistently achieves the true optimal MaxCut values, outperforming the Semidefinite Programming (SDP) approach.
On Erdős-Rényi random graphs, the QGA demonstrates competitive performance, achieving median solutions within 92-96% of the SDP results.
arXiv Detail & Related papers (2025-01-02T05:06:16Z) - A Learn-to-Optimize Approach for Coordinate-Wise Step Sizes for Quasi-Newton Methods [9.82454981262489]
We introduce a learn-to-optimize (L2O) method that employs LSTM-based networks to learn optimal step sizes. Our approach achieves substantial improvements over scalar step size methods and hypergradient descent-based methods.
arXiv Detail & Related papers (2024-11-25T07:13:59Z) - Integrating Chaotic Evolutionary and Local Search Techniques in Decision Space for Enhanced Evolutionary Multi-Objective Optimization [1.8130068086063336]
This paper focuses on both Single-Objective Multi-Modal Optimization (SOMMOP) and Multi-Objective Optimization (MOO).
In SOMMOP, we integrate chaotic evolution with niching techniques, as well as Persistence-Based Clustering combined with Gaussian mutation.
For MOO, we extend these methods into a comprehensive framework that incorporates Uncertainty-Based Selection and Adaptive Tuning, and introduces a radius (R) concept in deterministic crowding.
arXiv Detail & Related papers (2024-11-12T15:18:48Z) - $ψ$DAG: Projected Stochastic Approximation Iteration for DAG Structure Learning [6.612096312467342]
Learning the structure of Directed Acyclic Graphs (DAGs) presents a significant challenge due to the vast search space of possible graphs, which scales with the number of nodes.
Recent advancements have redefined this problem as a continuous optimization task by incorporating differentiable acyclicity constraints.
We present a novel framework for learning DAGs, employing a Stochastic Approximation approach integrated with Stochastic Gradient Descent (SGD)-based optimization techniques.
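The summary mentions differentiable acyclicity constraints; a widely used form is the NOTEARS-style matrix-exponential penalty, which may differ from ψDAG's exact construction and is sketched below only for context.

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W):
    """NOTEARS-style differentiable acyclicity measure: h(W) = tr(exp(W ∘ W)) - d.

    h(W) == 0 exactly when the weighted adjacency matrix W encodes a DAG (no directed cycles).
    """
    d = W.shape[0]
    return np.trace(expm(W * W)) - d          # W * W is the elementwise (Hadamard) square

W_dag = np.array([[0.0, 1.5], [0.0, 0.0]])    # edge 0 -> 1 only: acyclic
W_cyc = np.array([[0.0, 1.5], [0.8, 0.0]])    # edges 0 <-> 1: contains a cycle
print(acyclicity(W_dag), acyclicity(W_cyc))   # ~0 for the DAG, strictly positive with a cycle
```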
arXiv Detail & Related papers (2024-10-31T12:13:11Z) - Analyzing and Overcoming Local Optima in Complex Multi-Objective Optimization by Decomposition-Based Evolutionary Algorithms [5.153202024713228]
Decomposition-based Multi-Objective Evolutionary Algorithms (MOEA/Ds) often converge to local optima, limiting solution diversity.
We introduce an innovative RP selection strategy, the Vector-Guided Weight-Hybrid method, designed to overcome the local optima issue.
Our research comprises two main experimental components: an ablation study involving 14 algorithms within the MOEA/D framework from 2014 to 2022 to validate our theoretical framework, and a series of empirical tests to evaluate the effectiveness of our proposed method against both traditional and cutting-edge alternatives.
arXiv Detail & Related papers (2024-04-12T14:29:45Z) - Double Duality: Variational Primal-Dual Policy Optimization for Constrained Reinforcement Learning [132.7040981721302]
We study the constrained convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
arXiv Detail & Related papers (2024-02-16T16:35:18Z) - Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB, while an additional exploration step facilitates their convergence.
arXiv Detail & Related papers (2024-01-30T14:16:06Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Acceleration Methods [57.202881673406324]
We first use quadratic optimization problems to introduce two key families of acceleration methods.
We discuss momentum methods in detail, starting with the seminal work of Nesterov.
We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates.
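For context on the momentum methods the survey covers, here is a standard textbook implementation of Nesterov's accelerated gradient method on a smooth convex quadratic; it is not code from the survey itself.

```python
import numpy as np

def nesterov(grad, x0, L, iters=200):
    """Nesterov's accelerated gradient method for an L-smooth convex objective."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - (1.0 / L) * grad(y)                       # gradient step at the extrapolated point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)       # momentum extrapolation
        x, t = x_next, t_next
    return x

A = np.diag([1.0, 50.0])
b = np.array([1.0, 2.0])
grad = lambda z: A @ z - b                                     # objective: 0.5 z^T A z - b^T z
print(nesterov(grad, np.zeros(2), L=50.0))                     # converges to A^{-1} b = [1.0, 0.04]
```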
arXiv Detail & Related papers (2021-01-23T17:58:25Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that, with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
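The paper's coordinate importance sampling design is not reproduced here; the sketch below shows only the basic ZO building block it refines, a two-point finite-difference gradient estimate queried on a random subset of coordinates.

```python
import numpy as np

def zo_coordinate_grad(f, x, idx, mu=1e-4):
    """Zeroth-order (finite-difference) gradient estimate on a subset of coordinates.

    Each selected coordinate costs two function queries; unselected coordinates stay zero.
    """
    g = np.zeros_like(x)
    for i in idx:
        e = np.zeros_like(x)
        e[i] = mu
        g[i] = (f(x + e) - f(x - e)) / (2.0 * mu)
    return g

rng = np.random.default_rng(2)
f = lambda z: np.sum((z - 1.0) ** 2)             # black-box objective, gradients unavailable
x = rng.standard_normal(10)
for _ in range(300):
    idx = rng.choice(10, size=3, replace=False)  # sample a few coordinates per iteration
    x -= 0.1 * zo_coordinate_grad(f, x, idx)
print(np.round(x, 3))                            # approaches the all-ones minimizer
```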
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z) - Orthant Based Proximal Stochastic Gradient Method for $\ell_1$-Regularized Optimization [35.236001071182855]
Sparsity-inducing regularization problems are ubiquitous in machine learning applications.
We present a novel method, the Orthant Based Proximal Stochastic Gradient Method (OBProx-SG).
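OBProx-SG's orthant-based step is not detailed in this summary; for context, a plain proximal stochastic gradient iteration for $\ell_1$-regularized least squares uses the soft-thresholding operator, sketched below as a generic baseline rather than the paper's method.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)              # sparse ground truth
y = A @ x_true

lam, eta = 0.1, 0.05
x = np.zeros(50)
for _ in range(1000):
    rows = rng.integers(0, 200, size=32)          # mini-batch of rows
    g = A[rows].T @ (A[rows] @ x - y[rows]) / len(rows)
    x = soft_threshold(x - eta * g, eta * lam)    # proximal SGD step
print(np.count_nonzero(np.abs(x) > 1e-2))         # the estimate stays sparse
```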
arXiv Detail & Related papers (2020-04-07T18:23:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.