An Eigenspace Divide-and-Conquer Approach for Large-Scale Optimization
- URL: http://arxiv.org/abs/2004.02115v1
- Date: Sun, 5 Apr 2020 07:29:44 GMT
- Title: An Eigenspace Divide-and-Conquer Approach for Large-Scale Optimization
- Authors: Zhigang Ren, Yongsheng Liang, Muyi Wang, Yang Yang, An Chen
- Abstract summary: Divide-and-conquer (DC) evolutionary algorithms have achieved notable success in dealing with large-scale optimization problems.
This study proposes an eigenspace divide-and-conquer (EDC) approach.
- Score: 9.501723707464432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Divide-and-conquer-based (DC-based) evolutionary algorithms (EAs) have
achieved notable success in dealing with large-scale optimization problems
(LSOPs). However, the appealing performance of this type of algorithm
generally requires a high-precision decomposition of the optimization problem,
which is still a challenging task for existing decomposition methods. This
study attempts to address the above issue from a different perspective and
proposes an eigenspace divide-and-conquer (EDC) approach. Different from
existing DC-based algorithms that perform decomposition and optimization in the
original decision space, EDC first establishes an eigenspace by conducting
singular value decomposition on a set of high-quality solutions selected from
recent generations. Then it transforms the optimization problem into the
eigenspace, and thus significantly weakens the dependencies among the
corresponding eigenvariables. Accordingly, these eigenvariables can be
efficiently grouped by a simple random strategy and each of the resulting
subproblems can be addressed more easily by a traditional EA. To verify the
efficiency of EDC, comprehensive experimental studies were conducted on two
sets of benchmark functions. Experimental results indicate that EDC is robust
to its parameters and has good scalability to the problem dimension. The
comparison with several state-of-the-art algorithms further confirms that EDC
is highly competitive and performs better on complicated LSOPs.
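
The pipeline the abstract describes (SVD over a set of good solutions, transformation into the eigenspace, random grouping of the eigenvariables) can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming, not code from the paper:

```python
import numpy as np

def build_eigenspace(solutions):
    # solutions: (m, n) array, each row a high-quality solution
    # selected from recent generations.
    mean = solutions.mean(axis=0)
    # SVD of the centered set; rows of vt are the principal
    # directions (eigenvectors of the sample covariance).
    _, _, vt = np.linalg.svd(solutions - mean, full_matrices=True)
    return mean, vt.T                      # columns span the eigenspace

def to_eigenspace(x, mean, basis):
    return basis.T @ (x - mean)            # eigenvariables of x

def from_eigenspace(y, mean, basis):
    return mean + basis @ y                # back to the decision space

def random_grouping(n_vars, n_groups, rng):
    # Weakened dependencies let a simple random partition of the
    # eigenvariables define the subproblems.
    return np.array_split(rng.permutation(n_vars), n_groups)
```

Each index group then defines a low-dimensional subproblem over the eigenvariables that a traditional EA can optimize, decoding candidates with from_eigenspace for fitness evaluation.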
Related papers
- Dynamic Incremental Optimization for Best Subset Selection [15.8362578568708]
Best subset selection is considered the gold standard for many learning problems.
An efficient subset-dual algorithm is developed based on the primal and dual problem structures.
arXiv Detail & Related papers (2024-02-04T02:26:40Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
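
The stated equivalence can be made concrete with implicit differentiation: at a solver fixed point x* = f(x*, theta), the backward pass amounts to solving a linear system, and truncating the unroll corresponds to a truncated Neumann-series solve of that system. A minimal NumPy sketch of this relationship (our illustration, not the paper's folded rules):

```python
import numpy as np

# At a solver fixed point x* = f(x*, theta), differentiating the
# fixed-point condition gives the linear system
#   (I - df/dx) @ J = df/dtheta,   with J = dx*/dtheta.

def fixed_point_jacobian(dfdx, dfdtheta):
    # dfdx: (n, n), dfdtheta: (n, p); returns J of shape (n, p).
    n = dfdx.shape[0]
    return np.linalg.solve(np.eye(n) - dfdx, dfdtheta)

def unrolled_jacobian(dfdx, dfdtheta, k):
    # Backpropagating through k unrolled iterations applies a
    # truncated Neumann series to the same system:
    #   J_k = (I + A + ... + A^(k-1)) @ B,  A = dfdx, B = dfdtheta.
    J = np.zeros_like(dfdtheta)
    for _ in range(k):
        J = dfdx @ J + dfdtheta
    return J
```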
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
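
To illustrate the idea of linearizing only the smooth part, here is a generalized conditional-gradient sketch for min f(x) + lam * ||x||_1 over the box [-1, 1]^n, where the linearized subproblem keeps the non-smooth term exact; the box/l1 setup is our assumption, not the paper's exact scheme:

```python
import numpy as np

def solve_linearized(grad, lam):
    # min over s in [-1, 1]^n of  grad @ s + lam * ||s||_1.
    # The subproblem is coordinate-separable with a closed form:
    s = np.zeros_like(grad)
    s[grad > lam] = -1.0
    s[grad < -lam] = 1.0
    return s

def conditional_gradient(f_grad, lam, n, iters=200):
    x = np.zeros(n)
    for t in range(iters):
        s = solve_linearized(f_grad(x), lam)   # linearize f only
        gamma = 2.0 / (t + 2)                  # open-loop step size
        x = (1 - gamma) * x + gamma * s        # stays in the box
    return x

# Example: smooth part f(x) = 0.5 * ||x - b||^2, so f_grad = x - b.
b = np.linspace(-2.0, 2.0, 5)
x_hat = conditional_gradient(lambda x: x - b, lam=0.5, n=5)
```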
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Learning Adaptive Evolutionary Computation for Solving Multi-Objective Optimization Problems [3.3266268089678257]
This paper proposes a framework that integrates MOEAs with adaptive parameter control using Deep Reinforcement Learning (DRL).
The DRL policy is trained to adaptively set the values that dictate the intensity and probability of mutation for solutions during optimization.
We show the learned policy is transferable, i.e., the policy trained on a simple benchmark problem can be directly applied to solve the complex warehouse optimization problem.
arXiv Detail & Related papers (2022-11-01T22:08:34Z)
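
A trained DRL policy does not fit in a few lines, but the adaptive-parameter-control loop it drives can be mimicked with an epsilon-greedy bandit over candidate mutation rates. The sketch below is a deliberately simplified stand-in for the learned policy:

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.01, 0.05, 0.1, 0.3])  # candidate mutation rates
value = np.zeros(len(rates))              # running reward estimates
count = np.zeros(len(rates))

def pick(eps=0.1):
    # Explore occasionally, otherwise use the best-known rate.
    if rng.random() < eps:
        return int(rng.integers(len(rates)))
    return int(np.argmax(value))

def update(arm, reward):
    count[arm] += 1
    value[arm] += (reward - value[arm]) / count[arm]

for gen in range(100):
    arm = pick()
    # Placeholder reward: in an EA this would be the fitness
    # improvement observed after mutating with rates[arm].
    reward = rng.random() * rates[arm]
    update(arm, reward)
```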
- Best Subset Selection with Efficient Primal-Dual Algorithm [24.568094642425837]
Best subset selection is considered the 'gold standard' for many learning problems.
In this paper, we investigate the dual forms of a family of $\ell_0$-regularized problems.
An efficient primal-dual method has been developed based on the primal and dual problem structures.
arXiv Detail & Related papers (2022-07-05T14:07:52Z)
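
To make the underlying $\ell_0$-constrained problem concrete, here is iterative hard thresholding, a standard simple baseline for best subset selection; this is not the paper's primal-dual algorithm:

```python
import numpy as np

def iht(X, y, k, step, iters=500):
    # min_w 0.5 * ||y - X @ w||^2  subject to  ||w||_0 <= k,
    # solved approximately by gradient steps plus hard thresholding.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w + step * X.T @ (y - X @ w)       # gradient step
        w[np.argsort(np.abs(w))[:-k]] = 0.0    # keep top-k entries
    return w

# step should be below 1 / ||X||_2^2 for stability, e.g.:
# step = 1.0 / np.linalg.norm(X, 2) ** 2
```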
- A novel multiobjective evolutionary algorithm based on decomposition and multi-reference points strategy [14.102326122777475]
Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been regarded as a highly promising approach for solving multiobjective optimization problems (MOPs).
We propose an improved MOEA/D algorithm by virtue of the well-known Pascoletti-Serafini scalarization method and a new strategy of multi-reference points.
arXiv Detail & Related papers (2021-10-27T02:07:08Z)
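
The decomposition idea behind MOEA/D can be shown with the common Tchebycheff scalarization; the paper's contribution replaces the scalarizing function with a Pascoletti-Serafini-based one, which this sketch does not implement:

```python
import numpy as np

def tchebycheff(fvals, weights, z_star):
    # Scalarized subproblem value for one weight vector:
    # g(x | w, z*) = max_i  w_i * |f_i(x) - z*_i|.
    return np.max(weights * np.abs(fvals - z_star))

# Uniform weight vectors decompose a bi-objective problem into
# 11 scalar subproblems, each optimized by an EA that shares
# information with its neighboring subproblems.
W = np.stack([np.linspace(0, 1, 11), np.linspace(1, 0, 11)], axis=1)
```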
- Solving Large-Scale Multi-Objective Optimization via Probabilistic Prediction Model [10.916384208006157]
An efficient algorithm for large-scale multi-objective optimization problems (LSMOPs) should be able to escape local optima in the huge search space.
Maintaining the diversity of the population is one of the effective ways to improve search efficiency.
We propose a probabilistic prediction model based on a trend prediction model and a generating-filtering strategy, called LT-PPM, to tackle the LSMOP.
arXiv Detail & Related papers (2021-07-16T09:43:35Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer, named stocBiO, built on a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
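
The hypergradient that such stochastic bilevel optimizers estimate has a closed form in which the costly inverse Hessian-vector product is typically approximated by a truncated Neumann series. A small deterministic NumPy sketch with our own function names:

```python
import numpy as np

def neumann_ihvp(hess_yy, v, eta=0.1, k=20):
    # Approximates inv(hess_yy) @ v via the truncated series
    # eta * sum_{i=0..k} (I - eta * H)^i @ v.
    p, out = v.copy(), v.copy()
    for _ in range(k):
        p = p - eta * hess_yy @ p
        out = out + p
    return eta * out

def hypergradient(grad_x_f, grad_y_f, jac_xy_g, hess_yy_g):
    # dPhi/dx = grad_x f - (d2g/dxdy) @ inv(d2g/dy2) @ grad_y f,
    # for the bilevel objective Phi(x) = f(x, y*(x)) with
    # y*(x) = argmin_y g(x, y).
    v = neumann_ihvp(hess_yy_g, grad_y_f)
    return grad_x_f - jac_xy_g @ v
```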
- Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
- Follow the bisector: a simple method for multi-objective optimization [65.83318707752385]
We consider optimization problems where multiple differentiable losses have to be minimized.
The presented method computes a descent direction in every iteration that guarantees an equal relative decrease of the objective functions.
arXiv Detail & Related papers (2020-07-14T09:50:33Z)
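
For two losses the idea reduces to stepping along the bisector of the two negated unit gradients, which equalizes the decrease rate of each objective relative to its gradient norm. A minimal sketch under that two-loss assumption:

```python
import numpy as np

def bisector_direction(g1, g2, eps=1e-12):
    # Bisector of the two negated unit gradients: the directional
    # derivative relative to each gradient's norm is identical,
    # so both objectives decrease at an equal relative rate.
    u1 = g1 / (np.linalg.norm(g1) + eps)
    u2 = g2 / (np.linalg.norm(g2) + eps)
    d = -(u1 + u2)
    return d / (np.linalg.norm(d) + eps)

# One step on x for f1(x) = ||x||^2 and f2(x) = 0.5 * ||x - c||^2:
c = np.array([3.0, 0.0])
x = np.array([1.0, 2.0])
x = x + 0.1 * bisector_direction(2 * x, x - c)
```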
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.