Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process
Optimization
- URL: http://arxiv.org/abs/2305.19775v1
- Date: Wed, 31 May 2023 12:07:50 GMT
- Title: Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process
Optimization
- Authors: Leo Francoso Dal Piccol Sotto, Sebastian Mayer, Hemanth Janarthanam,
Alexander Butz, Jochen Garcke
- Abstract summary: We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend with two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
Results show that adaption with standard NSGA-II greatly reduces the number of evaluations required for optimization towards a target goal, and the proposed variants further reduce adaption costs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimizing manufacturing process parameters is typically a multi-objective
problem with often contradictory objectives such as production quality and
production time. If production requirements change, process parameters have to
be optimized again. Since optimization usually requires costly simulations
based on, for example, the Finite Element method, it is of great interest to
have means to reduce the number of evaluations needed for optimization. To this
end, we consider optimizing for different production requirements from the
viewpoint of a framework for system flexibility that allows us to study the
ability of an algorithm to transfer solutions from previous optimization tasks,
which also relates to dynamic evolutionary optimization. Based on the extended
Oxley model for orthogonal metal cutting, we introduce a multi-objective
optimization benchmark where different materials define related optimization
tasks, and use it to study the flexibility of NSGA-II, which we extend with
two variants: 1) varying goals, which optimizes solutions for two tasks
simultaneously to obtain in-between source solutions expected to be more
adaptable, and 2) active-inactive genotype, which accommodates different
possibilities that can be activated or deactivated. Results show that adaption
with standard NSGA-II greatly reduces the number of evaluations required for
optimization towards a target goal, while the proposed variants further reduce
adaption costs, although further work is needed to make the methods
advantageous in real applications.
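The solution-transfer (adaption) idea can be illustrated with a toy sketch: when a new task arrives (e.g. a harder material), seed the population with non-dominated solutions from a previously solved task rather than starting from random designs. Everything below is a hypothetical stand-in: the objective functions are not the extended Oxley model, and the simplified elitist loop omits NSGA-II's front-based sorting and crowding-distance selection.

```python
import random

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(pop, objs):
    # keep individuals whose objective vectors no other individual dominates
    return [pop[i] for i, fi in enumerate(objs)
            if not any(dominates(fj, fi) for j, fj in enumerate(objs) if j != i)]

def evaluate(x, hardness):
    # hypothetical stand-in for the cutting simulation: (speed, feed) ->
    # (production time, tool-wear proxy); harder materials wear tools faster
    speed, feed = x
    return (1.0 / (speed * feed), hardness * speed ** 2 + feed)

def optimize(init, hardness, pop_size=20, generations=30, seed=0):
    # simplified elitist multi-objective EA; `init` holds solutions
    # transferred from a previously solved source task
    rng = random.Random(seed)
    pop = list(init)[:pop_size]
    while len(pop) < pop_size:  # top up with random designs
        pop.append((rng.uniform(0.1, 2.0), rng.uniform(0.1, 1.0)))
    for _ in range(generations):
        objs = [evaluate(x, hardness) for x in pop]
        parents = nondominated(pop, objs)[:pop_size]
        # refill the population by mutating non-dominated parents
        children = [tuple(max(0.05, g + rng.gauss(0, 0.05))
                          for g in rng.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    objs = [evaluate(x, hardness) for x in pop]
    return nondominated(pop, objs)

# Adaption: reuse the front found for a softer material (hardness 0.5)
# as the seed population when re-optimizing for a harder one (hardness 2.0).
source_front = optimize([], hardness=0.5)
adapted_front = optimize(source_front, hardness=2.0)
```

The seeding step is the measurable quantity in the paper's framework: the adaption cost is how many extra evaluations the seeded run needs to reach the target-task front compared with optimizing from scratch.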
Related papers
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Advancements in Optimization: Adaptive Differential Evolution with
Diversification Strategy
The study employs single-objective optimization in a two-dimensional space and runs ADEDS on each of the benchmark functions with multiple iterations.
ADEDS consistently outperforms standard DE for a variety of optimization challenges, including functions with numerous local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy functions.
arXiv Detail & Related papers (2023-10-02T10:05:41Z)
- Backpropagation of Unrolled Solvers with Folded Optimization
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Optimal Design of Electric Machine with Efficient Handling of
Constraints and Surrogate Assistance
This article proposes an optimization method incorporated into a popularly-used evolutionary multi-objective optimization algorithm - NSGA-II.
The proposed method exploits the inexpensiveness of geometric constraints to generate feasible designs by using a custom repair operator.
arXiv Detail & Related papers (2022-06-03T17:13:29Z)
- Multi-objective robust optimization using adaptive surrogate models for
problems with mixed continuous-categorical parameters
Robust design optimization is traditionally considered when uncertainties are mainly affecting the objective function.
The resulting nested optimization problem may be solved using a general-purpose solver, herein the non-dominated sorting genetic algorithm (NSGA-II).
The proposed approach consists of sequentially carrying out NSGA-II while using an adaptively built Kriging model to estimate the quantiles.
arXiv Detail & Related papers (2022-03-03T20:23:18Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on
Manifold Interpolation
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
The proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Optimization-Inspired Learning with Architecture Augmentations and
Control Mechanisms for Low-Level Vision
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
- Enhanced Innovized Repair Operator for Evolutionary Multi- and
Many-objective Optimization
"Innovization" is the task of learning common relationships among some or all of the Pareto-optimal (PO) solutions in optimization problems.
Recent studies have shown that a chronological sequence of non-dominated solutions also possess salient patterns that can be used to learn problem features.
We propose a machine-learning- (ML-) assisted modelling approach that learns the modifications in design variables needed to advance population members towards the Pareto-optimal set.
arXiv Detail & Related papers (2020-11-21T10:29:15Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in a complex training model pipeline yields predictions that improve the quality of downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.