Computationally Efficient Optimisation of Elbow-Type Draft Tube Using
Neural Network Surrogates
- URL: http://arxiv.org/abs/2401.08700v1
- Date: Sun, 14 Jan 2024 14:05:26 GMT
- Authors: Ante Sikirica, Ivana Lučin, Marta Alvir, Lado Kranjčević and
Zoran Čarija
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study aims to provide a comprehensive assessment of single-objective and
multi-objective optimisation algorithms for the design of an elbow-type draft
tube, as well as to introduce a computationally efficient optimisation
workflow. The proposed workflow leverages deep neural network surrogates
trained on data obtained from numerical simulations. The use of surrogates
allows for a more flexible and faster evaluation of novel designs. The success
history-based adaptive differential evolution with linear reduction and the
multi-objective evolutionary algorithm based on decomposition were identified
as the best-performing algorithms and used to determine the influence of
different objectives in the single-objective optimisation and their combined
impact on the draft tube design in the multi-objective optimisation. The
results for the single-objective algorithm are consistent with those of the
multi-objective algorithm when the objectives are considered separately.
The multi-objective approach, however, should typically be preferred, especially
when surrogate evaluations are computationally inexpensive. A multi-criteria decision analysis
method was used to obtain optimal multi-objective results, showing an
improvement of 1.5% and 17% for the pressure recovery factor and drag
coefficient, respectively. The difference between the predictions and the
numerical results is less than 0.5% for the pressure recovery factor and 3% for
the drag coefficient. As the demand for renewable energy continues to increase,
the relevance of data-driven optimisation workflows, as discussed in this
study, will become increasingly important, especially in the context of global
sustainability efforts.
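The workflow described above — evaluate candidate designs with cheap surrogates, keep the non-dominated (Pareto) set, then pick a single design by a multi-criteria rule — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two "surrogates" are hypothetical analytic stand-ins for the trained deep neural networks, and the design vector, function names and constants are invented for the example.

```python
import random

def surrogate_cp(x):
    """Hypothetical pressure recovery factor surrogate (to be maximised)."""
    return 1.0 - sum((xi - 0.6) ** 2 for xi in x)

def surrogate_cd(x):
    """Hypothetical drag coefficient surrogate (to be minimised)."""
    return 0.1 + sum((xi - 0.4) ** 2 for xi in x)

def dominates(a, b):
    """True if a = (cp, cd) dominates b: no worse in both objectives
    (maximise cp, minimise cd) and strictly better in at least one."""
    return a[0] >= b[0] and a[1] <= b[1] and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

random.seed(0)
# Sample candidate draft-tube parameter vectors and score them via surrogates.
designs = [[random.random() for _ in range(4)] for _ in range(500)]
objectives = [(surrogate_cp(x), surrogate_cd(x)) for x in designs]
front = pareto_front(objectives)

# Simple multi-criteria pick: the front member closest to the ideal point
# (best cp and best cd observed on the front) -- one common MCDA heuristic,
# standing in for the paper's actual decision-analysis method.
ideal = (max(p[0] for p in front), min(p[1] for p in front))
best = min(front, key=lambda p: (ideal[0] - p[0]) ** 2 + (p[1] - ideal[1]) ** 2)
```

In the actual study the random sampling step would be replaced by an evolutionary search (e.g. L-SHADE or MOEA/D) driven by the surrogates, with the final candidates re-checked against full CFD simulations.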
Related papers
- Evolutionary Multi-Objective Optimisation for Fairness-Aware Self Adjusting Memory Classifiers in Data Streams [2.8366371519410887]
We introduce a novel approach to enhance fairness in machine learning algorithms applied to data stream classification.
The proposed approach integrates the strengths of the self-adjusting memory K-Nearest-Neighbour algorithm with evolutionary multi-objective optimisation.
We show that the proposed approach maintains competitive accuracy and significantly reduces discrimination.
arXiv Detail & Related papers (2024-04-18T10:59:04Z)
- DADO -- Low-Cost Query Strategies for Deep Active Design Optimization [1.6298921134113031]
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
arXiv Detail & Related papers (2023-07-10T13:01:27Z)
- Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend with two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) an active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
Results show that adaption with standard NSGA-II greatly reduces the number of evaluations required for optimization to a target goal, while the proposed variants further reduce the adaption costs.
arXiv Detail & Related papers (2023-05-31T12:07:50Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation [87.54604263202941]
We propose a tiny deep neural network whose partial layers are iteratively exploited to refine its previous estimations.
We employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model.
Our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency for widely used benchmarks.
arXiv Detail & Related papers (2021-11-11T23:31:34Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
The proposed framework features faster convergence and stronger resilience to various Pareto front (PF) shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Transfer Learning Based Co-surrogate Assisted Evolutionary Bi-objective Optimization for Objectives with Non-uniform Evaluation Times [9.139734850798124]
Multiobjective evolutionary algorithms assume that each objective function can be evaluated within the same period of time.
A co-surrogate is adopted to model the functional relationship between the fast and slow objective functions.
A transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective.
arXiv Detail & Related papers (2021-08-30T16:10:15Z)
- Solving Large-Scale Multi-Objective Optimization via Probabilistic Prediction Model [10.916384208006157]
An efficient algorithm for large-scale multi-objective optimization problems (LSMOPs) should be able to escape local optima in the huge search space.
Maintaining the diversity of the population is one of the effective ways to improve search efficiency.
We propose a probabilistic prediction model based on a trend prediction model and a generating-filtering strategy, called LT-PPM, to tackle the LSMOP.
arXiv Detail & Related papers (2021-07-16T09:43:35Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.