Variable Functioning and Its Application to Large Scale Steel Frame
Design Optimization
- URL: http://arxiv.org/abs/2205.07274v1
- Date: Sun, 15 May 2022 12:43:25 GMT
- Title: Variable Functioning and Its Application to Large Scale Steel Frame
Design Optimization
- Authors: Amir H Gandomi, Kalyanmoy Deb, Ronald C Averill, Shahryar Rahnamayan,
Mohammad Nabi Omidvar
- Abstract summary: A concept-based approach called variable functioning ($Fx$) is introduced to reduce the number of optimization variables and narrow down the search space.
By using a problem structure analysis technique and engineering expert knowledge, the $Fx$ method is used to enhance the steel frame design optimization process.
- Score: 15.86197261674868
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To solve complex real-world problems, heuristics and concept-based approaches
can be used in order to incorporate information into the problem. In this
study, a concept-based approach called variable functioning ($Fx$) is
introduced to reduce the number of optimization variables and narrow down the
search space. In this method, the relationships among one or more subsets of
variables are defined with functions using information available prior to
optimization; thus, instead of modifying the variables directly in the search
process, the function variables are optimized. By using a problem structure
analysis technique and engineering expert knowledge, the $Fx$ method is used to
enhance the steel frame design
optimization process as a complex real-world problem. The proposed approach is
coupled with particle swarm optimization and differential evolution algorithms
and used for three case studies. The algorithms are applied to optimize the
case studies by considering the relationships among column cross-section areas.
The results show that $Fx$ can significantly improve both the convergence rate
and the final design of a frame structure, even if it is only used for seeding.
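
A minimal sketch of the variable-functioning idea, assuming a hypothetical geometric taper relationship among column cross-section areas: instead of optimizing all column areas directly, only the parameters of the generating function are optimized, here with SciPy's differential evolution (mirroring the DE coupling in the abstract). The objective, function family, parameter names, and bounds are illustrative, not taken from the paper.

```python
# Variable functioning (Fx) sketch: optimize 2 function variables instead of
# 30 raw column cross-section areas. All numbers here are illustrative.
import numpy as np
from scipy.optimize import differential_evolution

N_COLUMNS = 30  # hypothetical number of column design variables

def areas_from_params(params):
    """Map the function variables (a0, r) to N_COLUMNS areas.

    Assumes areas taper geometrically from the base storey upward:
    a0 is the base cross-section area, r the per-storey taper rate.
    """
    a0, r = params
    return a0 * r ** np.arange(N_COLUMNS)

def toy_objective(params):
    """Stand-in for frame weight plus constraint penalties."""
    areas = areas_from_params(params)
    weight = areas.sum()                                     # material proxy
    min_area_violation = np.maximum(0.0, 5.0 - areas).sum()  # require a_i >= 5
    return weight + 100.0 * min_area_violation

bounds = [(1.0, 50.0),   # a0: base cross-section area
          (0.80, 1.00)]  # r: taper rate per storey
result = differential_evolution(toy_objective, bounds, seed=0)
print("best function variables:", result.x)
print("generated column areas:", areas_from_params(result.x).round(2))
```

The search thus runs over 2 function variables rather than 30 raw areas; the resulting design can also be used merely to seed a full-variable optimizer, which matches the seeding observation above.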
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to solving a linear system by a particular iterative method; a minimal numeric sketch of this equivalence appears after this list.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Tensor Train for Global Optimization Problems in Robotics [6.702251803443858]
The convergence of many numerical optimization techniques is highly dependent on the initial guess given to the solver.
We propose a novel approach that utilizes tensor train (TT) methods to initialize existing optimization solvers near global optima.
We show that the proposed method can generate samples close to global optima and from multiple modes.
arXiv Detail & Related papers (2022-06-10T13:18:26Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- Quantum variational optimization: The role of entanglement and problem hardness [0.0]
We study the role of entanglement, the structure of the variational quantum circuit, and the structure of the optimization problem.
Our numerical results indicate an advantage in adapting the distribution of entangling gates to the problem's topology.
We find evidence that applying conditional value-at-risk (CVaR) type cost functions improves the optimization, increasing the probability of overlap with the optimal solutions.
arXiv Detail & Related papers (2021-03-26T14:06:54Z)
- Particle Swarm Optimization: Fundamental Study and its Application to Optimization and to Jetty Scheduling Problems [0.0]
The advantages of evolutionary algorithms with respect to traditional methods have been greatly discussed in the literature.
While particle swarms share such advantages, they outperform evolutionary algorithms in that they incur lower computational cost and are easier to implement (the canonical update rule is sketched after this list).
This paper does not intend to study their tuning; general-purpose settings are taken from previous studies, and virtually the same algorithm is used to optimize a variety of notably different problems.
arXiv Detail & Related papers (2021-01-25T02:06:30Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piece-wise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
- Bayesian Variational Optimization for Combinatorial Spaces [0.0]
Broad applications include the study of molecules, proteins, DNA, device structures and quantum circuit designs.
A method for optimization over categorical spaces is needed to find optimal or Pareto-optimal solutions.
We introduce a variational Bayesian optimization method that combines variational optimization and continuous relaxations.
arXiv Detail & Related papers (2020-11-03T20:56:13Z)
- Bayesian optimization of variable-size design space problems [0.0]
Two alternative Bayesian Optimization-based approaches are proposed to solve this type of optimization problem.
The first approach is a budget allocation strategy that focuses the computational budget on the most promising design sub-spaces.
The second approach is instead based on a kernel function that computes the covariance between samples characterized by partially different sets of variables.
arXiv Detail & Related papers (2020-03-06T16:30:44Z)
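
For the two unrolled-optimization entries above (the backward-pass convergence analysis and folded optimization), here is a minimal numeric sketch of the claimed equivalence: differentiating unrolled gradient descent on a toy quadratic by the chain rule yields a geometric-series recursion that converges to the solution of a linear system. The objective, step size, and iteration count are illustrative, not the papers' setup.

```python
# Differentiate K steps of gradient descent on f(x, theta) = 0.5*a*x**2 - theta*x
# by hand. The true minimizer is x*(theta) = theta / a, so dx*/dtheta = 1 / a.
a, theta, alpha, K = 4.0, 2.0, 0.1, 60

x, dx_dtheta = 0.0, 0.0
for _ in range(K):
    # forward step: x <- x - alpha * grad_x f = x - alpha * (a*x - theta)
    x = x - alpha * (a * x - theta)
    # backward (unrolled) step: chain rule through the update above
    dx_dtheta = (1.0 - alpha * a) * dx_dtheta + alpha

print(f"unrolled solution   x      = {x:.6f} (exact {theta / a:.6f})")
print(f"unrolled derivative dx/dth = {dx_dtheta:.6f} (exact {1.0 / a:.6f})")
```

The backward recursion d <- (1 - alpha*a)*d + alpha is itself a fixed-point iteration for the linear system a*d = 1, which is the sense in which the backward pass of unrolling amounts to an iterative linear-system solve.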
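
For the particle swarm optimization entry, and since the $Fx$ paper above couples its method with PSO, here is a minimal sketch of the canonical global-best PSO update on a toy sphere function. The inertia weight and acceleration coefficients are common textbook defaults, not values from either paper.

```python
# Canonical global-best PSO on the sphere function; all settings illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim, n_particles, iters = 5, 20, 200
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients

def sphere(x):
    return np.sum(x ** 2, axis=-1)

pos = rng.uniform(-5.0, 5.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                 # personal best positions
pbest_val = sphere(pbest)
gbest = pbest[pbest_val.argmin()]  # global best position

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # velocity update: inertia + pull toward personal best + pull toward global best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = sphere(pos)
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()]

print("best value found:", pbest_val.min())
```

In the $Fx$ setting described above, pos would hold the low-dimensional function variables rather than the raw design variables.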
This list is automatically generated from the titles and abstracts of the papers on this site.