Reducing the Price of Stable Cable Stayed Bridges with CMA-ES
- URL: http://arxiv.org/abs/2304.00641v1
- Date: Sun, 2 Apr 2023 22:14:36 GMT
- Title: Reducing the Price of Stable Cable Stayed Bridges with CMA-ES
- Authors: Gabriel Fernandes, Nuno Lourenço, and João Correia
- Abstract summary: CMA-ES is a better option for finding good solutions in the search space, beating the baseline with the same number of evaluations.
Concretely, the CMA-ES approach is able to design bridges that are cheaper and structurally safe.
- Score: 0.2883903547507341
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The design of cable-stayed bridges requires determining the values of several design variables. Civil engineers usually perform this task by hand, iterating until they are satisfied with both the cost of the solution and its compliance with the structural constraints. The problem's difficulty arises from the fact that the variables are not independent: changing one may affect the others, suggesting a deceptive search landscape. In this work, we compare two approaches against a baseline solution: a Genetic Algorithm and CMA-ES. There are two objectives when designing the bridges: minimizing the cost and keeping the structural constraints within values that are considered safe. These objectives conflict, meaning that decreasing the cost often results in a bridge that is not structurally safe. The results suggest that CMA-ES is the better option for finding good solutions in the search space, beating the baseline within the same number of evaluations, which the Genetic Algorithm could not do. Concretely, the CMA-ES approach is able to design bridges that are both cheaper and structurally safe.
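To make the optimization setup concrete, here is a minimal sketch of CMA-ES minimizing a penalized bridge cost. It assumes a simple penalty-based handling of the structural constraints (the abstract does not specify the authors' exact formulation) and uses the open-source pycma package; `bridge_cost` and `constraint_violation` are hypothetical placeholders for the structural model, not the paper's code.

```python
# Minimal sketch: penalized cost minimization with CMA-ES via the pycma
# package (pip install cma). bridge_cost and constraint_violation are
# hypothetical placeholders for the paper's structural model.
import cma
import numpy as np

def bridge_cost(x):
    # Placeholder: e.g. total material volume as a proxy for price.
    return float(np.sum(np.abs(x)))

def constraint_violation(x):
    # Placeholder: total amount by which structural limits are exceeded
    # (0 when the design is safe).
    return float(np.sum(np.maximum(x - 10.0, 0.0)))

def objective(x, penalty=1e3):
    # Conflicting goals: cheap AND safe. Violations are penalized so the
    # optimizer is steered back toward structurally feasible designs.
    return bridge_cost(x) + penalty * constraint_violation(x)

x0 = np.full(8, 5.0)  # initial guess for 8 design variables
es = cma.CMAEvolutionStrategy(x0, 2.0, {"maxfevals": 10_000, "seed": 1})
while not es.stop():
    candidates = es.ask()  # sample a new population from the search distribution
    es.tell(candidates, [objective(c) for c in candidates])
print("best design:", es.result.xbest, "cost:", es.result.fbest)
```

The penalty term encodes the conflict described in the abstract: designs that get cheaper by violating structural limits are pushed back toward feasibility.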
Related papers
- Economic span selection of bridge based on deep reinforcement learning [1.4185188982404755]
A deep Q-network algorithm is used to select the economic span of a bridge.
The economic span is analyzed theoretically, and a theoretical formula for it is derived.
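As an illustration of the span-selection idea, the following toy replaces the paper's deep Q-network with tabular Q-learning over a discretized set of candidate spans; the cost model is hypothetical and stands in for the paper's economic analysis.

```python
# Illustrative toy only: the paper uses a deep Q-network, whereas this
# sketch uses tabular Q-learning over a discretized set of candidate
# spans, with a hypothetical cost model in place of the paper's analysis.
import random

SPANS = [20, 30, 40, 50, 60, 70]  # candidate spans in metres (hypothetical)

def total_cost(span):
    # Hypothetical U-shaped cost: superstructure cost grows with span,
    # substructure cost shrinks (fewer piers). Not the paper's model.
    return 0.02 * span ** 2 + 3000.0 / span

Q = {s: 0.0 for s in SPANS}
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    # epsilon-greedy selection over the candidate spans
    span = random.choice(SPANS) if random.random() < epsilon else max(Q, key=Q.get)
    reward = -total_cost(span)             # cheaper span => higher reward
    Q[span] += alpha * (reward - Q[span])  # one-step (bandit-style) Q-update
print("economic span estimate:", max(Q, key=Q.get), "m")
```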
arXiv Detail & Related papers (2024-07-09T02:27:52Z)
- Superconstant Inapproximability of Decision Tree Learning [7.420043502440765]
We consider the task of properly PAC learning decision trees with queries.
Recent work of Koch, Strassle, and Tan showed that the strictest version of this task, where the hypothesis tree $T$ is required to be optimally small, is NP-hard.
We show that the task indeed remains NP-hard even if $T$ is allowed to be within any constant factor of optimal.
arXiv Detail & Related papers (2024-07-01T15:53:03Z)
- SoK: Cross-Chain Bridging Architectural Design Flaws and Mitigations [2.490441444378203]
Cross-chain bridges are solutions that enable interoperability between heterogeneous blockchains.
In contrast to the underlying blockchains, the bridges often provide inferior security guarantees.
We have analysed 60 different bridges and 34 bridge exploits in the last three years.
arXiv Detail & Related papers (2024-03-01T09:50:56Z)
- Structured Transforms Across Spaces with Cost-Regularized Optimal Transport [25.684201757101263]
In this work we exploit a parallel between Gromov-Wasserstein (GW) and cost-regularized optimal transport (OT), the regularized minimization of a linear OT objective parameterized by a ground cost.
We show that several quadratic OT problems fall into this category, and consider enforcing structure in the linear transform.
We provide a proximal algorithm to extract such transforms from unaligned data, and demonstrate its applicability to single-cell spatial transcriptomics/multiomics matching tasks.
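For reference, our reading of a cost-regularized OT objective of the kind the summary describes, in notation of our own choosing (the paper's exact formulation may differ):

```latex
% Our notation, not the paper's: jointly optimize the coupling P and the
% ground cost C, with a regularizer R enforcing structure on C (tau > 0).
\min_{C \in \mathcal{C}} \ \min_{P \in U(a,b)} \ \langle C, P \rangle + \tau\, R(C),
\qquad U(a,b) = \{ P \geq 0 : P\mathbf{1} = a,\ P^{\top}\mathbf{1} = b \}
```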
arXiv Detail & Related papers (2023-11-09T23:33:31Z)
- Variance-Dependent Regret Bounds for Linear Bandits and Reinforcement Learning: Adaptivity and Computational Efficiency [90.40062452292091]
We present the first computationally efficient algorithm for linear bandits with heteroscedastic noise.
Our algorithm is adaptive to the unknown variance of noise and achieves an $\tilde{O}(d\sqrt{\sum_{k=1}^{K}\sigma_k^2} + d)$ regret.
We also propose a variance-adaptive algorithm for linear mixture Markov decision processes (MDPs) in reinforcement learning.
arXiv Detail & Related papers (2023-02-21T00:17:24Z)
- Markovian Sliced Wasserstein Distances: Beyond Independent Projections [51.80527230603978]
We introduce a new family of sliced Wasserstein (SW) distances, named the Markovian sliced Wasserstein (MSW) distance, which imposes a first-order Markov structure on the projection directions.
We compare MSW with previous SW variants in various applications such as flows, color transfer, and deep generative modeling to demonstrate its favorable performance.
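A sketch of how such a distance could be estimated, with the first-order Markov structure realized as a random walk on the unit sphere; the transition kernel and weighting are our assumptions, not necessarily the paper's.

```python
# Sketch of a Markovian sliced Wasserstein estimate: the projection
# directions form a random walk on the unit sphere (first-order Markov
# structure). Kernel and weighting are our assumptions, not the paper's.
import numpy as np

def wasserstein_1d(x, y):
    # Exact squared 2-Wasserstein between equal-size 1-D empirical
    # measures: sort both samples and average the squared differences.
    return float(np.mean((np.sort(x) - np.sort(y)) ** 2))

def markov_directions(num, dim, step=0.3, seed=0):
    # Each direction perturbs the previous one, then is renormalized.
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(dim)
    theta /= np.linalg.norm(theta)
    dirs = [theta.copy()]
    for _ in range(num - 1):
        theta = theta + step * rng.standard_normal(dim)
        theta /= np.linalg.norm(theta)
        dirs.append(theta.copy())
    return np.stack(dirs)

def msw_distance(X, Y, num_projections=64):
    # Average the 1-D costs over the Markov-chained directions.
    dirs = markov_directions(num_projections, X.shape[1])
    return float(np.mean([wasserstein_1d(X @ d, Y @ d) for d in dirs]))

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((256, 5)), rng.standard_normal((256, 5)) + 0.5
print("MSW estimate:", msw_distance(X, Y))
```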
arXiv Detail & Related papers (2023-01-10T01:58:15Z)
- Adaptive Multi-Goal Exploration [118.40427257364729]
We show how AdaGoal can be used to tackle the objective of learning an $\epsilon$-optimal goal-conditioned policy.
AdaGoal is anchored in the high-level algorithmic structure of existing methods for goal-conditioned deep reinforcement learning.
arXiv Detail & Related papers (2021-11-23T17:59:50Z)
- Linear Contextual Bandits with Adversarial Corruptions [91.38793800392108]
We study the linear contextual bandit problem in the presence of adversarial corruption.
We present a variance-aware algorithm that is adaptive to the level of adversarial contamination $C$.
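For context, a sketch of the standard (non-robust) LinUCB baseline for linear contextual bandits; the paper's algorithm additionally uses variance-aware weights to tolerate a corruption budget $C$, which this sketch does not implement. `get_reward` is a hypothetical environment callback.

```python
# Standard (non-robust) LinUCB baseline for linear contextual bandits.
# The paper's algorithm additionally down-weights suspicious observations
# to tolerate a corruption budget C; that part is NOT implemented here.
import numpy as np

def linucb(contexts, get_reward, alpha=1.0):
    # contexts: (rounds, actions, dim) array of feature vectors
    rounds, actions, d = contexts.shape
    A, b = np.eye(d), np.zeros(d)  # ridge-regression statistics
    total = 0.0
    for t in range(rounds):
        theta = np.linalg.solve(A, b)  # current least-squares estimate
        A_inv = np.linalg.inv(A)
        ucb = [contexts[t, a] @ theta
               + alpha * np.sqrt(contexts[t, a] @ A_inv @ contexts[t, a])
               for a in range(actions)]
        a_t = int(np.argmax(ucb))  # optimism in the face of uncertainty
        r = get_reward(t, a_t)     # hypothetical environment callback
        x = contexts[t, a_t]
        A += np.outer(x, x)        # update statistics with the chosen arm
        b += r * x
        total += r
    return total

# Tiny demo with a random linear reward model (hypothetical):
rng = np.random.default_rng(0)
theta_star = rng.standard_normal(4)
ctx = rng.standard_normal((500, 3, 4))
noisy = lambda t, a: float(ctx[t, a] @ theta_star + 0.1 * rng.standard_normal())
print("cumulative reward:", linucb(ctx, noisy))
```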
arXiv Detail & Related papers (2021-10-25T02:53:24Z)
- Recursive Causal Structure Learning in the Presence of Latent Variables and Selection Bias [27.06618125828978]
We consider the problem of learning the causal maximal ancestral graph (MAG) of a system from observational data in the presence of latent variables and selection bias.
We propose a novel computationally efficient constraint-based method that is sound and complete.
We provide experimental results to compare the proposed approach with the state of the art on both synthetic and real-world structures.
arXiv Detail & Related papers (2021-10-22T19:49:59Z)
- Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z)
- Online Apprenticeship Learning [58.45089581278177]
In Apprenticeship Learning (AL), we are given a Markov Decision Process (MDP) without access to the cost function.
The goal is to find a policy that matches the expert's performance on some predefined set of cost functions.
We show that the OAL problem can be effectively solved by combining two mirror descent based no-regret algorithms.
arXiv Detail & Related papers (2021-02-13T12:57:51Z)
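For reference, the generic online mirror descent update that underlies such no-regret algorithms (standard form; the paper's two coupled instances for the policy and cost players are not reproduced here):

```latex
% Generic online mirror descent step (standard form; the paper's two
% coupled no-regret instances, for policy and cost, are not reproduced).
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{X}} \ \eta \langle g_t, x \rangle + D_{\Phi}(x, x_t),
\qquad D_{\Phi}(x, y) = \Phi(x) - \Phi(y) - \langle \nabla \Phi(y), x - y \rangle
```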
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.