Optimization is Not Enough: Why Problem Formulation Deserves Equal Attention
- URL: http://arxiv.org/abs/2602.05466v1
- Date: Thu, 05 Feb 2026 09:15:19 GMT
- Title: Optimization is Not Enough: Why Problem Formulation Deserves Equal Attention
- Authors: Iván Olarte Rodríguez, Gokhan Serhat, Mariusz Bujny, Fabian Duddeck, Thomas Bäck, Elena Raponi
- Abstract summary: Black-box optimization is increasingly used in engineering design problems where simulation-based evaluations are costly and gradients are unavailable. We show that context-agnostic strategies consistently lead to suboptimal or non-physical designs. We motivate the development of new black-box benchmarks that reward physically informed and context-aware optimization strategies.
- Score: 1.6516446394328081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Black-box optimization is increasingly used in engineering design problems where simulation-based evaluations are costly and gradients are unavailable. In this context, the optimization community has largely analyzed algorithm performance in context-free setups, while not enough attention has been devoted to how problem formulation and domain knowledge may affect the optimization outcomes. We address this gap through a case study in the topology optimization of laminated composite structures, formulated as a black-box optimization problem. Specifically, we consider the design of a cantilever beam under a volume constraint, intending to minimize compliance while optimizing both the structural topology and fiber orientations. To assess the impact of problem formulation, we explicitly separate topology and material design variables and compare two strategies: a concurrent approach that optimizes all variables simultaneously without leveraging physical insight, and a sequential approach that optimizes variables of the same nature in stages. Our results show that context-agnostic strategies consistently lead to suboptimal or non-physical designs. In contrast, the sequential strategy yields better-performing and more interpretable solutions. These findings underscore the value of incorporating, when available, domain knowledge into the optimization process and motivate the development of new black-box benchmarks that reward physically informed and context-aware optimization strategies.
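The concurrent-versus-sequential distinction described in the abstract can be sketched on a toy surrogate. This is a minimal sketch only: the quadratic objective, the 4+4 variable split, and the random-search budget are illustrative assumptions, not the paper's finite-element model or its actual optimizers.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # 4 "topology" variables x in [0, 1] and 4 "fiber angle" variables theta

# Toy stand-in for a compliance objective; NOT the paper's FE simulation.
def compliance(x, theta):
    return float(np.sum((x - 0.6) ** 2) + np.sum(np.cos(2 * theta) * x) + 2.0)

def random_search(f, dim, iters=1000):
    """Plain random search in [0, 1]^dim -- a minimal black-box optimizer."""
    best_x, best_f = None, np.inf
    for _ in range(iters):
        x = rng.uniform(0.0, 1.0, dim)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Concurrent: all 8 variables in one context-agnostic search.
_, f_conc = random_search(lambda z: compliance(z[:d], np.pi * z[d:]),
                          2 * d, iters=2000)

# Sequential: optimize variables of the same nature in stages
# (topology first with angles fixed, then angles with topology frozen).
x_star, _ = random_search(lambda x: compliance(x, np.zeros(d)), d, iters=1000)
_, f_seq = random_search(lambda th: compliance(x_star, np.pi * th), d, iters=1000)

print(f_conc, f_seq)
```

On the paper's actual FE problem the sequential strategy is reported to perform better; this toy only demonstrates the bookkeeping of the two formulations under an equal total evaluation budget.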
Related papers
- Investigating the Interplay of Parameterization and Optimizer in Gradient-Free Topology Optimization: A Cantilever Beam Case Study [1.7414095108022616]
This study investigates the interplay between parameterization and optimizer through a minimization problem for a cantilever beam subject to a connectivity constraint. We benchmark three geometric parameterizations, each combined with three representative BBO algorithms. Results reveal that parameterization quality has a stronger influence on optimization performance than the choice of optimizer.
arXiv Detail & Related papers (2026-01-29T19:09:05Z)
- Optimizing Optimizers for Fast Gradient-Based Learning [53.81268610971847]
We lay the theoretical foundation for automating optimizer design in gradient-based learning. By treating the optimizer as a function that translates gradient and loss signals into parameter motions, the problem reduces to a family of convex optimization problems.
arXiv Detail & Related papers (2025-12-06T09:50:41Z)
- Deep Unfolding: Recent Developments, Theory, and Design Guidelines [99.63555420898554]
This article provides a tutorial-style overview of deep unfolding, a framework that transforms optimization algorithms into structured, trainable ML architectures. We review the foundations of optimization for inference and for learning, introduce four representative design paradigms for deep unfolding, and discuss the distinctive training schemes that arise from their iterative nature.
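The unfolding idea can be illustrated with unrolled ISTA for a LASSO problem. This is a hedged sketch: the per-layer step sizes stand in for the would-be trainable parameters, and the problem sizes and data are made up for illustration.

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unfolded_ista(A, b, steps, lam):
    """Each entry of `steps` is one 'layer' of the unfolded network;
    in deep unfolding these step sizes (and thresholds) would be
    learned from data rather than fixed."""
    x = np.zeros(A.shape[1])
    for eta in steps:
        x = soft(x - eta * A.T @ (A @ x - b), eta * lam)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true                            # noiseless sparse regression instance

eta0 = 1.0 / np.linalg.norm(A, 2) ** 2    # classical ISTA step size
x_hat = unfolded_ista(A, b, steps=[eta0] * 50, lam=0.1)
print(np.round(np.linalg.norm(x_hat - x_true), 3))
```

Fifty identical layers reproduce classical ISTA; the unfolding viewpoint treats the same loop as a feed-forward architecture whose parameters can be trained end to end.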
arXiv Detail & Related papers (2025-12-03T13:16:35Z) - MECHBench: A Set of Black-Box Optimization Benchmarks originated from Structural Mechanics [1.4757194159888734]
This paper presents a curated set of optimization benchmarks rooted in structural mechanics. The benchmarks are derived from vehicle crashworthiness scenarios. Within this paper, the reader will find descriptions of the physical context of each case, the corresponding optimization problem formulations, and clear guidelines on how to employ the suite.
arXiv Detail & Related papers (2025-11-13T21:43:59Z) - A novel design update framework for topology optimization with quantum annealing: Application to truss and continuum structures [0.0]
This paper presents a novel design update strategy for topology optimization, formulated as an iterative optimization process. The key contribution lies in incorporating a design-updater concept with quantum annealing, applicable to both truss and continuum structures. Results indicate that the proposed framework successfully finds optimal topologies similar to benchmark results.
arXiv Detail & Related papers (2024-06-27T02:07:38Z) - Analyzing and Enhancing the Backward-Pass Convergence of Unrolled
Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
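The stated equivalence can be seen on a small quadratic program. This is a minimal sketch with a hand-picked 2x2 system; forward-mode Jacobian accumulation stands in for the automatic differentiation discussed in the paper.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
eta = 0.1   # step size small enough that I - eta*A is a contraction
T = 200

# Unroll T gradient-descent steps on f(x) = 0.5 x'Ax - b'x.
# J accumulates dx_t/db alongside the iterates (forward-mode unrolling).
x = np.zeros(2)
J = np.zeros((2, 2))                   # Jacobian dx/db, starts at zero
I = np.eye(2)
for _ in range(T):
    x = x - eta * (A @ x - b)
    J = (I - eta * A) @ J + eta * I    # chain rule through one step

# The fixed point is x* = inv(A) @ b, so dx*/db = inv(A). The unrolled
# Jacobian equals the partial Neumann series eta * sum_k (I - eta*A)^k,
# i.e. an iterative (Richardson) solve of the linear system A @ J = I.
print(np.allclose(J, np.linalg.inv(A), atol=1e-6))  # True
```

In other words, backpropagating through the unrolled loop implicitly runs a fixed-point iteration for a linear system, which is the observation the paper exploits to build more efficient backpropagation rules.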
arXiv Detail & Related papers (2023-12-28T23:15:18Z) - DADO -- Low-Cost Query Strategies for Deep Active Design Optimization [1.6298921134113031]
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
arXiv Detail & Related papers (2023-07-10T13:01:27Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - DEBOSH: Deep Bayesian Shape Optimization [48.80431740983095]
We propose a novel uncertainty-based method tailored to shape optimization.
It enables effective BO and increases the quality of the resulting shapes beyond that of state-of-the-art approaches.
arXiv Detail & Related papers (2021-09-28T11:01:42Z) - Unified Convergence Analysis for Adaptive Optimization with Moving Average Estimator [75.05106948314956]
We show that an increasingly large momentum parameter for the first-order moment is sufficient for adaptive scaling. We also give insights into increasing the momentum in a stagewise manner, in accordance with a stagewise decreasing step size.
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Quantum variational optimization: The role of entanglement and problem
hardness [0.0]
We study the role of entanglement, the structure of the variational quantum circuit, and the structure of the optimization problem.
Our numerical results indicate an advantage in adapting the distribution of entangling gates to the problem's topology.
We find evidence that applying conditional value at risk type cost functions improves the optimization, increasing the probability of overlap with the optimal solutions.
arXiv Detail & Related papers (2021-03-26T14:06:54Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict those parameters and then solving the problem using the predicted values. Recent work has shown that including the optimization problem as a layer in the training pipeline yields predictions that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.