A Survey on Design-space Dimensionality Reduction Methods for Shape Optimization
- URL: http://arxiv.org/abs/2405.13944v2
- Date: Tue, 08 Apr 2025 14:23:57 GMT
- Title: A Survey on Design-space Dimensionality Reduction Methods for Shape Optimization
- Authors: Andrea Serani, Matteo Diez
- Abstract summary: This survey reviews design-space dimensionality reduction techniques tailored for shape optimization, bridging traditional methods and cutting-edge technologies. Studies show how these techniques significantly mitigate the curse of dimensionality, streamline computational processes, and refine the design exploration and optimization of complex functional surfaces.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapidly evolving field of engineering design of functional surfaces necessitates sophisticated tools to manage the inherent complexity of high-dimensional design spaces. This survey paper offers a scoping review, i.e., a literature mapping synthesis borrowed from clinical medicine, delving into the field of design-space dimensionality reduction techniques tailored for shape optimization, bridging traditional methods and cutting-edge technologies. Dissecting the spectrum of these techniques, from classical linear approaches like principal component analysis to more nuanced nonlinear methods such as autoencoders, the discussion extends to innovative physics-informed methods that integrate physical data into the dimensionality reduction process, enhancing the physical relevance and effectiveness of reduced design spaces. By integrating these methods into optimization frameworks, it is shown how they significantly mitigate the curse of dimensionality, streamline computational processes, and refine the design exploration and optimization of complex functional surfaces. The survey provides a classification of methods and highlights the transformative impact of these techniques in simplifying design challenges, thereby fostering more efficient and effective engineering solutions.
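As a concrete illustration of the linear techniques surveyed above, the sketch below shows PCA-based design-space dimensionality reduction with NumPy. The design matrix, sample counts, variance threshold, and all names are illustrative assumptions, not the survey's reference implementation; nonlinear methods such as autoencoders would replace the linear basis with a learned decoder.

```python
import numpy as np

# Illustrative only: rows of D are sampled design vectors (e.g., shape
# modifications drawn from a design of experiments); columns are the
# original design variables. Sizes and names are assumptions.
rng = np.random.default_rng(0)
D = rng.normal(size=(200, 50))          # 200 samples, 50 design variables

D_mean = D.mean(axis=0)
U, s, Vt = np.linalg.svd(D - D_mean, full_matrices=False)

# Retain the smallest number of components explaining ~95% of the variance.
var_ratio = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95) + 1)
basis = Vt[:k]                          # (k, 50) reduced-order basis

def expand(latent):
    # Map a low-dimensional latent vector back to the full design space,
    # then hand the reconstructed design to the shape-optimization loop.
    return D_mean + latent @ basis

x_full = expand(rng.normal(size=k))     # a candidate design in full space
print(k, x_full.shape)
```

In an optimization loop, the optimizer would then search over the k latent coordinates instead of the original design variables, which is the dimensionality-reduction effect the survey discusses.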
Related papers
- Efficient Design of Compliant Mechanisms Using Multi-Objective Optimization [50.24983453990065]
We address the synthesis of a compliant cross-hinge mechanism capable of large angular strokes.
We formulate a multi-objective optimization problem based on kinetostatic performance measures.
arXiv Detail & Related papers (2025-04-23T06:29:10Z)
- Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations [50.010924231754856]
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence.
To avoid the cost of fully fine-tuning such models, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and are becoming a growing research focus.
We propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties.
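For context on the matrix-based PEFT baseline that this entry generalizes, here is a minimal NumPy sketch of a LoRA-style low-rank update; the rank, scaling factor, and sizes are illustrative assumptions, and the snippet does not reproduce the paper's tensor or Lie-group formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8     # assumed sizes; r << d_in

W = rng.normal(size=(d_out, d_in))       # frozen pre-trained weight
B = np.zeros((d_out, r))                 # trainable, initialized to zero
A = rng.normal(size=(r, d_in)) * 0.01    # trainable

def forward(x):
    # Effective weight W + (alpha / r) * B @ A; only A and B are updated
    # during fine-tuning, so the trainable count is r * (d_in + d_out).
    return x @ (W + (alpha / r) * (B @ A)).T

y = forward(rng.normal(size=(2, d_in)))
print(y.shape)                           # (2, 64)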
arXiv Detail & Related papers (2025-04-01T14:36:45Z)
- Accelerated Gradient-based Design Optimization Via Differentiable Physics-Informed Neural Operator: A Composites Autoclave Processing Case Study [0.0]
We introduce a novel Physics-Informed DeepONet (PIDON) architecture to effectively model the nonlinear behavior of complex engineering systems.
We demonstrate the effectiveness of this framework in the optimization of aerospace-grade composites curing processes, achieving a 3x speedup.
The proposed model has the potential to be used as a scalable and efficient optimization tool for broader applications in advanced engineering and digital twin systems.
arXiv Detail & Related papers (2025-02-17T07:11:46Z)
- A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation.
However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency.
This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z)
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The predict-then-optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- An Uncertainty-aware Deep Learning Framework-based Robust Design Optimization of Metamaterial Units [14.660705962826718]
We propose a novel uncertainty-aware deep learning framework-based robust design approach for the design of metamaterial units.
We demonstrate that the proposed design approach is capable of designing high-performance metamaterial units with high reliability.
arXiv Detail & Related papers (2024-07-19T22:21:27Z)
- Deep Optimal Experimental Design for Parameter Estimation Problems [4.097001355074171]
We investigate a new experimental design methodology that uses deep learning.
We show that training a network as a likelihood-free estimator can be used to significantly simplify the design process.
Deep design improves the quality of the recovery process for parameter estimation problems.
arXiv Detail & Related papers (2024-06-20T05:13:33Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Performance Comparison of Design Optimization and Deep Learning-based Inverse Design [15.620304857903069]
This paper compares the performance of traditional design optimization methods with deep learning-based inverse design methods.
It provides guidelines that can be taken into account for the future utilization of deep learning-based inverse design.
arXiv Detail & Related papers (2023-08-23T06:04:54Z)
- Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation [17.164961143132473]
We introduce a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods.
Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data.
Our results demonstrate that Trajectory Alignment (TA) outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost.
arXiv Detail & Related papers (2023-05-29T09:16:07Z)
- Diffusing the Optimal Topology: A Generative Optimization Approach [6.375982344506753]
Topology optimization seeks to find the best design that satisfies a set of constraints while maximizing system performance.
Traditional iterative optimization methods like SIMP can be computationally expensive and get stuck in local minima.
We propose a Generative Optimization method that integrates classic optimization like SIMP as a refining mechanism for the topology generated by a deep generative model.
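For readers unfamiliar with SIMP (Solid Isotropic Material with Penalization), the snippet below sketches its standard material interpolation with typical textbook values; it is illustrative only and not taken from the paper above.

```python
import numpy as np

def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP interpolation: E(rho) = Emin + rho**p * (E0 - Emin).

    Intermediate densities are penalized (p > 1), pushing the optimizer
    toward near 0/1 material layouts. Values are the usual textbook
    defaults, used here only for illustration.
    """
    rho = np.clip(rho, 0.0, 1.0)
    return Emin + rho**p * (E0 - Emin)

print(simp_modulus(np.array([0.0, 0.5, 1.0])))
```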
arXiv Detail & Related papers (2023-03-17T03:47:10Z)
- Inverse design of photonic devices with strict foundry fabrication constraints [55.41644538483948]
We introduce a new method for inverse design of nanophotonic devices which guarantees that designs satisfy strict length scale constraints.
We demonstrate the performance and reliability of our method by designing several common integrated photonic components.
arXiv Detail & Related papers (2022-01-31T02:27:25Z)
- Physical Gradients for Deep Learning [101.36788327318669]
We find that state-of-the-art training techniques are not well-suited to many problems that involve physical processes.
We propose a novel hybrid training approach that combines higher-order optimization methods with machine learning techniques.
arXiv Detail & Related papers (2021-09-30T12:14:31Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization [78.19085445065845]
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
COMs are simple to implement and outperform a number of existing methods on a wide range of MBO problems.
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- Principal Component Analysis Applied to Gradient Fields in Band Gap Optimization Problems for Metamaterials [0.7699714865575189]
This article describes the application of a related unsupervised machine learning technique, namely, principal component analysis, to approximate the gradient of the objective function of a band gap optimization problem for an acoustic metamaterial.
Numerical results show the effectiveness of the proposed method.
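A heavily simplified sketch of the general idea, assuming a set of previously sampled gradient fields is available: PCA extracts a few dominant gradient modes, and a new gradient is approximated by projection onto that basis. Sizes and names are assumptions, and the snippet does not reproduce the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 300))            # 40 sampled gradient fields, 300 DOFs (assumed)

G_mean = G.mean(axis=0)
U, s, Vt = np.linalg.svd(G - G_mean, full_matrices=False)
modes = Vt[:5]                            # keep the 5 dominant gradient modes

def approx_gradient(g_new):
    # Project a cheaply estimated or noisy gradient onto the low-dimensional
    # PCA basis and reconstruct a smoothed approximation of it.
    return G_mean + (g_new - G_mean) @ modes.T @ modes

g_hat = approx_gradient(rng.normal(size=300))
print(g_hat.shape)
```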
arXiv Detail & Related papers (2021-04-04T11:13:37Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that lead to better downstream decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.