Interaction-Aware Sensitivity Analysis for Aerodynamic Optimization Results using Information Theory
- URL: http://arxiv.org/abs/2112.05609v1
- Date: Fri, 10 Dec 2021 15:41:56 GMT
- Title: Interaction-Aware Sensitivity Analysis for Aerodynamic Optimization Results using Information Theory
- Authors: Patricia Wollstadt and Sebastian Schmitt
- Abstract summary: An important issue during an engineering design process is to develop an understanding of which design parameters have the most influence on performance.
We propose to use recently introduced information-theoretic methods and estimation algorithms to find the most influential input parameters in optimization results.
- Score: 0.07614628596146601
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: An important issue during an engineering design process is to develop an understanding of which design parameters have the most influence on performance. Especially in the context of optimization approaches, this knowledge is crucial in order to realize an efficient design process and achieve high-performing results. Information theory provides powerful tools to investigate these relationships because its measures are model-free and thus also capture non-linear relationships, while requiring only minimal assumptions about the input data. We therefore propose to use recently introduced information-theoretic methods and estimation algorithms to find the most influential input parameters in optimization results. The proposed methods are in particular able to account for interactions between parameters, which are often neglected but may lead to redundant or synergistic contributions of multiple parameters. We demonstrate the application of these methods on optimization data from aerospace engineering: first, we identify the most relevant optimization parameters using a recently introduced information-theoretic feature-selection algorithm that accounts for interactions between parameters; second, we use the novel partial information decomposition (PID) framework, which allows quantifying redundant and synergistic contributions of selected parameters with respect to the optimization outcome, to identify parameter interactions. We thus demonstrate the power of novel information-theoretic approaches in identifying relevant parameters in optimization runs and highlight how these methods avoid the selection of redundant parameters while detecting interactions that result in synergistic contributions of multiple parameters.
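To make the redundancy/synergy distinction concrete, the following sketch estimates mutual information with a simple histogram plug-in estimator and uses the sign of the interaction information, I(X1,X2;Y) - I(X1;Y) - I(X2;Y), as a coarse proxy: positive values indicate synergy-dominated, negative values redundancy-dominated contributions. This is a minimal illustration on toy data with hypothetical variable names, assuming discretized plug-in estimation; the paper itself relies on dedicated estimators, an interaction-aware feature-selection algorithm, and the full PID decomposition rather than this single-number proxy.

```python
# Illustrative sketch only: histogram plug-in estimators on toy "design
# parameter" data; not the estimators or the PID framework used in the paper.
import numpy as np

def entropy(*variables, bins=8):
    """Plug-in joint entropy (in bits) from a multidimensional histogram."""
    hist, _ = np.histogramdd(np.column_stack(variables), bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_info(x, y, bins=8):
    """I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(x, bins=bins) + entropy(y, bins=bins) - entropy(x, y, bins=bins)

def joint_mutual_info(x1, x2, y, bins=8):
    """I(X1,X2;Y) = H(X1,X2) + H(Y) - H(X1,X2,Y)."""
    return (entropy(x1, x2, bins=bins) + entropy(y, bins=bins)
            - entropy(x1, x2, y, bins=bins))

# Toy optimization data: two design parameters and a performance outcome that
# depends on their interaction (hypothetical, purely for illustration).
rng = np.random.default_rng(0)
x1 = rng.normal(size=5000)
x2 = rng.normal(size=5000)
outcome = x1 * x2 + 0.1 * rng.normal(size=5000)

i1 = mutual_info(x1, outcome)
i2 = mutual_info(x2, outcome)
i12 = joint_mutual_info(x1, x2, outcome)

# Interaction information as a coarse redundancy/synergy proxy:
#   > 0 -> synergy-dominated, < 0 -> redundancy-dominated contributions.
print(f"I(X1;Y)={i1:.3f}  I(X2;Y)={i2:.3f}  I(X1,X2;Y)={i12:.3f}")
print(f"interaction information = {i12 - i1 - i2:.3f}")
```

On this toy data the individual terms are small while the joint term is large, so the interaction information is clearly positive: the two parameters contribute synergistically, which is exactly the kind of relationship that marginal sensitivity measures miss and that the PID analysis in the paper quantifies properly (including a separation into redundant, unique, and synergistic parts that a single scalar cannot provide).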
Related papers
- End-to-End Optimal Detector Design with Mutual Information Surrogates [1.7042756021131187]
We introduce a novel approach for end-to-end black-box optimization of high energy physics detectors using local deep learning (DL) surrogates.
In addition to a standard reconstruction-based metric commonly used in the field, we investigate the information-theoretic metric of mutual information.
Our findings reveal three key insights: (1) end-to-end black-box optimization using local surrogates is a practical and compelling approach for detector design; (2) mutual information-based optimization yields design choices that closely match those from state-of-the-art physics-informed methods; and (3) information-theoretic methods provide a …
arXiv Detail & Related papers (2025-03-18T15:23:03Z) - Scrambling for precision: optimizing multiparameter qubit estimation in the face of sloppiness and incompatibility [0.0]
We explore the connection between sloppiness and incompatibility by introducing an adjustable scrambling operation for parameter encoding.
Through analytical optimization, we identify strategies to mitigate these constraints and enhance estimation efficiency.
arXiv Detail & Related papers (2025-03-11T09:57:51Z) - Optimize Incompatible Parameters through Compatibility-aware Knowledge Integration [104.52015641099828]
Existing research excels at removing such incompatible parameters or at merging the outputs of multiple different pretrained models.
We propose Compatibility-aware Knowledge Integration (CKI), which consists of Deep Assessment and Deep Splicing.
The integrated model can be used directly for inference or for further fine-tuning.
arXiv Detail & Related papers (2025-01-10T01:42:43Z) - Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of optimizers and parameterizations.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z) - Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty (a minimal OWA sketch is given after this list).
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - A Unified Gaussian Process for Branching and Nested Hyperparameter
Optimization [19.351804144005744]
In deep learning, tuning parameters with conditional dependencies is common in practice.
The new GP model accounts for the dependence structure among input variables through a new kernel function.
High prediction accuracy and better optimization efficiency are observed in a series of synthetic simulations and real data applications of neural networks.
arXiv Detail & Related papers (2024-01-19T21:11:32Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to fine-tune only a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded by our theory, it still remains an open problem of how to choose the tunable parameters.
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - Visualization and Optimization Techniques for High Dimensional Parameter
Spaces [4.111899441919165]
We propose a novel approach to create an auto-tuning framework for storage systems optimization combining both direct optimization techniques and visual analytics research.
Our system was developed in tight collaboration with a group of systems performance researchers and its final effectiveness was evaluated with expert interviews, a comparative user study, and two case studies.
arXiv Detail & Related papers (2022-04-28T23:01:15Z) - On the Parameter Combinations That Matter and on Those That do Not [0.0]
We present a data-driven approach to characterizing nonidentifiability of a model's parameters.
By employing Diffusion Maps and their extensions, we discover the minimal combinations of parameters required to characterize the dynamic output behavior.
arXiv Detail & Related papers (2021-10-13T13:46:23Z) - Towards a Unified View of Parameter-Efficient Transfer Learning [108.94786930869473]
Fine-tuning large pre-trained language models on downstream tasks has become the de-facto learning paradigm in NLP.
Recent work has proposed a variety of parameter-efficient transfer learning methods that only fine-tune a small number of (extra) parameters to attain strong performance.
We break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them.
arXiv Detail & Related papers (2021-10-08T20:22:26Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in a complex training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Objective-Sensitive Principal Component Analysis for High-Dimensional
Inverse Problems [0.0]
We present a novel approach for adaptive, differentiable parameterization of large-scale random fields.
The developed technique is based on principal component analysis (PCA) but modifies a purely data-driven basis of principal components considering objective function behavior.
Three algorithms for optimal parameter decomposition are presented and applied to a 2D synthetic history-matching objective.
arXiv Detail & Related papers (2020-06-02T18:51:17Z)
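For readers unfamiliar with Ordered Weighted Averaging, the short sketch below (referenced from the fair multiobjective optimization entry above) shows the OWA aggregation itself: outcomes are sorted and combined with nonincreasing weights, so the worst-off objective carries the most weight and the sorting step makes the objective nondifferentiable. Function name, weights, and values are hypothetical; the integration of such objectives into end-to-end Predict-Then-Optimize training, which is that paper's contribution, is not reproduced here.

```python
# Minimal OWA sketch (hypothetical names and weights), illustrating the kind of
# fair, nondifferentiable objective referenced in the entry above.
import numpy as np

def owa(utilities: np.ndarray, weights: np.ndarray) -> float:
    """OWA_w(u) = sum_i w_i * u_(i), with u_(i) the utilities sorted in
    ascending order; nonincreasing weights emphasize the worst outcomes."""
    return float(np.dot(weights, np.sort(utilities)))

weights = np.array([0.6, 0.3, 0.1])              # nonincreasing, sums to 1
print(owa(np.array([2.0, 5.0, 9.0]), weights))   # 0.6*2 + 0.3*5 + 0.1*9 = 3.6
print(owa(np.array([5.0, 5.0, 5.0]), weights))   # 5.0: balanced profile wins
```

Note that the balanced profile scores higher under OWA even though its plain mean (5.0) is below that of the uneven profile (about 5.33), which is what makes the operator attractive for fair optimization.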