Visualization and Optimization Techniques for High Dimensional Parameter
Spaces
- URL: http://arxiv.org/abs/2204.13812v1
- Date: Thu, 28 Apr 2022 23:01:15 GMT
- Title: Visualization and Optimization Techniques for High Dimensional Parameter
Spaces
- Authors: Anjul Tyagi
- Abstract summary: We propose a novel approach to create an auto-tuning framework for storage systems optimization combining both direct optimization techniques and visual analytics research.
Our system was developed in tight collaboration with a group of systems performance researchers and its final effectiveness was evaluated with expert interviews, a comparative user study, and two case studies.
- Score: 4.111899441919165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High dimensional parameter space optimization is crucial in many
applications. The parameters affecting system performance can be both
numerical and categorical. Existing black-box optimization and visual
analytics techniques handle numerical parameters well, but analyzing
categorical variables in the context of numerical variables is not well
studied. Hence, we propose a novel approach to create an auto-tuning framework
for storage systems optimization that combines direct optimization techniques
with visual analytics research. While the optimization algorithm is the core
of the system, visual analytics lets an external agent (a domain expert)
provide crucial hints that narrow down the large search space for the
optimization engine. As an initial step towards creating an auto-tuning engine
for storage systems optimization, we created the Interactive Configuration
Explorer \textit{ICE}, which directly addresses analysts' need to learn how
the dependent numerical variable is affected by the parameter settings under
multiple optimization objectives. No information is lost, as ICE shows the
complete distribution and statistics of the dependent variable in the context
of each categorical variable. Analysts can interactively filter the variables
to optimize for goals such as maximum performance or low variance. Our system
was developed in tight collaboration with a group of systems performance
researchers, and its final effectiveness was evaluated with expert interviews,
a comparative user study, and two case studies. We also discuss our research
plan for creating an efficient auto-tuning framework that combines black-box
optimization and visual analytics for storage systems performance
optimization.
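The core interaction the abstract describes is concrete enough to sketch: for each level of a categorical parameter, an ICE-style analysis keeps the full distribution and summary statistics of a dependent numerical variable and lets the analyst filter levels before handing the narrowed space to an optimizer. The following Python sketch is a minimal illustration of that loop on made-up benchmark data; the field names (filesystem, journal_mode, throughput_mb_s), the data, and the filtering thresholds are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of an ICE-style analysis loop (illustrative, not the
# authors' implementation): group a dependent numerical variable by each
# level of a categorical parameter, keep the full distribution plus summary
# statistics, then filter levels toward a chosen goal.
from statistics import mean, stdev

# Each record: categorical parameter settings plus a measured objective
# (values are made up for the example).
runs = [
    {"filesystem": "ext4",  "journal_mode": "ordered",   "throughput_mb_s": 410.0},
    {"filesystem": "ext4",  "journal_mode": "writeback", "throughput_mb_s": 455.0},
    {"filesystem": "xfs",   "journal_mode": "ordered",   "throughput_mb_s": 502.0},
    {"filesystem": "xfs",   "journal_mode": "ordered",   "throughput_mb_s": 489.0},
    {"filesystem": "btrfs", "journal_mode": "ordered",   "throughput_mb_s": 350.0},
    {"filesystem": "btrfs", "journal_mode": "writeback", "throughput_mb_s": 365.0},
]

def summarize(records, categorical, objective):
    """Group the objective's values by each level of a categorical parameter."""
    groups = {}
    for r in records:
        groups.setdefault(r[categorical], []).append(r[objective])
    return {
        level: {
            "n": len(vals),
            "mean": mean(vals),
            "stdev": stdev(vals) if len(vals) > 1 else 0.0,
            "values": sorted(vals),  # the full distribution is kept, not just a summary
        }
        for level, vals in groups.items()
    }

# Analyst-style interaction: inspect the per-level distributions, then filter.
stats = summarize(runs, "filesystem", "throughput_mb_s")
for level, s in stats.items():
    print(f"{level:6s} n={s['n']} mean={s['mean']:.1f} stdev={s['stdev']:.1f}")

# Narrow the search space for a downstream optimizer: keep only levels with
# high mean throughput and low variance (thresholds are arbitrary here).
kept = [lvl for lvl, s in stats.items() if s["mean"] > 400 and s["stdev"] < 20]
print("candidate filesystems passed to the optimizer:", kept)
```

In the full system described in the abstract, the retained levels would seed the black-box optimizer's search space rather than being treated as a final answer.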
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of large language model (LLM)-based prompt optimizers.
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives; the standard OWA form is recalled below.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
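For reference, the Ordered Weighted Averaging objective named in the entry above has a standard general form (a textbook definition, not a claim about the paper's specific formulation): outcomes are sorted and combined with fixed weights, and fairness-oriented weight choices emphasize the worst-off outcomes.

```latex
% Standard OWA aggregation of outcomes y_1, ..., y_n (general definition):
% the permutation sigma sorts the outcomes in non-increasing order.
\[
  \mathrm{OWA}_{w}(y) \;=\; \sum_{i=1}^{n} w_i \, y_{\sigma(i)},
  \qquad
  y_{\sigma(1)} \ge y_{\sigma(2)} \ge \dots \ge y_{\sigma(n)},
  \qquad
  w_i \ge 0,\ \ \sum_{i=1}^{n} w_i = 1 .
\]
% The sorting step makes OWA piecewise-linear and nondifferentiable in y,
% which is the difficulty the paper's end-to-end approach targets.
```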
- CURE: Simulation-Augmented Auto-Tuning in Robotics [15.943773140929856]
This paper proposes CURE -- a method that identifies causally relevant configuration options.
CURE abstracts the causal relationships between various configuration options and robot performance objectives.
We demonstrate the effectiveness and transferability of CURE by conducting experiments in both physical robots and simulation.
arXiv Detail & Related papers (2024-02-08T04:27:14Z)
- A Survey on Multi-Objective based Parameter Optimization for Deep Learning [1.3223682837381137]
We focus on exploring the effectiveness of multi-objective optimization strategies for parameter optimization in conjunction with deep neural networks.
The two methods are combined to provide valuable insights into the generation of predictions and analysis in multiple applications.
arXiv Detail & Related papers (2023-05-17T07:48:54Z)
- Agent-based Collaborative Random Search for Hyper-parameter Tuning and Global Function Optimization [0.0]
This paper proposes an agent-based collaborative technique for finding near-optimal values for any arbitrary set of hyper-parameters in a machine learning model.
The behavior of the presented model, specifically against the changes in its design parameters, is investigated in both machine learning and global function optimization applications.
arXiv Detail & Related papers (2023-03-03T21:10:17Z)
- Scalable Bayesian optimization with high-dimensional outputs using randomized prior networks [3.0468934705223774]
We propose a deep learning framework for BO and sequential decision making based on bootstrapped ensembles of neural architectures with randomized priors.
We show that the proposed framework can approximate functional relationships between design variables and quantities of interest, even in cases where the latter take values in high-dimensional vector spaces or even infinite-dimensional function spaces.
We test the proposed framework against state-of-the-art methods for BO and demonstrate superior performance across several challenging tasks with high-dimensional outputs.
arXiv Detail & Related papers (2023-02-14T18:55:21Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation (a minimal unrolling sketch follows below).
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
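The unrolling idea in the entry above can be shown on a toy problem. The sketch below is a minimal illustration rather than the paper's folded-optimization method: it unrolls T gradient steps of an inner quadratic solver, backpropagates through the iterates by hand, and checks the result against the closed-form derivative. The inner objective and all constants are assumptions chosen for clarity.

```python
# Minimal sketch of backpropagation through an unrolled iterative solver
# (illustrative toy example, not the paper's folded-optimization method).
#
# Inner problem:  y*(x) = argmin_y 0.5 * (y - x)^2, solved by T gradient steps
#                 y_{k+1} = y_k - eta * (y_k - x), starting from y_0 = 0.
# Outer loss:     L(x) = 0.5 * (y_T(x) - target)^2.
# We compute dL/dx by reverse-mode differentiation through the unrolled steps.

eta, T, x, target = 0.3, 10, 2.0, 1.5

# Forward pass: record the unrolled iterates.
ys = [0.0]
for _ in range(T):
    ys.append(ys[-1] - eta * (ys[-1] - x))
y_T = ys[-1]

# Backward pass through the unrolled update y_{k+1} = (1 - eta) * y_k + eta * x:
#   d y_{k+1} / d y_k = (1 - eta),   d y_{k+1} / d x = eta.
grad_y = y_T - target          # dL/dy_T
grad_x = 0.0
for _ in range(T):
    grad_x += grad_y * eta     # contribution of x at this unrolled step
    grad_y *= (1.0 - eta)      # push the adjoint one step back

# Closed-form check: y_T = x * (1 - (1 - eta)^T), so
#   dL/dx = (y_T - target) * (1 - (1 - eta)^T).
analytic = (y_T - target) * (1.0 - (1.0 - eta) ** T)
print(f"unrolled grad = {grad_x:.6f}, analytic grad = {analytic:.6f}")
```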
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adapt Monte Carlo tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Variable Functioning and Its Application to Large Scale Steel Frame Design Optimization [15.86197261674868]
A concept-based approach called variable functioning ($Fx$) is introduced to reduce the number of optimization variables and narrow down the search space.
By using a problem structure analysis technique and engineering expert knowledge, the $Fx$ method is used to enhance the steel frame design optimization process.
arXiv Detail & Related papers (2022-05-15T12:43:25Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization arises in many machine learning problems.
We propose stocBiO, a novel stochastic bilevel optimizer built on a sample-efficient hypergradient estimator (the standard bilevel problem form is recalled below).
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
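For context, the bilevel problem referenced in the last entry is usually written in the standard form below (a textbook formulation rather than the paper's specific setting), where the outer objective is evaluated at the solution of an inner minimization.

```latex
% Generic bilevel optimization problem (standard form):
% the outer variable x is chosen to minimize f at the inner solution y*(x).
\[
  \min_{x \in \mathbb{R}^{p}} \; \Phi(x) := f\bigl(x,\, y^{*}(x)\bigr)
  \quad \text{where} \quad
  y^{*}(x) \in \operatorname*{arg\,min}_{y \in \mathbb{R}^{q}} \, g(x, y).
\]
% Stochastic methods such as the stocBiO optimizer mentioned above estimate
% the gradient of Phi (the hypergradient) from samples of f and g.
```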