CAMEO: A Causal Transfer Learning Approach for Performance Optimization
of Configurable Computer Systems
- URL: http://arxiv.org/abs/2306.07888v2
- Date: Tue, 3 Oct 2023 12:27:53 GMT
- Title: CAMEO: A Causal Transfer Learning Approach for Performance Optimization
of Configurable Computer Systems
- Authors: Md Shahriar Iqbal, Ziyuan Zhong, Iftakhar Ahmad, Baishakhi Ray, Pooyan
Jamshidi
- Abstract summary: We propose CAMEO, a method that identifies invariant causal predictors under environmental changes.
We demonstrate significant performance improvements over state-of-the-art optimization methods in MLPerf deep learning systems, a video analytics pipeline, and a database system.
- Score: 16.75106122540052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern computer systems are highly configurable, with hundreds of
configuration options that interact, resulting in an enormous configuration
space. As a result, optimizing performance goals (e.g., latency) in such
systems is challenging due to frequent uncertainties in their environments
(e.g., workload fluctuations). Recently, transfer learning has been applied to
address this problem by reusing knowledge from configuration measurements from
the source environments, where it is cheaper to intervene than the target
environment, where any intervention is costly or impossible. Recent empirical
research showed that statistical models can perform poorly when the deployment
environment changes because the behavior of certain variables in the models can
change dramatically from source to target. To address this issue, we propose
CAMEO, a method that identifies invariant causal predictors under environmental
changes, allowing the optimization process to operate in a reduced search
space, leading to faster optimization of system performance. We demonstrate
significant performance improvements over state-of-the-art optimization methods
in MLPerf deep learning systems, a video analytics pipeline, and a database
system.
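To make the high-level idea concrete, the sketch below (Python, not the authors' code) mimics the two stages the abstract describes: filter the configuration options down to those whose effect on the objective stays stable across cheap source environments, then search only over those options in the costly target environment. The invariance test, the random-search optimizer, and all names (find_invariant_options, optimize_reduced_space) are simplifying assumptions; CAMEO's actual causal analysis and optimizer are more involved.

```python
import numpy as np
from sklearn.linear_model import LinearRegression


def find_invariant_options(X_envs, y_envs, max_rel_spread=0.5):
    """Keep only the options whose marginal effect on the objective is stable
    across source environments -- a crude proxy for the invariant causal
    predictors that CAMEO identifies."""
    n_options = X_envs[0].shape[1]
    invariant = []
    for j in range(n_options):
        effects = []
        for X, y in zip(X_envs, y_envs):
            effects.append(LinearRegression().fit(X[:, [j]], y).coef_[0])
        rel_spread = np.std(effects) / (np.abs(np.mean(effects)) + 1e-9)
        if rel_spread < max_rel_spread:   # heuristic threshold (assumption)
            invariant.append(j)
    return invariant


def optimize_reduced_space(measure, defaults, invariant_idx, bounds,
                           budget=50, seed=0):
    """Random search over the invariant options only; every other option is
    pinned to its default (a stand-in for the Bayesian optimizer one would
    normally run over the reduced space)."""
    rng = np.random.default_rng(seed)
    best_cfg, best_y = None, np.inf
    for _ in range(budget):
        cfg = defaults.copy()
        for j in invariant_idx:
            lo, hi = bounds[j]
            cfg[j] = rng.uniform(lo, hi)
        y = measure(cfg)          # one costly measurement in the target system
        if y < best_y:
            best_cfg, best_y = cfg, y
    return best_cfg, best_y


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Two synthetic source environments: option 0 affects latency the same
    # way in both, while option 1's effect flips sign between environments.
    X_envs, y_envs = [], []
    for shift in (+2.0, -2.0):
        X = rng.uniform(0.0, 1.0, size=(200, 2))
        y = 3.0 * X[:, 0] + shift * X[:, 1] + rng.normal(0.0, 0.1, 200)
        X_envs.append(X)
        y_envs.append(y)

    inv = find_invariant_options(X_envs, y_envs)
    print("invariant options:", inv)          # expected: [0]

    # Hypothetical target environment: measurements are expensive here.
    measure_latency = lambda cfg: 3.0 * cfg[0] + 1.0 * cfg[1]
    defaults = np.array([0.5, 0.5])
    bounds = {0: (0.0, 1.0), 1: (0.0, 1.0)}
    print(optimize_reduced_space(measure_latency, defaults, inv, bounds, budget=30))
```

In a realistic setting the reduced space would be explored with a sample-efficient optimizer (e.g., Bayesian optimization) and the invariance check would use a proper causal test rather than the coefficient-spread heuristic above.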
Related papers
- Can LLMs Configure Software Tools [0.76146285961466]
In software engineering, the meticulous configuration of software tools is crucial in ensuring optimal performance within intricate systems.
In this study, we embark on an exploration of leveraging Large-Language Models (LLMs) to streamline the software configuration process.
Our work presents a novel approach that employs LLMs, such as ChatGPT, to identify starting conditions and narrow down the search space, improving configuration efficiency.
arXiv Detail & Related papers (2023-12-11T05:03:02Z) - Solving Expensive Optimization Problems in Dynamic Environments with Meta-learning [32.41025515064283]
We propose a simple yet effective meta-learning-based optimization framework for solving expensive dynamic optimization problems.
This framework is flexible, allowing any off-the-shelf continuously differentiable surrogate model to be used in a plug-in manner.
Experiments demonstrate the effectiveness of the proposed algorithm framework compared to several state-of-the-art algorithms.
arXiv Detail & Related papers (2023-10-19T07:42:51Z) - Constrained Environment Optimization for Prioritized Multi-Agent
Navigation [11.473177123332281]
This paper aims to consider the environment as a decision variable in a system-level optimization problem.
We propose novel problems of unprioritized and prioritized environment optimization.
We show, through formal proofs, under which conditions the environment can change while guaranteeing completeness.
arXiv Detail & Related papers (2023-05-18T18:55:06Z) - A Data-Driven Evolutionary Transfer Optimization for Expensive Problems
in Dynamic Environments [9.098403098464704]
Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems.
This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems.
Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-11-05T11:19:50Z) - Environment Optimization for Multi-Agent Navigation [11.473177123332281]
The goal of this paper is to consider the environment as a decision variable in a system-level optimization problem.
We show, through formal proofs, under which conditions the environment can change while guaranteeing completeness.
In order to accommodate a broad range of implementation scenarios, we include both online and offline optimization, and both discrete and continuous environment representations.
arXiv Detail & Related papers (2022-09-22T19:22:16Z) - Few-shot Quality-Diversity Optimization [50.337225556491774]
Quality-Diversity (QD) optimization has been shown to be an effective tool for dealing with deceptive minima and sparse rewards in Reinforcement Learning.
We show that, given examples from a task distribution, information about the paths taken by optimization in parameter space can be leveraged to build a prior population, which when used to initialize QD methods in unseen environments, allows for few-shot adaptation.
Experiments carried in both sparse and dense reward settings using robotic manipulation and navigation benchmarks show that it considerably reduces the number of generations that are required for QD optimization in these environments.
arXiv Detail & Related papers (2021-09-14T17:12:20Z) - Learning to Continuously Optimize Wireless Resource in a Dynamic
Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z) - Learning to Continuously Optimize Wireless Resource In Episodically
Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher-quality decisions.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Bilevel Optimization for Differentially Private Optimization in Energy
Systems [53.806512366696275]
This paper studies how to apply differential privacy to constrained optimization problems whose inputs are sensitive.
The paper shows that, under a natural assumption, a bilevel model can be solved efficiently for large-scale nonlinear optimization problems.
arXiv Detail & Related papers (2020-01-26T20:15:28Z) - Optimizing Wireless Systems Using Unsupervised and
Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)