NURD: Negative-Unlabeled Learning for Online Datacenter Straggler
Prediction
- URL: http://arxiv.org/abs/2203.08339v1
- Date: Wed, 16 Mar 2022 01:15:50 GMT
- Title: NURD: Negative-Unlabeled Learning for Online Datacenter Straggler
Prediction
- Authors: Yi Ding, Avinash Rao, Hyebin Song, Rebecca Willett, Henry Hoffmann
- Abstract summary: A job completes when all its tasks finish, so stragglers are a major impediment to performance.
This paper presents NURD, a novel Negative-Unlabeled learning approach with Reweighting and Distribution-compensation.
We evaluate NURD on two production traces from Google and Alibaba.
- Score: 17.346001585453415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Datacenters execute large computational jobs, which are composed of smaller
tasks. A job completes when all its tasks finish, so stragglers -- rare, yet
extremely slow tasks -- are a major impediment to datacenter performance.
Accurately predicting stragglers would enable proactive intervention, allowing
datacenter operators to mitigate stragglers before they delay a job. While much
prior work applies machine learning to predict computer system performance,
these approaches rely on complete labels -- i.e., sufficient examples of all
possible behaviors, including straggling and non-straggling -- or strong
assumptions about the underlying latency distributions -- e.g., whether
Gaussian or not. Within a running job, however, none of this information is
available until stragglers have revealed themselves, by which point they have
already delayed the job. To predict stragglers accurately and early without labeled
positive examples or assumptions on latency distributions, this paper presents
NURD, a novel Negative-Unlabeled learning approach with Reweighting and
Distribution-compensation that only trains on negative and unlabeled streaming
data. The key idea is to train a predictor using finished tasks of
non-stragglers to predict latency for unlabeled running tasks, and then
reweight each unlabeled task's prediction based on a weighting function of its
feature space. We evaluate NURD on two production traces from Google and
Alibaba, and find that, compared to the best baseline approach, NURD produces
2--11 percentage point increases in F1 score for straggler prediction and
4.7--8.8 percentage point improvements in job completion time.
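To make the key idea concrete, below is a minimal sketch of the negative-unlabeled train-then-reweight loop described above. The gradient-boosted regressor and the nearest-neighbor distance weighting are illustrative assumptions, not the paper's exact design; NURD defines its own weighting function over the task feature space.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import NearestNeighbors

def fit_negative_predictor(X_neg, latency_neg):
    """Train a latency predictor on finished non-straggler (negative)
    tasks only; no positive (straggler) labels are available."""
    model = GradientBoostingRegressor()
    model.fit(X_neg, latency_neg)
    return model

def straggler_scores(model, X_neg, X_unlabeled, k=10):
    """Predict latency for unlabeled running tasks, then reweight each
    prediction by a feature-space weighting function (assumed here to be
    distance to the negative training region, compensating for the shift
    between negative-only training data and running tasks)."""
    pred = model.predict(X_unlabeled)
    nn = NearestNeighbors(n_neighbors=k).fit(X_neg)
    dist, _ = nn.kneighbors(X_unlabeled)
    weight = 1.0 + dist.mean(axis=1) / (dist.mean() + 1e-9)
    return pred * weight  # higher score => more likely a straggler
```

Running tasks whose reweighted predicted latency exceeds a job-specific cutoff would then be flagged for proactive mitigation before they delay the job.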
Related papers
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Task-Aware Machine Unlearning and Its Application in Load Forecasting [4.00606516946677]
This paper introduces the concept of machine unlearning which is specifically designed to remove the influence of part of the dataset on an already trained forecaster.
A performance-aware algorithm is proposed by evaluating the sensitivity of local model parameter change using influence function and sample re-weighting.
We test the unlearning algorithms on linear, CNN, and Mixer-based load forecasters with a realistic load dataset.
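For intuition, here is a hedged sketch of the general influence-function technique the summary refers to, specialized to ridge regression. It is a generic one-step approximation to retraining without the removed samples, not the paper's task-aware algorithm.

```python
import numpy as np

def unlearn_ridge(X, y, theta, remove_idx, lam=1e-2):
    """One-step influence-function update approximating retraining a
    ridge regressor after deleting the samples in remove_idx."""
    n, d = X.shape
    H = X.T @ X / n + lam * np.eye(d)        # Hessian of the full loss
    Xr, yr = X[remove_idx], y[remove_idx]
    grad = Xr.T @ (Xr @ theta - yr) / n      # gradient mass of removed samples
    return theta + np.linalg.solve(H, grad)  # approx. retrained weights
```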
arXiv Detail & Related papers (2023-08-28T08:50:12Z)
- PePNet: A Periodicity-Perceived Workload Prediction Network Supporting Rare Occurrence of Heavy Workload [11.93843096959306]
The workload of cloud servers is highly variable, with occasional heavy workload bursts.
There are two categories of workload prediction methods: statistical methods and neural-network-based ones.
We propose PePNet to improve prediction accuracy overall, and for heavy workloads in particular.
arXiv Detail & Related papers (2023-07-11T07:56:27Z)
- DCLP: Neural Architecture Predictor with Curriculum Contrastive Learning [5.2319020651074215]
We propose a Curriculum-guided Contrastive Learning framework for neural Predictor (DCLP).
Our method simplifies the contrastive task by designing a novel curriculum that stabilizes the distribution of the unlabeled training data.
We experimentally demonstrate that DCLP has high accuracy and efficiency compared with existing predictors.
arXiv Detail & Related papers (2023-02-25T08:16:21Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desirable properties, admit a natural error measure, and enable algorithms with strong performance guarantees.
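As a hedged illustration of the learning-augmented setting (not the paper's exact algorithm): the natural way to consume predictions here is to sequence jobs by predicted processing time; robust variants interleave this with round-robin time sharing to bound the damage from bad predictions.

```python
# Sketch: shortest-predicted-processing-time ordering for minimizing
# total completion time. With perfect predictions this recovers the
# optimal SPT schedule; prediction error degrades it gracefully.
def total_completion_time(predicted, actual):
    order = sorted(range(len(actual)), key=lambda i: predicted[i])
    t = total = 0.0
    for i in order:        # run jobs in order of predicted length
        t += actual[i]     # clairvoyance is not needed to execute
        total += t         # accumulate completion times
    return total
```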
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
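A minimal sketch of the thresholding idea, assuming the confidence score is something like the maximum softmax probability (the paper also considers other scores):

```python
import numpy as np

def atc_estimate(conf_source, correct_source, conf_target):
    """conf_*: per-example confidence scores; correct_source: booleans."""
    acc = correct_source.mean()
    # Pick t so the share of source examples with confidence above t
    # matches the observed source accuracy.
    t = np.quantile(conf_source, 1.0 - acc)
    # Predicted target accuracy: share of target confidences above t.
    return (conf_target > t).mean()
```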
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage large-scale in-context learning to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- CaSP: Class-agnostic Semi-Supervised Pretraining for Detection and Segmentation [60.28924281991539]
We propose a novel Class-agnostic Semi-supervised Pretraining (CaSP) framework to achieve a more favorable task-specificity balance.
Using 3.6M unlabeled images, we achieve a remarkable performance gain of 4.7% over an ImageNet-pretrained baseline on object detection.
arXiv Detail & Related papers (2021-12-09T14:54:59Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to train models to perform inference on inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Towards optimally abstaining from prediction [22.937799541125607]
A common challenge across all areas of machine learning is that training data is not distributed like test data.
We consider a model where one may abstain from predicting, at a fixed cost.
Our work builds on a recent abstention algorithm of Goldwasser, Kalai, and Montasser (2020) for transductive binary classification.
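For intuition, here is the classical fixed-cost rule (not the transductive algorithm the entry cites): with calibrated class probabilities and abstention cost c, one abstains whenever the expected misclassification risk exceeds c.

```python
import numpy as np

def predict_or_abstain(probs, cost):
    """probs: (n, k) calibrated class probabilities; cost in (0, 1)."""
    top = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    # Chow's rule: abstain when expected error 1 - top exceeds the cost.
    return np.where(1.0 - top > cost, -1, labels)  # -1 marks abstention
```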
arXiv Detail & Related papers (2021-05-28T21:44:48Z)
- Exploring Bayesian Surprise to Prevent Overfitting and to Predict Model Performance in Non-Intrusive Load Monitoring [25.32973996508579]
Non-Intrusive Load Monitoring (NILM) is a field of research focused on segregating constituent electrical loads in a system based only on their aggregated signal.
We quantify the degree of surprise in the predictive distribution (termed postdictive surprise) and in the transitional probabilities (termed transitional surprise).
This work provides clear evidence that a point of diminishing returns of model performance with respect to dataset size exists.
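Bayesian surprise is commonly quantified as the KL divergence between posterior and prior beliefs; the sketch below shows that generic measure for discrete distributions. The paper's postdictive and transitional variants are analogous divergences whose exact definitions are in the paper.

```python
import numpy as np

def kl_surprise(posterior, prior, eps=1e-12):
    """Generic Bayesian surprise: KL(posterior || prior) for discrete
    distributions, with smoothing to avoid division by zero."""
    p = np.asarray(posterior, dtype=float) + eps
    q = np.asarray(prior, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))
```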
arXiv Detail & Related papers (2020-09-16T15:39:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.