Revisiting the Performance of Serverless Computing: An Analysis of
Variance
- URL: http://arxiv.org/abs/2305.04309v1
- Date: Sun, 7 May 2023 15:32:16 GMT
- Title: Revisiting the Performance of Serverless Computing: An Analysis of
Variance
- Authors: Jinfeng Wen, Zhenpeng Chen, Federica Sarro, Xuanzhe Liu
- Abstract summary: Serverless computing allows software engineers to develop applications at the granularity of functions (called serverless functions).
Multiple identical runs of the same serverless functions can show different performance due to the highly dynamic underlying environment where these functions are executed.
We investigate 59 related research papers published in top-tier conferences, and observe that only 40.68% of them use multiple runs to quantify the variance of serverless function performance.
- Score: 14.872563076658563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Serverless computing is an emerging cloud computing paradigm, which allows
software engineers to develop applications at the granularity of function
(called serverless functions). However, multiple identical runs of the same
serverless functions can show different performance (i.e., response latencies)
due to the highly dynamic underlying environment where these functions are
executed. We conduct the first work to study serverless function performance to
raise awareness of this variance among researchers. We investigate 59 related
research papers published in top-tier conferences, and observe that only 40.68%
of them use multiple runs to quantify the variance of serverless function
performance. Then we extract 65 serverless functions used in these papers and
find that the performance of these serverless functions can differ by up to
338.76% (44.15% on average), indicating a large magnitude of the variance.
Furthermore, we find that 61.54% of these functions can have unreliable
performance results at the low number of repetitions that are widely adopted in
the serverless computing literature.
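The variance measures discussed in the abstract (a per-function max-vs-min spread and reliability at low repetition counts) can be sketched as follows. This is a minimal illustration, not the paper's actual measurement harness; the latency values are hypothetical.

```python
import statistics

def variance_report(latencies_ms):
    """Summarize run-to-run performance variance for repeated
    invocations of one serverless function (latencies in ms)."""
    lo, hi = min(latencies_ms), max(latencies_ms)
    mean = statistics.mean(latencies_ms)
    return {
        "mean_ms": mean,
        # max-vs-min spread, analogous to the paper's
        # "performance can differ by up to X%" measure
        "max_diff_pct": (hi - lo) / lo * 100,
        # coefficient of variation: stdev relative to the mean
        "cv_pct": statistics.stdev(latencies_ms) / mean * 100,
    }

# hypothetical latencies from 10 identical runs; the 340 ms outlier
# mimics a cold start inflating the observed spread
runs = [102, 98, 110, 95, 340, 101, 99, 105, 97, 103]
report = variance_report(runs)
print(round(report["max_diff_pct"], 1))
```

With only a handful of repetitions, a single outlier like the cold start above dominates the spread, which is the paper's point about unreliable results at low repetition counts.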
Related papers
- Shabari: Delayed Decision-Making for Faster and Efficient Serverless
Functions [0.30693357740321775]
We introduce Shabari, a resource management framework for serverless systems.
Shabari makes decisions as late as possible to right-size each invocation to meet functions' performance objectives.
For a range of serverless functions and inputs, Shabari reduces SLO violations by 11-73%.
arXiv Detail & Related papers (2024-01-16T22:20:36Z)
- SuperFlow: Performance Testing for Serverless Computing [14.872563076658563]
We propose SuperFlow, the first performance testing approach tailored specifically for serverless computing.
SuperFlow provides testing results with 97.22% accuracy, 39.91 percentage points higher than the best currently available technique.
arXiv Detail & Related papers (2023-06-02T15:29:28Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies from historical data, has been extensively applied in real-life applications.
We take a step forward by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Our proposed algorithm is more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- DAPPER: Label-Free Performance Estimation after Personalization for Heterogeneous Mobile Sensing [95.18236298557721]
We present DAPPER (Domain AdaPtation Performance EstimatoR) that estimates the adaptation performance in a target domain with unlabeled target data.
Our evaluation with four real-world sensing datasets compared against six baselines shows that DAPPER outperforms the state-of-the-art baseline by 39.8% in estimation accuracy.
arXiv Detail & Related papers (2021-11-22T08:49:33Z)
- Harvesting Idle Resources in Serverless Computing via Reinforcement Learning [7.346628578439277]
FRM maximizes resource efficiency by dynamically harvesting idle resources from over-supplied functions and reallocating them to under-supplied ones.
FRM monitors each function's resource utilization in real-time, detects over-provisioning and under-provisioning, and applies deep reinforcement learning to harvest idle resources safely.
We have implemented and deployed an FRM prototype in a 13-node Apache OpenWhisk cluster.
arXiv Detail & Related papers (2021-08-28T23:02:56Z)
- Neural Network Approximation of Refinable Functions [8.323468006516018]
We show that refinable functions can be approximated by the outputs of deep ReLU networks of fixed width, with accuracy that improves exponentially in the depth.
Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
arXiv Detail & Related papers (2021-07-28T06:45:36Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
- Dynamic Parameter Allocation in Parameter Servers [74.250687861348]
We propose to integrate dynamic parameter allocation into parameter servers and describe an efficient implementation of such a parameter server, called Lapse.
We found that Lapse provides near-linear scaling and can be orders of magnitude faster than existing parameter servers.
arXiv Detail & Related papers (2020-02-03T11:37:54Z)
- Approximating Activation Functions [3.8834605840347667]
We use function approximation techniques to develop replacements for hyperbolic tangent and sigmoid functions.
We find safe approximations that yield a 10% to 37% improvement in training times on the CPU.
Our functions also match or considerably outperform the ad-hoc approximations used in Theano and in the implementation of Word2Vec.
arXiv Detail & Related papers (2020-01-17T15:25:44Z)
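As a rough illustration of the kind of replacement the last paper describes, one common approach is a rational approximation of tanh. This is a hypothetical sketch of the general technique, not the approximation actually used in that paper.

```python
import math

def fast_tanh(x):
    """Rational (Pade-style) approximation of tanh.

    Illustrative only: clamps to +/-1 outside [-3, 3], where the
    rational form x*(27 + x^2) / (27 + 9*x^2) is reasonably accurate.
    """
    if x < -3.0:
        return -1.0
    if x > 3.0:
        return 1.0
    x2 = x * x
    return x * (27.0 + x2) / (27.0 + 9.0 * x2)

# compare against the exact function at a few points
for v in (0.0, 0.5, 1.0, 3.0):
    print(v, round(fast_tanh(v) - math.tanh(v), 4))
```

Avoiding the exponentials in the exact definition is what buys the training-time improvement such approximations target, at the cost of a small, bounded error.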
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content above (including all information) and is not responsible for any consequences.