A Multi-faceted Analysis of the Performance Variability of Virtual
Machines
- URL: http://arxiv.org/abs/2309.11959v1
- Date: Thu, 21 Sep 2023 10:25:14 GMT
- Title: A Multi-faceted Analysis of the Performance Variability of Virtual
Machines
- Authors: Luciano Baresi, Tommaso Dolci, Giovanni Quattrocchi, Nicholas Rasi
- Abstract summary: Cloud platforms are known to be affected by performance variability, but a better understanding is still required.
This paper moves in that direction and presents an in-depth, multi-faceted study on the performance variability of cloud platforms.
To the best of our knowledge, this is the widest analysis ever conducted on the topic.
- Score: 0.3481985817302898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cloud computing and virtualization solutions allow one to rent the virtual
machines (VMs) needed to run applications on a pay-per-use basis, but rented
VMs do not offer any guarantee on their performance. Cloud platforms are known
to be affected by performance variability, but a better understanding is still
required. This paper moves in that direction and presents an in-depth,
multi-faceted study on the performance variability of VMs. Unlike previous
studies, our assessment covers a wide range of factors: 16 VM types from 4
well-known cloud providers, 10 benchmarks, and 28 different metrics. We present
four new contributions. First, we introduce a new benchmark suite (VMBS) that
lets researchers and practitioners systematically collect a diverse set of
performance data. Second, we present a new indicator, called Variability
Indicator, that allows for measuring variability in the performance of VMs.
Third, we illustrate an analysis of the collected data across four different
dimensions: resources, isolation, time, and cost. Fourth, we present multiple
predictive models based on Machine Learning that aim to forecast future
performance and detect time patterns. Our experiments provide important
insights into the resource variability of VMs, highlighting differences and
similarities between various cloud providers. To the best of our knowledge,
this is the widest analysis ever conducted on the topic.
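The abstract does not spell out how the Variability Indicator is computed; as a rough, hedged illustration only, the sketch below treats it as the relative spread (coefficient of variation) of repeated benchmark measurements for a given VM type. All names and numbers are hypothetical, not the paper's actual data or definition.
```python
# Illustrative sketch only: approximates a "variability indicator" as the
# coefficient of variation of repeated benchmark runs on one VM type.
from statistics import mean, stdev

def variability_indicator(samples: list[float]) -> float:
    """Relative spread of repeated measurements (stdev / mean)."""
    if len(samples) < 2:
        raise ValueError("need at least two measurements")
    mu = mean(samples)
    return stdev(samples) / mu if mu else float("inf")

# Hypothetical runtimes (seconds) of one CPU benchmark on two VM types.
runs = {
    "provider_a.small": [41.2, 39.8, 44.5, 57.1, 40.3],
    "provider_b.small": [38.9, 39.1, 39.0, 39.4, 38.8],
}
for vm_type, samples in runs.items():
    print(f"{vm_type}: VI ~ {variability_indicator(samples):.3f}")
```
Under this stand-in definition, a larger value simply means the same workload shows a wider relative spread across runs on that VM type.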
Related papers
- Benchmarking Unlearning for Vision Transformers [4.9193859756091145]
This work is the first to benchmark machine unlearning (MU) algorithm performance on different Vision Transformers (VTs) and at different capacities. It characterizes how VTs relate to CNNs with respect to training data, and assesses the impact of different proxies on performance. Overall, this work offers a benchmarking basis, enabling reproducible, fair, and comprehensive comparisons of existing (and future) MU algorithms on VTs.
arXiv Detail & Related papers (2026-02-23T18:33:16Z) - InternSpatial: A Comprehensive Dataset for Spatial Reasoning in Vision-Language Models [59.7084864920244]
InternSpatial is the largest open-source dataset for spatial reasoning in vision-language models (VLMs). InternSpatial comprises 12 million QA pairs spanning both single-view and multi-view settings. InternSpatial-Bench is a corresponding evaluation benchmark designed to assess spatial understanding under diverse instruction formats.
arXiv Detail & Related papers (2025-06-23T08:17:22Z) - From Images to Signals: Are Large Vision Models Useful for Time Series Analysis? [62.58235852194057]
Transformer-based models have gained increasing attention in time series research. As the field moves toward multi-modality, Large Vision Models (LVMs) are emerging as a promising direction.
arXiv Detail & Related papers (2025-05-29T22:05:28Z) - Towards VM Rescheduling Optimization Through Deep Reinforcement Learning [9.4293010682986]
We develop a reinforcement learning system for VM rescheduling, VM2RL, which incorporates a set of customized techniques. Our results show that VM2RL can achieve performance comparable to the optimal solution but with a running time of seconds.
arXiv Detail & Related papers (2025-05-23T00:30:53Z) - Symmetry-Preserving Architecture for Multi-NUMA Environments (SPANE): A Deep Reinforcement Learning Approach for Dynamic VM Scheduling [28.72083501050024]
We introduce the Dynamic VM Allocation problem in Multi-NUMA PM (DVAMP).
We propose SPANE, a novel deep reinforcement learning approach that exploits the problem's inherent symmetries.
Experiments conducted on the Huawei-East-1 dataset demonstrate that SPANE outperforms existing baselines, reducing average VM wait time by 45%.
arXiv Detail & Related papers (2025-04-21T08:09:40Z) - Space Rotation with Basis Transformation for Training-free Test-Time Adaptation [25.408849667998993]
We propose a training-free feature space rotation with basis transformation for test-time adaptation.
By leveraging the inherent distinctions among classes, we reconstruct the original feature space and map it to a new representation.
Our method outperforms state-of-the-art techniques in terms of both performance and efficiency.
arXiv Detail & Related papers (2025-02-27T10:15:34Z) - Are They the Same? Exploring Visual Correspondence Shortcomings of Multimodal LLMs [42.57007182613631]
We construct a benchmark to fairly evaluate over 30 different MLLMs.
We present CoLVA, a novel contrastive MLLM with object-level contrastive learning and instruction augmentation strategy.
Results show that CoLVA achieves 51.06% overall accuracy (OA) on the MMVM benchmark, surpassing GPT-4o and baseline by 8.41% and 23.58% OA, respectively.
arXiv Detail & Related papers (2025-01-08T18:30:53Z) - Scaling Inference-Time Search with Vision Value Model for Improved Visual Comprehension [95.63899307791665]
In this paper, we present the Vision Value Model (VisVM), which can guide VLM inference-time search to generate responses with better visual comprehension.
arXiv Detail & Related papers (2024-12-04T20:35:07Z) - Benchmark Self-Evolving: A Multi-Agent Framework for Dynamic LLM
Evaluation [51.99752147380505]
This paper presents a benchmark self-evolving framework to dynamically evaluate Large Language Models (LLMs).
We utilize a multi-agent system to manipulate the context or question of original instances, reframing new evolving instances with high confidence.
Our framework widens performance discrepancies both between different models and within the same model across various tasks.
arXiv Detail & Related papers (2024-02-18T03:40:06Z) - VMamba: Visual State Space Model [92.83984290020891]
VMamba is a vision backbone that operates with linear time complexity.
At the core of VMamba lies a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module.
arXiv Detail & Related papers (2024-01-18T17:55:39Z) - Online Continual Learning for Robust Indoor Object Recognition [24.316047317028143]
Vision systems mounted on home robots need to interact with unseen classes in changing environments.
We propose RobOCLe, which constructs an enriched feature space by computing high-order statistical moments.
We show that different moments allow RobOCLe to capture different properties of deformations, providing higher robustness with no decrease of inference speed.
arXiv Detail & Related papers (2023-07-19T08:32:59Z) - Unified Open-Vocabulary Dense Visual Prediction [51.03014432235629]
Open-vocabulary (OV) dense visual prediction has attracted increasing research attention.
Most existing approaches are task-specific and tackle each task individually.
We propose a Unified Open-Vocabulary Network (UOVN) to jointly address four common dense prediction tasks.
arXiv Detail & Related papers (2023-07-17T04:39:18Z) - Measuring Progress in Fine-grained Vision-and-Language Understanding [23.377634283746698]
We investigate four competitive vision-and-language models on fine-grained benchmarks.
We find that X-VLM consistently outperforms other baselines.
We highlight the importance of both novel losses and rich data sources for learning fine-grained skills.
arXiv Detail & Related papers (2023-05-12T15:34:20Z) - An Empirical Study of End-to-End Video-Language Transformers with Masked
Visual Modeling [152.75131627307567]
Masked visual modeling (MVM) has been recently proven effective for visual pre-training.
We systematically examine the potential of MVM in the context of VidL learning.
We show VIOLETv2 pre-trained with MVM achieves notable improvements on 13 VidL benchmarks.
arXiv Detail & Related papers (2022-09-04T06:30:32Z) - Partitioned Variational Inference: A Framework for Probabilistic
Federated Learning [45.9225420256808]
We introduce partitioned variational inference (PVI), a framework for performing VI in the federated setting.
We develop new supporting theory for PVI, demonstrating a number of properties that make it an attractive choice for practitioners.
arXiv Detail & Related papers (2022-02-24T18:15:30Z) - VMAgent: Scheduling Simulator for Reinforcement Learning [44.026076801936874]
A novel simulator called VMAgent is introduced to help RL researchers better explore new methods.
VMAgent is inspired by practical virtual machine (VM) scheduling tasks.
From the VM scheduling perspective, VMAgent also helps to explore better learning-based scheduling solutions.
arXiv Detail & Related papers (2021-12-09T09:18:38Z) - Comprehensive Review On Twin Support Vector Machines [0.0]
Twin support vector machine (TSVM) and twin support vector regression (TSVR) are newly emerging efficient machine learning techniques.
TSVM is based upon the idea to identify two nonparallel hyperplanes which classify the data points to their respective classes.
TSVR is formulated along the lines of TSVM and requires solving two SVM-type problems.
arXiv Detail & Related papers (2021-05-01T19:48:45Z) - DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual Similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance.
arXiv Detail & Related papers (2020-04-28T12:26:50Z) - Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.