Availability Analysis of Redundant and Replicated Cloud Services with
Bayesian Networks
- URL: http://arxiv.org/abs/2306.13334v1
- Date: Fri, 23 Jun 2023 07:29:46 GMT
- Authors: Otto Bibartiu (1), Frank Dürr (1), Kurt Rothermel (1), Beate
Ottenwälder (2), Andreas Grau (2) ((1) University of Stuttgart, (2) Robert
Bosch GmbH)
- Abstract summary: This paper introduces a high-level modeling formalism to build such a Bayesian network automatically.
Performance evaluations demonstrate the feasibility of the presented Bayesian network approach to assess the availability of large-scale redundant and replicated services.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the growing complexity of modern data centers, failures are
no longer uncommon. Fault tolerance mechanisms therefore play a vital role in
fulfilling availability requirements. Multiple availability models have been
proposed to assess compute systems, among which Bayesian network models have
gained popularity in industry and research due to their powerful modeling
formalism. In particular, this work focuses on assessing the availability of
redundant and replicated cloud computing services with Bayesian networks. So
far, research on availability has focused on modeling either infrastructure or
communication failures in Bayesian networks, but not both simultaneously. This
work addresses practical modeling challenges of assessing the availability of
large-scale redundant and replicated services with Bayesian networks,
including cascading and common-cause failures from the surrounding
infrastructure and communication network. To ease the modeling task, this
paper introduces a high-level modeling formalism to build such a Bayesian
network automatically. Performance evaluations demonstrate the feasibility of
the presented Bayesian network approach to assess the availability of
large-scale redundant and replicated services. The model is not only
applicable in the domain of cloud computing; it also applies to general cases
of local and geo-distributed systems.
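To illustrate the kind of analysis the abstract describes, the sketch below computes the availability of a replicated service with a tiny hand-built Bayesian network: a shared power supply acts as a common-cause failure for all replicas (failures cascade from it), and the service needs both a working communication network and a quorum of replicas. This is a minimal illustration of the general idea, not the paper's formalism; all node names and probability values are made-up assumptions.

```python
from itertools import product

# Hypothetical availabilities (illustrative values, not from the paper).
P_POWER = 0.999      # shared power supply: a common-cause component
P_NET = 0.998        # communication network availability
P_REPLICA = 0.99     # availability of one replica, given power is up

def service_availability(n_replicas=3, quorum=2):
    """Exact inference by enumerating the joint distribution of a small
    Bayesian network: power -> each replica, and (network, replicas) -> service."""
    total = 0.0
    for power, net, *replicas in product([0, 1], repeat=2 + n_replicas):
        p = P_POWER if power else 1 - P_POWER
        p *= P_NET if net else 1 - P_NET
        for r in replicas:
            # Replicas fail deterministically without power (cascading failure).
            p_up = P_REPLICA if power else 0.0
            p *= p_up if r else 1 - p_up
        # The service is up iff the network is up and a quorum of replicas is up.
        if net and sum(replicas) >= quorum:
            total += p
    return total

print(f"P(service up) = {service_availability():.6f}")
```

Because the power node is shared, replica failures are correlated, which is exactly why a naive independent-failure calculation would overestimate availability; brute-force enumeration is only feasible for small networks, which motivates the scalable inference the paper targets.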
Related papers
- A method for quantifying the generalization capabilities of generative models for solving Ising models [5.699467840225041]
We use a Hamming distance regularizer to quantify the generalization capabilities of various network architectures combined with VAN.
We conduct numerical experiments on several network architectures combined with VAN, including feed-forward neural networks, recurrent neural networks, and graph neural networks.
Our method is of great significance for assisting in the Neural Architecture Search field of searching for the optimal network architectures when solving large-scale Ising models.
arXiv Detail & Related papers (2024-05-06T12:58:48Z) - Data-driven Energy Efficiency Modelling in Large-scale Networks: An Expert Knowledge and ML-based Approach [8.326834499339107]
This paper introduces the simulated reality of communication networks (SRCON) framework.
It harnesses live network data and employs a blend of machine learning (ML)- and expert-based models.
Results show significant gains over a state-of-the-art method used by an operator for network energy efficiency modeling.
arXiv Detail & Related papers (2023-12-31T10:03:08Z) - Efficient and Accurate Hyperspectral Image Demosaicing with Neural Network Architectures [3.386560551295746]
This study investigates the effectiveness of neural network architectures in hyperspectral image demosaicing.
We introduce a range of network models and modifications, and compare them with classical methods and existing reference network approaches.
Results indicate that our networks outperform or match reference models on both datasets, demonstrating exceptional performance.
arXiv Detail & Related papers (2023-12-21T08:02:49Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model
Perspective [67.25782152459851]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - Residual Multi-Fidelity Neural Network Computing [0.0]
We present a residual multi-fidelity computational framework that formulates the correlation between models as a residual function.
We show that dramatic savings in computational cost may be achieved when the output predictions are desired to be accurate within small tolerances.
arXiv Detail & Related papers (2023-10-05T14:43:16Z) - Generalization and Estimation Error Bounds for Model-based Neural
Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z) - Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks could meet the constraints.
A key ingredient of building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.