Research on Efficiency Analysis of Microservices
- URL: http://arxiv.org/abs/2303.15490v1
- Date: Tue, 21 Mar 2023 02:00:28 GMT
- Title: Research on Efficiency Analysis of Microservices
- Authors: Abel C. H. Chen
- Abstract summary: This study proposes an efficiency analysis framework based on queuing models to analyze the efficiency difference of breaking down traditional large services into n microservices.
It found that breaking a large service down into multiple microservices can effectively improve system efficiency, and proved that when the computation time of the large service is evenly distributed among the microservices, the best improvement effect is achieved.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the maturity of web services, containers, and cloud computing
technologies, large services in traditional systems (e.g. the computation
services of machine learning and artificial intelligence) are gradually being
broken down into many microservices to increase service reusability and
flexibility. Therefore, this study proposes an efficiency analysis framework
based on queuing models to analyze the efficiency difference of breaking down
traditional large services into n microservices. For generalization, this study
considers different service time distributions (e.g. exponential distribution
of service time and fixed service time) and explores the system efficiency in
the worst-case and best-case scenarios through queuing models (i.e. M/M/1
queuing model and M/D/1 queuing model). In each experiment, it was shown that
the total time required for the original large service was higher than that
required for breaking it down into multiple microservices, so breaking it down
into multiple microservices can improve system efficiency. It can also be
observed that in the best-case scenario, the improvement effect becomes more
significant with an increase in arrival rate. However, in the worst-case
scenario, only slight improvement was achieved. This study found that breaking
down into multiple microservices can effectively improve system efficiency and
proved that when the computation time of the large service is evenly
distributed among multiple microservices, the best improvement effect can be
achieved. Therefore, this study's findings can serve as a reference guide for
future development of microservice architecture.
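The abstract's queuing comparison can be sketched numerically. Under the M/M/1 model, a service with arrival rate λ and service rate μ has mean sojourn time 1/(μ − λ); splitting the work evenly into n microservices in series, each with rate nμ, gives total time n/(nμ − λ), which is smaller whenever n > 1 and λ > 0. A minimal sketch (the concrete rates and the tandem-of-independent-M/M/1 simplification are illustrative assumptions, not the paper's exact setup):

```python
def mm1_sojourn(lam, mu):
    """Mean time in system for an M/M/1 queue; requires lam < mu."""
    assert lam < mu, "queue must be stable (arrival rate below service rate)"
    return 1.0 / (mu - lam)

def tandem_sojourn(lam, rates):
    """Mean total time through microservices modeled as independent
    M/M/1 stages in series (a standard tandem-queue simplification)."""
    return sum(mm1_sojourn(lam, r) for r in rates)

lam, mu, n = 1.0, 2.0, 4  # example arrival/service rates (assumed)

large = mm1_sojourn(lam, mu)                # one monolithic service: 1/(mu-lam)
even = tandem_sojourn(lam, [n * mu] * n)    # work split evenly: each stage at rate n*mu
# Uneven split with the same total service time (1/5 + 3 * 1/10 = 1/2 = 1/mu):
uneven = tandem_sojourn(lam, [5.0, 10.0, 10.0, 10.0])

# Splitting helps, and the even split helps most:
print(large, even, uneven)  # prints 1.0, ~0.571, ~0.583
```

Varying `lam` toward `mu` in this sketch also reproduces the abstract's observation that the improvement grows with the arrival rate.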
Related papers
- An Infrastructure Cost Optimised Algorithm for Partitioning of Microservices [20.638612359627952]
As migrating applications into the cloud is universally adopted by the software industry, microservices have proven to be the most suitable and widely accepted architecture pattern for applications deployed on the distributed cloud.
Their efficacy is enabled by both technical benefits like reliability, fault isolation, scalability and productivity benefits like ease of asset maintenance and clear ownership boundaries.
In some cases, migrating an existing application into the microservices architecture becomes overwhelmingly complex and expensive.
arXiv Detail & Related papers (2024-08-13T02:08:59Z) - Digital Twin-assisted Reinforcement Learning for Resource-aware Microservice Offloading in Edge Computing [12.972771759204264]
We introduce a novel microservice offloading algorithm, DTDRLMO, which leverages deep reinforcement learning (DRL) and digital twin technology.
Specifically, we employ digital twin techniques to predict and adapt to changing edge node loads and network conditions of collaborative edge computing in real-time.
This approach enables the generation of an efficient offloading plan, selecting the most suitable edge node for each microservice.
arXiv Detail & Related papers (2024-03-13T16:44:36Z) - Migration to Microservices: A Comparative Study of Decomposition Strategies and Analysis Metrics [0.5076419064097734]
We present a novel clustering method to identify potential microservices in a given monolithic application.
Our approach employs a density-based clustering algorithm considering static analysis, structural, and semantic relationships between classes.
arXiv Detail & Related papers (2024-02-13T14:15:00Z) - TranDRL: A Transformer-Driven Deep Reinforcement Learning Enabled Prescriptive Maintenance Framework [58.474610046294856]
Industrial systems demand reliable predictive maintenance strategies to enhance operational efficiency and reduce downtime.
This paper introduces an integrated framework that leverages the capabilities of the Transformer model-based neural networks and deep reinforcement learning (DRL) algorithms to optimize system maintenance actions.
arXiv Detail & Related papers (2023-09-29T02:27:54Z) - Multiagent Reinforcement Learning with an Attention Mechanism for Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves the system EE compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z) - Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z) - Does Microservices Adoption Impact the Development Velocity? A Cohort Study. A Registered Report [4.866714740906538]
The goal of this study plan is to investigate the effect microservices have on development velocity.
The study compares GitHub projects adopting microservices from the beginning and similar projects using monolithic architectures.
arXiv Detail & Related papers (2023-06-03T07:27:01Z) - Towards Compute-Optimal Transfer Learning [82.88829463290041]
We argue that zero-shot structured pruning of pretrained models allows them to increase compute efficiency with minimal reduction in performance.
Our results show that pruning convolutional filters of pretrained models can lead to more than 20% performance improvement in low computational regimes.
arXiv Detail & Related papers (2023-04-25T21:49:09Z) - MicroRes: Versatile Resilience Profiling in Microservices via Degradation Dissemination Indexing [29.456286275972474]
Microservice resilience, the ability to recover from failures and continue providing reliable and responsive services, is crucial for cloud vendors.
The current practice relies on rules manually configured for a specific microservice system, resulting in labor-intensity and flexibility issues.
Our insight is that resilient deployment can effectively prevent the dissemination of degradation from system performance to user-aware metrics, and the latter affects service quality.
arXiv Detail & Related papers (2022-12-25T03:56:42Z) - Large Scale Mask Optimization Via Convolutional Fourier Neural Operator and Litho-Guided Self Training [54.16367467777526]
We present a Convolutional Fourier Neural Operator (CFCF) that can efficiently learn mask optimization tasks.
For the first time, our machine learning-based framework outperforms state-of-the-art numerical mask optimizers.
arXiv Detail & Related papers (2022-07-08T16:39:31Z) - FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning
Using Shared Data on the Server [64.94942635929284]
Federated Learning (FL) suffers from two critical challenges, i.e., limited computational resources and low training efficiency.
We propose a novel FL framework, FedDUAP, to exploit the insensitive data on the server and the decentralized data in edge devices.
By integrating the two original techniques, our proposed FL model, FedDUAP, significantly outperforms baseline approaches in terms of accuracy (up to 4.8% higher), efficiency (up to 2.8 times faster), and computational cost (up to 61.9% smaller).
arXiv Detail & Related papers (2022-04-25T10:00:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.