How Do Microservice API Patterns Impact Understandability? A Controlled
Experiment
- URL: http://arxiv.org/abs/2402.13696v1
- Date: Wed, 21 Feb 2024 10:54:47 GMT
- Title: How Do Microservice API Patterns Impact Understandability? A Controlled
Experiment
- Authors: Justus Bogner, Pawel Wójcik, Olaf Zimmermann
- Abstract summary: We conducted a controlled experiment with 6 microservice patterns to evaluate their impact on understandability with 65 diverse participants.
For five of the six patterns, we identified a significant positive impact on understandability, i.e., participants answered faster and/or more correctly for the pattern version "P".
The correlations between performance and demographics suggest that certain patterns may introduce additional complexity.
- Score: 4.26177272224368
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Microservices expose their functionality via remote Application Programming
Interfaces (APIs), e.g., based on HTTP or asynchronous messaging technology. To
solve recurring problems in this design space, Microservice API Patterns (MAPs)
have emerged to capture the collective experience of the API design community.
At present, there is a lack of empirical evidence for the effectiveness of
these patterns, e.g., how they impact understandability and API usability. We
therefore conducted a controlled experiment with 6 microservice patterns to
evaluate their impact on understandability with 65 diverse participants.
Additionally, we wanted to study how demographics like years of professional
experience or experience with MAPs influence the effects of the patterns. Per
pattern, we constructed two API examples, each in a pattern version "P" and a
functionally equivalent non-pattern version "N" (24 in total). Based on a
crossover design, participants had to answer comprehension questions, while we
measured the time. For five of the six patterns, we identified a significant
positive impact on understandability, i.e., participants answered faster
and/or more correctly for "P". However, effect sizes were mostly small, with one
pattern showing a medium effect. The correlations between performance and
demographics seem to suggest that certain patterns may introduce additional
complexity; people experienced with MAPs will profit more from their effects.
This has important implications for training and education around MAPs and
other patterns.
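The abstract does not name the six patterns studied, so as a purely hypothetical illustration of the "P" vs. "N" design, the sketch below contrasts a documented MAP, Pagination, with a functionally equivalent non-pattern endpoint. The endpoint shapes, field names, and data are invented for illustration, not taken from the paper's experiment materials:

```python
def orders_non_pattern(all_orders):
    """'N' version: returns the full collection in a single response."""
    return {"orders": all_orders}


def orders_pattern(all_orders, offset=0, limit=10):
    """'P' version (Pagination): a slice of the collection plus
    navigation metadata, so clients can fetch the rest on demand."""
    page = all_orders[offset:offset + limit]
    has_more = offset + limit < len(all_orders)
    return {
        "orders": page,
        "offset": offset,
        "limit": limit,
        "total": len(all_orders),
        # None signals the last page; otherwise the offset of the next one
        "next_offset": offset + limit if has_more else None,
    }
```

In the experiment's terms, a comprehension question like "how many orders exist in total?" is answerable from a single "P" response via the `total` field, whereas the "N" version forces the reader to scan or count the whole payload.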
Related papers
- Concept Influence: Leveraging Interpretability to Improve Performance and Efficiency in Training Data Attribution [11.387100835483672]
Training Data Attribution (TDA) methods identify which training data drive specific behaviors, particularly unintended ones. Existing approaches like influence functions are computationally expensive and attribute based on single test examples. We leverage interpretable structures within the model during attribution. We show that incorporating interpretable structure into traditional TDA pipelines enables more scalable and explainable attribution and better control of model behavior through data.
arXiv Detail & Related papers (2026-02-16T16:02:09Z) - Structured Prompts, Better Outcomes? Exploring the Effects of a Structured Interface with ChatGPT in a Graduate Robotics Course [0.0]
This study evaluates the impact of a structured GPT platform designed to promote 'good' prompting behavior. We analyzed student perception (pre-post surveys), prompting behavior (logs), performance (task scores), and learning.
arXiv Detail & Related papers (2025-07-10T13:50:07Z) - RouterKT: Mixture-of-Experts for Knowledge Tracing [1.983472984641239]
Knowledge Tracing (KT) is a fundamental task in Intelligent Tutoring Systems (ITS)
We propose RouterKT, a novel Mixture-of-Experts architecture designed to capture heterogeneous learning patterns.
We show that RouterKT exhibits significant flexibility and improves the performance of various KT backbone models.
arXiv Detail & Related papers (2025-04-11T21:42:08Z) - Do Influence Functions Work on Large Language Models? [10.463762448166714]
Influence functions aim to quantify the impact of individual training data points on a model's predictions.
We evaluate influence functions across multiple tasks and find that they consistently perform poorly in most settings.
arXiv Detail & Related papers (2024-09-30T06:50:18Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - Do RESTful API Design Rules Have an Impact on the Understandability of
Web APIs? A Web-Based Experiment with API Descriptions [4.26177272224368]
We conducted a controlled Web-based hybrid experiment with 105 participants.
We studied 12 design rules using API snippets in two versions: one that adhered to a "rule" and one that was a "violation" of this rule.
For 11 of the 12 rules, we found that "violation" performed significantly worse than "rule" for the comprehension tasks.
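The summary does not list the 12 rules, so as a hypothetical sketch in the spirit of that "rule" vs. "violation" design, the snippet below contrasts a commonly cited REST convention (resource nouns in the path, no CRUD verbs) with a violation of it; this particular rule and the crude checker are assumptions for illustration only:

```python
# Two request lines for the same operation: one following the noun
# convention, one embedding a verb in the path.
RULE_VERSION = "GET /customers/42/orders"
VIOLATION_VERSION = "GET /getOrdersForCustomer?id=42"

CRUD_VERBS = ("get", "create", "update", "delete")


def has_verb_segment(request_line: str) -> bool:
    """Rough heuristic: does any path segment start with a CRUD verb?"""
    path = request_line.split(" ", 1)[1].split("?", 1)[0]
    segments = [s for s in path.strip("/").split("/") if s]
    return any(s.lower().startswith(CRUD_VERBS) for s in segments)
```

In such an experiment, participants would see one of the two versions and answer comprehension questions about it, with response time and correctness as the measured outcomes.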
arXiv Detail & Related papers (2023-05-12T09:48:23Z) - If Influence Functions are the Answer, Then What is the Question? [7.873458431535409]
Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters.
While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks.
arXiv Detail & Related papers (2022-09-12T16:17:43Z) - Supervised Contrastive Learning for Affect Modelling [2.570570340104555]
We introduce three different supervised contrastive learning approaches for training representations that consider affect information.
Results demonstrate the representation capacity of contrastive learning and its efficiency in boosting the accuracy of affect models.
arXiv Detail & Related papers (2022-08-25T17:40:19Z) - Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for
Multi-Behavior Recommendation [52.89816309759537]
Multi-types of behaviors (e.g., clicking, adding to cart, purchasing, etc.) widely exist in most real-world recommendation scenarios.
The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input.
We propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning framework to learn shared and behavior-specific interests for different behaviors.
arXiv Detail & Related papers (2022-08-03T05:28:14Z) - Discovering Representative Attribute-stars via Minimum Description
Length [6.1237884900051975]
We propose a parameter-free algorithm named CSPM which identifies star-shaped patterns that indicate strong correlations among attributes.
CSPM successfully boosts the accuracy of graph attribute completion models by up to 30.68% and uncovers important patterns in telecommunication alarm data.
arXiv Detail & Related papers (2022-04-27T05:23:07Z) - AES Systems Are Both Overstable And Oversensitive: Explaining Why And
Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite being trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity and overstability causing samples with high accuracies.
arXiv Detail & Related papers (2021-09-24T03:49:38Z) - FastIF: Scalable Influence Functions for Efficient Model Interpretation
and Debugging [112.19994766375231]
Influence functions approximate the 'influences' of training data-points for test predictions.
We present FastIF, a set of simple modifications to influence functions that significantly improves their run-time.
Our experiments demonstrate the potential of influence functions in model interpretation and correcting model errors.
arXiv Detail & Related papers (2020-12-31T18:02:34Z) - Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important for obtaining high-quality influence estimates.
arXiv Detail & Related papers (2020-06-25T18:25:59Z) - Explaining Black Box Predictions and Unveiling Data Artifacts through
Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.