A Systematic Mapping Study and Practitioner Insights on the Use of
Software Engineering Practices to Develop MVPs
- URL: http://arxiv.org/abs/2305.08299v1
- Date: Mon, 15 May 2023 02:00:47 GMT
- Title: A Systematic Mapping Study and Practitioner Insights on the Use of
Software Engineering Practices to Develop MVPs
- Authors: Silvio Alonso, Marcos Kalinowski, Bruna Ferreira, Simone D. J.
Barbosa, Helio Lopes
- Abstract summary: We identified 33 papers published between 2013 and 2020 and observed some trends related to MVP ideation and evaluation practices.
There is an emphasis on end-user validations based on practices such as usability tests, A/B testing, and usage data analysis.
There is still limited research related to MVP technical feasibility assessment and effort estimation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: [Background] The MVP (Minimum Viable Product) concept has influenced
the way in which development teams apply Software Engineering (SE) practices.
However, the influence of MVPs on SE practices is still poorly understood. [Objective] Our goal
is to characterize the publication landscape on practices that have been used
in the context of software MVPs and to gather practitioner insights on the
identified practices. [Method] We conducted a systematic mapping study and
discussed its results in two focus group sessions with twelve industry
practitioners who use MVPs extensively in their projects, capturing their
perceptions of the mapping study's findings. [Results] We identified 33
papers published between 2013 and 2020 and observed some trends related to MVP
ideation and evaluation practices. For instance, regarding ideation, we found
six distinct approaches, with end-user involvement practices that are mostly informal.
Regarding evaluation, there is an emphasis on end-user validations based on
practices such as usability tests, A/B testing, and usage data analysis.
However, there is still limited research related to MVP technical feasibility
assessment and effort estimation. Practitioners in the focus group sessions
were aware of most of the identified practices, reinforcing our confidence in
the results on ideation and evaluation. They also reported how they handle
technical feasibility assessment and effort estimation
in practice. [Conclusion] Our analysis suggests that there are opportunities
for solution proposals and evaluation studies to address literature gaps
concerning technical feasibility assessment and effort estimation. Overall,
more effort needs to be invested in empirically evaluating existing
MVP-related practices.
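
The abstract names A/B testing among the end-user validation practices, but as a
mapping study the paper does not prescribe an implementation. Below is a minimal
illustrative sketch, not taken from the study, of how an MVP team might compare
conversion rates between two variants with a standard two-proportion z-test; all
function names, variable names, and figures are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test: H0 is that variants A and B convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)      # conversion rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p_value

# Hypothetical MVP experiment: baseline landing page (A) vs. new variant (B).
z, p = ab_test(conv_a=48, n_a=1000, conv_b=74, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # treat as significant at alpha = 0.05 if p < 0.05
```

Usage data analysis, also mentioned in the abstract, typically feeds the same
kind of comparison with metrics logged from the deployed MVP.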