A Comparative Study on the Impact of Test-Driven Development (TDD) and Behavior-Driven Development (BDD) on Enterprise Software Delivery Effectiveness
- URL: http://arxiv.org/abs/2411.04141v1
- Date: Tue, 05 Nov 2024 06:47:11 GMT
- Title: A Comparative Study on the Impact of Test-Driven Development (TDD) and Behavior-Driven Development (BDD) on Enterprise Software Delivery Effectiveness
- Authors: Jun Cui
- Abstract summary: This paper compares the impact of Test-Driven Development (TDD) and Behavior-Driven Development (BDD) on software delivery effectiveness within enterprise environments.
The findings reveal distinct effects of each model on delivery speed, software quality, and team collaboration.
- Abstract: This paper compares the impact of Test-Driven Development (TDD) and Behavior-Driven Development (BDD) on software delivery effectiveness within enterprise environments. Using a qualitative research design, data were collected through in-depth interviews with developers and project managers from enterprises adopting TDD or BDD. Moreover, the findings reveal distinct effects of each model on delivery speed, software quality, and team collaboration. Specifically, TDD emphasizes early testing and iterative development, leading to enhanced code quality and fewer defects, while BDD improves cross-functional communication by focusing on behavior specifications that involve stakeholders directly. However, TDD may create a higher initial time investment, and BDD might encounter challenges in requirement clarity. These differences highlight gaps in understanding how each model aligns with varying project types and stakeholder needs, which can guide enterprises in selecting the most suitable model for their unique requirements. The study contributes to the literature by providing insights into the practical application and challenges of TDD and BDD, suggesting future research on their long-term impacts in diverse settings.
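The contrast the abstract draws between the two models can be sketched in code. The example below is a hypothetical illustration, not taken from the study: the `ShoppingCart` class and its tests are invented names. A TDD-style test asserts on fine-grained, code-level behavior (written before the implementation), while a BDD-style test phrases the same check as a stakeholder-readable given/when/then scenario.

```python
import unittest


class ShoppingCart:
    """Minimal implementation, written after the tests below (test-first)."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class TestShoppingCartTDD(unittest.TestCase):
    """TDD style: small unit tests driving the design of the code."""

    def test_empty_cart_totals_zero(self):
        self.assertEqual(ShoppingCart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 1.50)
        self.assertEqual(cart.total(), 14.00)


class TestShoppingCartBDD(unittest.TestCase):
    """BDD style: the same behavior expressed as a given/when/then scenario
    that non-technical stakeholders can read and validate."""

    def test_customer_sees_running_total(self):
        # Given a customer with an empty cart
        cart = ShoppingCart()
        # When they add a book priced 12.50 and a pen priced 1.50
        cart.add("book", 12.50)
        cart.add("pen", 1.50)
        # Then the displayed total is 14.00
        self.assertEqual(cart.total(), 14.00)
```

In practice BDD scenarios are usually written in a dedicated specification language (e.g. Gherkin) and bound to step definitions, but the given/when/then comments above capture the communication-oriented framing the abstract attributes to BDD, versus the code-centric framing of TDD.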
Related papers
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions.
Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes.
We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- TDDBench: A Benchmark for Training data detection [42.49625153675721]
Training Data Detection (TDD) is a task aimed at determining whether a specific data instance is used to train a machine learning model.
There is no comprehensive benchmark to thoroughly evaluate the effectiveness of TDD methods.
We benchmark 21 different TDD methods across four detection paradigms and evaluate their performance from five perspectives.
arXiv Detail & Related papers (2024-11-05T05:48:48Z)
- Dataset Distillation from First Principles: Integrating Core Information Extraction and Purposeful Learning [10.116674195405126]
We argue that a precise characterization of the underlying optimization problem must specify the inference task associated with the application of interest.
Our formalization reveals novel applications of DD across different modeling environments.
We present numerical results for two case studies important in contemporary settings.
arXiv Detail & Related papers (2024-09-02T18:11:15Z)
- Identifying Technical Debt and Its Types Across Diverse Software Projects Issues [4.6173290119212265]
Technical Debt (TD) identification in software projects issues is crucial for maintaining code quality, reducing long-term maintenance costs, and improving overall project health.
This study advances TD classification using transformer-based models, addressing the critical need for accurate and efficient TD identification in large-scale software development.
arXiv Detail & Related papers (2024-08-17T07:46:54Z)
- Data-Juicer Sandbox: A Feedback-Driven Suite for Multimodal Data-Model Co-development [67.55944651679864]
We present a new sandbox suite tailored for integrated data-model co-development.
This sandbox provides a feedback-driven experimental platform, enabling cost-effective and guided refinement of both data and models.
arXiv Detail & Related papers (2024-07-16T14:40:07Z)
- PairCFR: Enhancing Model Training on Paired Counterfactually Augmented Data through Contrastive Learning [49.60634126342945]
Counterfactually Augmented Data (CAD) involves creating new data samples by applying minimal yet sufficient modifications to flip the label of existing data samples to other classes.
Recent research reveals that training with CAD may lead models to overly focus on modified features while ignoring other important contextual information.
We employ contrastive learning to promote global feature alignment in addition to learning counterfactual clues.
arXiv Detail & Related papers (2024-06-09T07:29:55Z)
- Domain-Driven Design in Software Development: A Systematic Literature Review on Implementation, Challenges, and Effectiveness [0.18726646412385334]
Domain-Driven Design (DDD) addresses software design challenges and is gaining attention in academia for its implementation and adoption.
This Systematic Literature Review (SLR) analyzes DDD research in software development to assess its effectiveness in solving architecture problems.
arXiv Detail & Related papers (2023-10-03T09:22:53Z)
- Behaviour Driven Development: A Systematic Mapping Study [2.320648715016107]
Behaviour Driven Development (BDD) uses scenarios written in semi-structured natural language to express software requirements.
There is a lack of secondary studies on BDD in the peer-reviewed scientific literature.
arXiv Detail & Related papers (2023-05-09T15:56:02Z)
- CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation [91.16551253297588]
COunterfactual Generation via Retrieval and Editing (CORE) is a retrieval-augmented generation framework for creating diverse counterfactual perturbations for training.
CORE first performs a dense retrieval over a task-related unlabeled text corpus using a learned bi-encoder.
CORE then incorporates these into prompts to a large language model with few-shot learning capabilities, for counterfactual editing.
arXiv Detail & Related papers (2022-10-10T17:45:38Z)
- Analyzing Dynamic Adversarial Training Data in the Limit [50.00850852546616]
Dynamic adversarial data collection (DADC) holds promise as an approach for generating diverse training sets.
We present the first study of longer-term DADC, where we collect 20 rounds of NLI examples for a small set of premise paragraphs.
Models trained on DADC examples make 26% fewer errors on our expert-curated test set compared to models trained on non-adversarial data.
arXiv Detail & Related papers (2021-10-16T08:48:52Z)
- Quantitatively Assessing the Benefits of Model-driven Development in Agent-based Modeling and Simulation [80.49040344355431]
This paper compares the use of MDD and ABMS platforms in terms of effort and developer mistakes.
The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo.
arXiv Detail & Related papers (2020-06-15T23:29:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.