Preliminary Guideline for Creating Boundary Artefacts in Software
Engineering
- URL: http://arxiv.org/abs/2306.05755v1
- Date: Fri, 9 Jun 2023 08:34:38 GMT
- Title: Preliminary Guideline for Creating Boundary Artefacts in Software
Engineering
- Authors: Raquel Ouriques, Fabian Fagerholm, Daniel Mendez, Tony Gorschek,
Baldvin Gislason Bern
- Abstract summary: Boundary Artefacts (BAs) can supply stakeholders with different boundaries, facilitating collaboration among social worlds.
When artefacts display inconsistencies, such as incorrect information, practitioners' trust in the BA decreases.
This study aimed to develop and validate a preliminary guideline to support the creation of trustworthy BAs.
- Score: 2.744809069021081
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Context: Software development benefits from having Boundary Artefacts (BAs),
as a single artefact can supply stakeholders with different boundaries,
facilitating collaboration among social worlds. When those artefacts display
inconsistencies, such as incorrect information, practitioners' trust in the BA
decreases. As trust is an essential factor guiding the
utilisation of BAs in software projects, it is necessary to understand which
principles should be observed when creating them. Objective: This study aimed
to develop and validate a preliminary guideline to support the creation of
trustworthy BAs. Method: We followed a multi-step approach. First, we developed
our guideline through a literature review and previous results from our case
study. Second, we submitted the guideline for expert evaluation via two
workshops and a survey. Finally, we adjusted our guideline by incorporating the
feedback obtained during the workshops. Results: We grouped the principles collected
from a literature review into three categories. The first category (Scope)
focuses on the scope, displaying principles referring to defining each
boundary's target audience, needs, and terminology. The second category
(Structure) relates to how the artefact's content is structured to meet
stakeholders' needs. The third (Management) refers to principles that can guide
the establishment of practices to manage the artefact throughout time. The
expert validation revealed that the principles contribute to creating
trustworthy BAs at different levels, and it confirmed the relevance and
usefulness of the guideline. Conclusions: The guideline strengthens BA traits
such as shared understanding, plasticity, and ability to transfer.
Practitioners can use the guideline to guide the creation of new BAs or to
evaluate current practices for existing BAs.
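To make the three categories concrete, below is a minimal, hypothetical Python sketch of how a practitioner might encode the guideline as a checklist and score an existing BA against it. Only the category names (Scope, Structure, Management) and their one-line descriptions come from the abstract; the individual checklist items, class names, and scoring logic are illustrative assumptions, not the paper's actual principles.

```python
from dataclasses import dataclass, field

# Hypothetical checklist built around the three principle categories named in
# the abstract (Scope, Structure, Management). The concrete items below are
# illustrative assumptions, not the paper's actual principles.
CHECKLIST = {
    "Scope": [
        "Target audience of each boundary is defined",
        "Stakeholder needs per boundary are documented",
        "Terminology is agreed with each audience",
    ],
    "Structure": [
        "Content is organised to meet stakeholders' needs",
    ],
    "Management": [
        "Ownership and update practices are established",
        "The artefact is reviewed and maintained over time",
    ],
}


@dataclass
class BoundaryArtefactAssessment:
    """Records which checklist items an existing BA satisfies."""

    satisfied: dict = field(default_factory=dict)

    def mark(self, category: str, item: str) -> None:
        # Record that the BA satisfies one checklist item.
        if item not in CHECKLIST.get(category, []):
            raise ValueError(f"Unknown item for {category!r}: {item!r}")
        self.satisfied.setdefault(category, set()).add(item)

    def coverage(self) -> dict:
        # Fraction of checklist items satisfied per category.
        return {
            category: len(self.satisfied.get(category, set())) / len(items)
            for category, items in CHECKLIST.items()
        }


if __name__ == "__main__":
    assessment = BoundaryArtefactAssessment()
    assessment.mark("Scope", "Target audience of each boundary is defined")
    assessment.mark("Management", "Ownership and update practices are established")
    for category, score in assessment.coverage().items():
        print(f"{category}: {score:.0%} of items satisfied")
```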
Related papers
- ATR-Bench: A Federated Learning Benchmark for Adaptation, Trust, and Reasoning [21.099779419619345]
  We introduce a unified framework for analyzing federated learning through three foundational dimensions: Adaptation, Trust, and Reasoning. ATR-Bench lays the groundwork for a systematic and holistic evaluation of federated learning with real-world relevance.
  arXiv Detail & Related papers (2025-05-22T16:11:38Z)
- GuideBench: Benchmarking Domain-Oriented Guideline Following for LLM Agents [22.390137173904943]
  Large language models (LLMs) have been widely deployed as autonomous agents capable of following user instructions and making decisions in real-world applications. GuideBench is a benchmark designed to evaluate the guideline-following performance of LLMs.
  arXiv Detail & Related papers (2025-05-16T15:32:23Z)
- Identifying Aspects in Peer Reviews [61.374437855024844]
  We develop a data-driven schema for deriving fine-grained aspects from a corpus of peer reviews. We introduce a dataset of peer reviews augmented with aspects and show how it can be used for community-level review analysis.
  arXiv Detail & Related papers (2025-04-09T14:14:42Z)
- The Dual-Edged Sword of Technical Debt: Benefits and Issues Analyzed Through Developer Discussions [8.304493605883744]
  Technical debt (TD) has long been one of the key factors influencing the maintainability of software products. This work collectively investigates practitioners' opinions on various perspectives of TD from a large collection of articles.
  arXiv Detail & Related papers (2024-07-30T17:54:36Z)
- IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning [94.52149969720712]
  IntCoOp learns to jointly align attribute-level inductive biases and class embeddings during prompt-tuning. IntCoOp improves CoOp by 7.35% in average performance across 10 diverse datasets.
  arXiv Detail & Related papers (2024-06-19T16:37:31Z)
- LEARN: Knowledge Adaptation from Large Language Model to Recommendation for Practical Industrial Application [54.984348122105516]
  We propose an Llm-driven knowlEdge Adaptive RecommeNdation (LEARN) framework that synergizes open-world knowledge with collaborative knowledge.
  arXiv Detail & Related papers (2024-05-07T04:00:30Z)
- A collection of principles for guiding and evaluating large language models [5.412690203810726]
  We identify and curate a list of 220 principles from the literature and derive a set of 37 core principles organized into seven categories. We conduct a small-scale expert survey, eliciting the subjective importance experts assign to different principles. We envision that the development of a shared model of principles can serve multiple purposes.
  arXiv Detail & Related papers (2023-12-04T12:06:12Z)
- Unity is Strength: Cross-Task Knowledge Distillation to Improve Code Review Generation [0.9208007322096533]
  We propose a novel deep-learning architecture, DISCOREV, based on cross-task knowledge distillation. In our approach, the fine-tuning of the comment generation model is guided by the code refinement model. Our results show that our approach generates better review comments as measured by the BLEU score.
  arXiv Detail & Related papers (2023-09-06T21:10:33Z)
- Context-faithful Prompting for Large Language Models [51.194410884263135]
  Large language models (LLMs) encode parametric knowledge about world facts. Their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks. We assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention.
  arXiv Detail & Related papers (2023-03-20T17:54:58Z)
- Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
  We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available. In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship. We propose a rectification module that uses such prior knowledge to refine model-generated pseudo labels.
  arXiv Detail & Related papers (2022-07-18T18:41:36Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
  Peer review is a key component of the publishing process in most fields of science. Existing NLP studies focus on the analysis of individual texts, whereas editorial assistance often requires modeling interactions between pairs of texts.
  arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- A Comprehensive Survey on Knowledge Graph Entity Alignment via Representation Learning [39.401580902256626]
  This paper provides a tutorial-type survey on representative entity alignment techniques. We propose two datasets to address the limitations of existing benchmark datasets. We conduct extensive experiments using the proposed datasets.
  arXiv Detail & Related papers (2021-03-28T06:23:48Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
  We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation. Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three). We are able to identify useful predictors for the final acceptance decision, as well as to help discover inconsistencies between numerical review ratings and the text sentiment conveyed by reviewers.
  arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- Principles to Practices for Responsible AI: Closing the Gap [0.1749935196721634]
  We argue that an impact assessment framework is a promising approach to close the principles-to-practices gap. We review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
  arXiv Detail & Related papers (2020-06-08T16:04:44Z)
- Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers [54.417299589288184]
  We investigate models for complementing the distributional knowledge of BERT with conceptual knowledge from ConceptNet and its corresponding Open Mind Common Sense (OMCS) corpus. Our adapter-based models substantially outperform BERT on inference tasks that require the type of conceptual knowledge explicitly present in ConceptNet and OMCS.
  arXiv Detail & Related papers (2020-05-24T15:49:57Z)