Interpersonal Trust in OSS: Exploring Dimensions of Trust in GitHub Pull
Requests
- URL: http://arxiv.org/abs/2311.04767v1
- Date: Wed, 8 Nov 2023 15:40:10 GMT
- Title: Interpersonal Trust in OSS: Exploring Dimensions of Trust in GitHub Pull
Requests
- Authors: Amirali Sajadi, Kostadin Damevski, Preetha Chatterjee
- Abstract summary: Interpersonal trust plays a crucial role in facilitating collaborative tasks, such as software development.
Previous research recognizes the significance of trust in an organizational setting, but there is a lack of understanding of how trust is exhibited in distributed teams.
To foster trust and collaboration in OSS teams, we need to understand what trust is and how it is exhibited in written developer communications.
- Score: 10.372820248341746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpersonal trust plays a crucial role in facilitating collaborative tasks,
such as software development. While previous research recognizes the
significance of trust in an organizational setting, there is a lack of
understanding of how trust is exhibited in distributed OSS teams, where there
is an absence of direct, in-person communications. To foster trust and
collaboration in OSS teams, we need to understand what trust is and how it is
exhibited in written developer communications (e.g., pull requests, chats). In
this paper, we first investigate various dimensions of trust to identify the
ways trusting behavior can be observed in OSS. Next, we sample a set of 100
GitHub pull requests from Apache Software Foundation (ASF) projects to analyze
and demonstrate how each dimension of trust can be exhibited. Our findings
provide preliminary insights into cues that might be helpful to automatically
assess team dynamics and establish interpersonal trust in OSS teams, leading to
successful and sustainable OSS.
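Since the paper points toward automatically assessing such cues, the sketch below illustrates what a first-pass, keyword-based cue tagger for PR comments might look like. This is a minimal illustration under assumed heuristics: the dimension names (ability, integrity, benevolence) are standard in the trust literature, but the patterns and the `tag_trust_cues` helper are hypothetical, not the paper's method.

```python
import re

# Hypothetical lexical cues for three commonly cited trust dimensions;
# the patterns are illustrative only, not taken from the paper.
TRUST_CUES = {
    "ability": [r"\bgreat (work|job)\b", r"\bexpert\b", r"\bwell[- ]tested\b"],
    "integrity": [r"\bas promised\b", r"\bconsistent\b", r"\btransparent\b"],
    "benevolence": [r"\bhappy to help\b", r"\bthank(s| you)\b", r"\bno rush\b"],
}

def tag_trust_cues(comment: str) -> dict[str, bool]:
    """Flag which trust dimensions a PR comment may exhibit."""
    text = comment.lower()
    return {
        dim: any(re.search(p, text) for p in patterns)
        for dim, patterns in TRUST_CUES.items()
    }

print(tag_trust_cues("Great work, thanks for the quick fix!"))
# {'ability': True, 'integrity': False, 'benevolence': True}
```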
Related papers
- Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z)
- Bayesian Methods for Trust in Collaborative Multi-Agent Autonomy [11.246557832016238]
In safety-critical and contested environments, adversaries may infiltrate and compromise a number of agents.
We analyze state-of-the-art multi-target tracking algorithms under this compromised agent threat model.
We design a trust estimation framework using hierarchical Bayesian updating.
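As a minimal sketch of Bayesian trust updating, the snippet below uses a flat Beta-Bernoulli model: each piece of evidence about an agent shifts a conjugate posterior. The paper's hierarchical formulation is richer; the `BetaTrust` class and its priors here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class BetaTrust:
    """Beta-Bernoulli trust state for one agent, counting consistent
    vs. inconsistent evidence from its reported observations."""
    alpha: float = 1.0  # prior pseudo-count of trustworthy behavior
    beta: float = 1.0   # prior pseudo-count of compromised behavior

    def update(self, evidence_consistent: bool) -> None:
        # Conjugate update: each observation shifts one pseudo-count.
        if evidence_consistent:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

agent = BetaTrust()
for ok in [True, True, False, True]:
    agent.update(ok)
print(f"posterior mean trust = {agent.mean:.2f}")  # 0.67
```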
arXiv Detail & Related papers (2024-03-25T17:17:35Z)
- TrustGuard: GNN-based Robust and Explainable Trust Evaluation with Dynamicity Support [59.41529066449414]
We propose TrustGuard, a GNN-based accurate trust evaluation model that supports trust dynamicity.
TrustGuard is designed with a layered architecture that contains a snapshot input layer, a spatial aggregation layer, a temporal aggregation layer, and a prediction layer.
Experiments show that TrustGuard outperforms state-of-the-art GNN-based trust evaluation models with respect to trust prediction in both single-timeslot and multi-timeslot settings.
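A rough, runnable skeleton of that layered design follows: the four named layers appear in order, but every internal operator (a linear stand-in for GNN spatial aggregation, a GRU for temporal aggregation) is an assumption, not TrustGuard's actual architecture.

```python
import torch
import torch.nn as nn

class TrustGuardSketch(nn.Module):
    """Skeleton of the four named layers; internals are placeholders."""
    def __init__(self, feat_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.spatial = nn.Linear(feat_dim, hidden)   # stand-in for GNN aggregation
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.predict = nn.Linear(hidden, 1)          # trust score per node

    def forward(self, snapshots: torch.Tensor) -> torch.Tensor:
        # snapshots: (nodes, timeslots, feat_dim) -- the snapshot input layer.
        h = torch.relu(self.spatial(snapshots))      # spatial aggregation per slot
        h, _ = self.temporal(h)                      # temporal aggregation across slots
        return torch.sigmoid(self.predict(h[:, -1])) # predict from the last state

scores = TrustGuardSketch()(torch.randn(5, 3, 16))
print(scores.shape)  # torch.Size([5, 1])
```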
arXiv Detail & Related papers (2023-06-23T07:39:12Z)
- TrustGNN: Graph Neural Network based Trust Evaluation via Learnable Propagative and Composable Nature [63.78619502896071]
Trust evaluation is critical for many applications such as cyber security, social communication and recommender systems.
We propose a new GNN-based trust evaluation method named TrustGNN, which smartly integrates the propagative and composable nature of trust graphs.
Specifically, TrustGNN designs specific propagative patterns for different propagative processes of trust, and distinguishes the contribution of different propagative processes to create new trust.
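The propagative and composable nature referred to above is often modeled as multiplying trust along a path and aggregating across paths; a tiny illustration of that classic baseline follows (TrustGNN instead learns distinct propagative patterns and their contributions, which this sketch does not attempt).

```python
from math import prod

def path_trust(edge_trusts: list[float]) -> float:
    # Compose trust along one propagation path (weakest-link style product).
    return prod(edge_trusts)

def aggregate(paths: list[list[float]]) -> float:
    # Average the per-path trust values; a learned model would instead
    # weight different propagative processes by their contribution.
    return sum(path_trust(p) for p in paths) / len(paths)

# Two paths from u to v: u->a->v and u->b->c->v (illustrative values).
print(round(aggregate([[0.9, 0.8], [0.95, 0.9, 0.7]]), 3))  # 0.659
```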
arXiv Detail & Related papers (2022-05-25T13:57:03Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven, multi-faceted trust model that incorporates many distinct features for a comprehensive analysis.
We evaluate the proposed framework on a trust-aware item recommendation task using a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- On the Importance of Trust in Next-Generation Networked CPS Systems: An AI Perspective [2.1055643409860734]
We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing the trust evidence can improve the performance and the security of Federated Learning.
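One common way to act on such trust evidence in Federated Learning is to weight each client's model update by its trust score during aggregation; a minimal sketch follows, with the caveat that this simple weighting scheme is an assumption, not the paper's protocol.

```python
import numpy as np

def trust_weighted_average(updates: list[np.ndarray],
                           trust: list[float]) -> np.ndarray:
    """Aggregate client model updates, weighting each by its trust score."""
    w = np.asarray(trust, dtype=float)
    w = w / w.sum()                      # normalize trust scores into weights
    return sum(wi * u for wi, u in zip(w, updates))

# Three clients; the low-trust (possibly compromised) one barely moves
# the aggregated update.
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([9.0, -9.0])]
print(trust_weighted_average(updates, trust=[0.9, 0.8, 0.05]))
```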
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's notion of interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
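To make the trust matrix concrete, the sketch below builds a small one from model confidences and oracle labels, using a deliberately simplified question-answer trust (confidence when correct, one minus confidence when wrong) rather than the paper's exact definition.

```python
import numpy as np

def trust_matrix(conf: np.ndarray, pred: np.ndarray,
                 oracle: np.ndarray, n_classes: int) -> np.ndarray:
    """M[i, j] = expected question-answer trust when the oracle answer
    is i and the actor (model) answers j. Simplified: trust equals the
    model's confidence when correct, 1 - confidence when wrong."""
    M = np.zeros((n_classes, n_classes))
    counts = np.zeros((n_classes, n_classes))
    qa_trust = np.where(pred == oracle, conf, 1.0 - conf)
    for t, i, j in zip(qa_trust, oracle, pred):
        M[i, j] += t
        counts[i, j] += 1
    # Average per actor-oracle scenario, leaving unseen cells at zero.
    return np.divide(M, counts, out=np.zeros_like(M), where=counts > 0)

conf = np.array([0.9, 0.6, 0.8, 0.7])
pred = np.array([0, 1, 1, 0])
oracle = np.array([0, 1, 0, 0])
print(trust_matrix(conf, pred, oracle, n_classes=2))
```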
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions.
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
- Towards Time-Aware Context-Aware Deep Trust Prediction in Online Social Networks [0.4061135251278187]
Trust can be defined as a measure to determine which source of information is reliable and with whom we should share or from whom we should accept information.
There are several applications for trust in Online Social Networks (OSNs), including social spammer detection, fake news detection, retweet behaviour detection and recommender systems.
Trust prediction is the process of predicting a new trust relation between two users who are not currently connected.
arXiv Detail & Related papers (2020-03-21T01:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.