Trust in Motion: Capturing Trust Ascendancy in Open-Source Projects using Hybrid AI
- URL: http://arxiv.org/abs/2210.02656v1
- Date: Thu, 6 Oct 2022 03:23:24 GMT
- Title: Trust in Motion: Capturing Trust Ascendancy in Open-Source Projects using Hybrid AI
- Authors: Huascar Sanchez and Briland Hitaj
- Abstract summary: This paper describes a methodology for understanding the notion of trust ascendancy.
It introduces the capabilities that are needed to localize trust ascendancy operations happening over open-source projects.
Preliminary results show the effectiveness of our method at capturing the trust ascendancy developed by individuals involved in a well-documented 2020 social engineering attack.
- Score: 0.5156484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open-source is frequently described as a driver for unprecedented
communication and collaboration, and the process works best when projects
support teamwork. Yet these cooperation processes in no way insulate project
contributors from considerations of trust, power, and influence. Indeed,
achieving the level of trust necessary to contribute to a project and thus
influence its direction is a constant process of change, and developers take
many different routes over many communication channels to achieve it. We refer
to this process of influence-seeking and trust-building as trust ascendancy.
This paper describes a methodology for understanding the notion of trust
ascendancy, and introduces the capabilities needed to localize trust
ascendancy operations happening over open-source projects. Much of the prior
work in understanding trust in open-source software development has focused on
a static view of the problem, studying it using various quantitative
measures. However, trust ascendancy is not static; rather, it adapts to changes
in the open-source ecosystem in response to developer role changes, new
functionality, new technologies, and so on. This paper is the first attempt to
articulate and study these signals, from a dynamic view of the problem. In that
respect, we identify related work that may help illuminate research challenges,
implementation tradeoffs, and complementary solutions.
Our preliminary results show the effectiveness of our method at capturing the
trust ascendancy developed by individuals involved in a well-documented 2020
social engineering attack. Our future plans highlight research challenges, and
encourage cross-disciplinary collaboration to create more automated, accurate,
and efficient ways of modeling and then tracking trust ascendancy in
open-source projects.
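
The paper itself provides no code here. As a loose, illustrative sketch of what "modeling and then tracking trust ascendancy" dynamically could look like in practice, the toy Python below scores each contributor from a stream of communication events, with older activity decaying so the score reflects current standing, and flags unusually steep ascents. All event types, weights, and thresholds are invented for illustration; this is not the authors' hybrid-AI method.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical influence weights per event type (invented for illustration).
EVENT_WEIGHTS = {
    "commit_merged": 1.0,
    "review_approved": 0.6,
    "issue_triage": 0.3,
    "mailing_list_reply": 0.2,
}

@dataclass
class Event:
    day: int    # days since project start
    actor: str  # contributor identifier
    kind: str   # one of EVENT_WEIGHTS

def trust_trajectories(events, half_life_days=30.0):
    """Per-actor trust scores sampled after each event.

    Older contributions decay with the given half-life, so a score
    reflects current standing rather than lifetime totals -- the
    dynamic view the abstract argues for.
    """
    decay = 0.5 ** (1.0 / half_life_days)  # per-day decay factor
    score = defaultdict(float)
    last_day = defaultdict(int)
    trajectories = defaultdict(list)
    for ev in sorted(events, key=lambda e: e.day):
        # Decay the actor's score for the days elapsed since their last event.
        score[ev.actor] *= decay ** (ev.day - last_day[ev.actor])
        last_day[ev.actor] = ev.day
        score[ev.actor] += EVENT_WEIGHTS.get(ev.kind, 0.0)
        trajectories[ev.actor].append((ev.day, score[ev.actor]))
    return trajectories

def rapid_ascents(trajectories, window_days=60, ratio=5.0):
    """Flag actors whose score grew more than `ratio`-fold within `window_days`."""
    flagged = []
    for actor, points in trajectories.items():
        for day, s in points:
            earlier = [v for d, v in points if day - window_days <= d < day]
            if earlier and s > ratio * min(earlier):
                flagged.append(actor)
                break
    return flagged
```

An attacker cultivating influence ahead of a takeover, as in the 2020 incident the paper analyzes, would show up as an unusually steep trajectory; the paper's contribution is localizing such operations across many communication channels, which this single-stream toy ignores.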
Related papers
- Characterising Open Source Co-opetition in Company-hosted Open Source Software Projects: The Cases of PyTorch, TensorFlow, and Transformers [5.2337753974570616]
Companies, including market rivals, have long collaborated on the development of open source software (OSS).
This collaboration results in a tangle of co-operation and competition known as "open source co-opetition".
arXiv Detail & Related papers (2024-10-23T19:35:41Z)
- A Roadmap for Software Testing in Open Collaborative Development Environments [14.113209837391183]
The distributed nature of open collaborative development, along with its diverse contributors and rapid iterations, presents new challenges for ensuring software quality.
This paper offers a comprehensive review and analysis of recent advancements in software quality assurance within open collaborative development environments.
arXiv Detail & Related papers (2024-06-08T10:50:24Z)
- Decentralized and Lifelong-Adaptive Multi-Agent Collaborative Learning [57.652899266553035]
Decentralized and lifelong-adaptive multi-agent collaborative learning aims to enhance collaboration among multiple agents without a central server.
We propose DeLAMA, a decentralized multi-agent lifelong collaborative learning algorithm with dynamic collaboration graphs.
arXiv Detail & Related papers (2024-03-11T09:21:11Z)
- A Factor Graph Model of Trust for a Collaborative Multi-Agent System [8.286807697708113]
Trust is the reliance and confidence an agent has in the information, behaviors, intentions, truthfulness, and capabilities of others within the system.
This paper introduces a new graphical approach that utilizes factor graphs to represent the interdependent behaviors and trustworthiness among agents.
Our method for evaluating trust is decentralized and considers key interdependent sub-factors such as proximity safety, consistency, and cooperation.
arXiv Detail & Related papers (2024-02-10T21:44:28Z)
- Experiential Co-Learning of Software-Developing Agents [83.34027623428096]
Large language models (LLMs) have brought significant changes to various domains, especially in software development.
We introduce Experiential Co-Learning, a novel LLM-agent learning framework.
Experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively.
arXiv Detail & Related papers (2023-12-28T13:50:42Z)
- Code Ownership in Open-Source AI Software Security [18.779538756226298]
We use code ownership metrics to investigate the correlation with latent vulnerabilities across five prominent open-source AI software projects.
The findings suggest a positive relationship between high-level ownership (characterised by a limited number of minor contributors) and a decrease in vulnerabilities.
With these novel code ownership metrics, we have implemented a Python-based command-line application to aid project curators and quality assurance professionals in evaluating and benchmarking their on-site projects.
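
As a rough illustration of the kind of ownership metrics described above, here is a minimal Python sketch in the spirit of the classic Bird et al. definitions; the paper's exact metric definitions and its command-line application are not reproduced here, and the 5% minor-contributor threshold is a conventional choice rather than necessarily the authors' own.

```python
from collections import Counter

def ownership_metrics(commit_authors, minor_threshold=0.05):
    """Compute ownership stats for one file from its commit author list.

    commit_authors: list of author names, one entry per commit touching the file.
    Returns (ownership, n_minor, n_major):
      ownership -- share of commits by the top contributor,
      n_minor   -- contributors below the threshold share,
      n_major   -- contributors at or above it.
    """
    counts = Counter(commit_authors)
    total = sum(counts.values())
    shares = {a: c / total for a, c in counts.items()}
    ownership = max(shares.values())
    n_minor = sum(1 for s in shares.values() if s < minor_threshold)
    n_major = len(shares) - n_minor
    return ownership, n_minor, n_major

# Example: one dominant owner, two minor contributors.
print(ownership_metrics(["alice"] * 40 + ["bob", "carol"]))
# -> (0.952..., 2, 1)
```

Under the finding quoted above, a file like the example (high top-contributor ownership, few minor contributors) would be expected to carry fewer latent vulnerabilities.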
arXiv Detail & Related papers (2023-12-18T00:37:29Z)
- Interpersonal Trust in OSS: Exploring Dimensions of Trust in GitHub Pull Requests [10.372820248341746]
Interpersonal trust plays a crucial role in facilitating collaborative tasks, such as software development.
Previous research recognizes the significance of trust in an organizational setting, but there is a lack of understanding of how trust is exhibited in distributed teams.
To foster trust and collaboration in OSS teams, we need to understand what trust is and how it is exhibited in written developer communications.
arXiv Detail & Related papers (2023-11-08T15:40:10Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Building Cooperative Embodied Agents Modularly with Large Language Models [104.57849816689559]
We address challenging multi-agent cooperation problems with decentralized control, raw sensory observations, costly communication, and multi-objective tasks instantiated in various embodied environments.
We harness the commonsense knowledge, reasoning ability, language comprehension, and text generation prowess of LLMs and seamlessly incorporate them into a cognitive-inspired modular framework.
Our experiments on C-WAH and TDW-MAT demonstrate that CoELA driven by GPT-4 can surpass strong planning-based methods and exhibit emergent effective communication.
arXiv Detail & Related papers (2023-07-05T17:59:27Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
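
As a toy reconstruction of the trust-matrix idea above, the Python sketch below assumes a simple question-answer trust score that rewards confidence on correct answers and penalizes it on incorrect ones; the paper's exact scoring may differ, and the alpha/beta exponents here are illustrative.

```python
import numpy as np

def question_answer_trust(confidence, correct, alpha=1.0, beta=1.0):
    # Reward confidence on correct answers, penalize it on wrong ones.
    return confidence ** alpha if correct else (1.0 - confidence) ** beta

def trust_matrix(oracle_labels, predicted_labels, confidences, n_classes):
    """M[i, j] = mean question-answer trust over samples where the oracle
    says class i and the model answers class j (NaN if no such samples)."""
    sums = np.zeros((n_classes, n_classes))
    counts = np.zeros((n_classes, n_classes))
    for y, yhat, c in zip(oracle_labels, predicted_labels, confidences):
        sums[y, yhat] += question_answer_trust(c, correct=(y == yhat))
        counts[y, yhat] += 1
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Diagonal cells summarize trust where the model agrees with the oracle; low off-diagonal cells localize the actor-oracle answer scenarios where trust breaks down.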