Weak Ties Explain Open Source Innovation
- URL: http://arxiv.org/abs/2411.05646v1
- Date: Fri, 08 Nov 2024 15:39:33 GMT
- Title: Weak Ties Explain Open Source Innovation
- Authors: Hongbo Fang, Patrick Park, James Evans, James Herbsleb, Bogdan Vasilescu
- Abstract summary: We study the correlation between developers' knowledge acquisition through three distinct interaction networks on GitHub and the innovativeness of the projects they develop.
Our findings suggest that the diversity of projects in which developers engage correlates positively with the innovativeness of their future project developments, whereas the volume of interactions exerts minimal influence.
- Score: 9.399494734600164
- License:
- Abstract: In a real-world social network, weak ties (reflecting low-intensity, infrequent interactions) act as bridges and connect people to different social circles, giving them access to diverse information and opportunities that are not available within one's immediate, close-knit vicinity. Weak ties can be crucial for creativity and innovation, as they introduce new ideas and approaches that people can then combine in novel ways, leading to innovative solutions and creative breakthroughs. Do weak ties facilitate creativity in software in similar ways? In this paper, we show that the answer is "yes." Concretely, we study the correlation between developers' knowledge acquisition through three distinct interaction networks on GitHub and the innovativeness of the projects they develop, across over 38,000 Python projects hosted on GitHub. Our findings suggest that the diversity of projects in which developers engage correlates positively with the innovativeness of their future project developments, whereas the volume of interactions exerts minimal influence. Notably, acquiring knowledge through weak interactions (e.g., starring) as opposed to strong ones (e.g., committing) emerges as a stronger predictor of future novelty.
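The abstract's two key predictors, the diversity of projects a developer engages with and the weak-vs-strong mix of their interactions, can be made concrete with a small sketch. The event log, the `WEAK` interaction set, and both function names below are illustrative assumptions, not the paper's actual operationalization; the paper's real measures are computed over GitHub interaction networks at much larger scale.

```python
from collections import defaultdict
from math import log

# Hypothetical event log: (developer, project, interaction) tuples.
# Assumption: starring/forking are weak ties, committing is a strong tie.
events = [
    ("alice", "numpy", "star"),
    ("alice", "flask", "star"),
    ("alice", "pandas", "commit"),
    ("bob", "numpy", "commit"),
    ("bob", "numpy", "commit"),
    ("bob", "numpy", "star"),
]

WEAK = {"star", "fork"}


def project_diversity(dev, events):
    """Shannon entropy (bits) of a developer's interactions across projects.

    Higher entropy means interactions are spread more evenly over more
    projects -- one simple proxy for 'diversity of projects engaged with'.
    """
    counts = defaultdict(int)
    for d, project, _ in events:
        if d == dev:
            counts[project] += 1
    total = sum(counts.values())
    return -sum((c / total) * log(c / total, 2) for c in counts.values())


def weak_tie_share(dev, events):
    """Fraction of a developer's interactions that are weak (star/fork)."""
    kinds = [kind for d, _, kind in events if d == dev]
    return sum(kind in WEAK for kind in kinds) / len(kinds)


# alice spreads three interactions over three projects, two of them weak;
# bob concentrates all three interactions on one project.
print(project_diversity("alice", events))  # log2(3) ≈ 1.585
print(project_diversity("bob", events))    # 0.0
print(weak_tie_share("alice", events))     # 2/3
```

Under the paper's finding, a developer like `alice` (high diversity, mostly weak ties) would be predicted to produce more novel future projects than `bob`, even though both have the same interaction volume.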
Related papers
- AI Can Enhance Creativity in Social Networks [1.8317588605009203]
We trained a model that predicts people's ideation performance using semantic and network-structural features.
SocialMuse maximizes people's predicted performances to generate peer recommendations for them.
We found treatment networks leveraging SocialMuse outperformed AI-agnostic control networks in several creativity measures.
arXiv Detail & Related papers (2024-10-20T03:33:25Z) - Collective Innovation in Groups of Large Language Models [28.486116730339972]
We study Large Language Models (LLMs) that play Little Alchemy 2, a creative video game originally developed for humans.
We study groups of LLMs that share information related to their behaviour and focus on the effect of social connectivity on collective performance.
Our work reveals opportunities and challenges for future studies of collective innovation that are becoming increasingly relevant as Generative Artificial Intelligence algorithms and humans innovate alongside each other.
arXiv Detail & Related papers (2024-07-07T13:59:46Z) - A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond [84.95530356322621]
This survey presents a systematic review of the advancements in code intelligence.
It covers over 50 representative models and their variants, more than 20 categories of tasks, and an extensive coverage of over 680 related works.
Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence.
arXiv Detail & Related papers (2024-03-21T08:54:56Z) - Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication [76.04373033082948]
Large Language Models (LLMs) have recently made significant strides in complex reasoning tasks through the Chain-of-Thought technique.
We propose Exchange-of-Thought (EoT), a novel framework that enables cross-model communication during problem-solving.
arXiv Detail & Related papers (2023-12-04T11:53:56Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Redefining Relationships in Music [55.478320310047785]
We argue that AI tools will fundamentally reshape our music culture.
People working in this space could mitigate the potential negative impacts on the practice, consumption, and meaning of music.
arXiv Detail & Related papers (2022-12-13T19:44:32Z) - The language and social behavior of innovators [0.0]
We analyze about 38,000 posts available in the intranet forum of a large multinational company.
We find that innovators write more, use a more complex language, introduce new concepts/ideas, and use positive but factual-based language.
arXiv Detail & Related papers (2022-09-20T07:01:25Z) - Social Network Structure Shapes Innovation: Experience-sharing in RL with SAPIENS [16.388726429030346]
In dynamic topologies, humans oscillate between innovating individually or in small clusters, and then sharing outcomes with others.
We show that experience sharing within a dynamic topology achieves the highest level of innovation across tasks.
These contributions can advance our understanding of optimal AI-AI, human-human, and human-AI collaborative networks.
arXiv Detail & Related papers (2022-06-10T12:47:45Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - Diversity of Skills and Collective Intelligence in GitHub [0.0]
We find that diversity of skills plays an essential role in the creation of links among users who exchange information.
The connections in networks related to actual coding are established among users with similar characteristics.
arXiv Detail & Related papers (2021-10-13T13:55:40Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.