Flat Teams Drive Scientific Innovation
- URL: http://arxiv.org/abs/2201.06726v2
- Date: Wed, 19 Jan 2022 17:26:31 GMT
- Title: Flat Teams Drive Scientific Innovation
- Authors: Fengli Xu, Lingfei Wu, James Evans
- Abstract summary: We show how individual activities cohere into broad roles of leadership through the direction and presentation of research.
The hidden hierarchy of a scientific team is characterized by its lead (or L)-ratio of members playing leadership roles to total team size.
We find that relative to flat, egalitarian teams, tall, hierarchical teams produce less novelty and more often develop existing ideas.
- Score: 43.65818554474622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With teams growing in all areas of scientific and scholarly research, we
explore the relationship between team structure and the character of knowledge
they produce. Drawing on 89,575 self-reports of team member research activity
underlying scientific publications, we show how individual activities cohere
into broad roles of (1) leadership through the direction and presentation of
research and (2) support through data collection, analysis and discussion. The
hidden hierarchy of a scientific team is characterized by its lead (or L)-ratio
of members playing leadership roles to total team size. The L-ratio is
validated through correlation with imputed contributions to the specific paper
and to science as a whole, which we use to effectively extrapolate the L-ratio
for 16,397,750 papers where roles are not explicit. We find that relative to
flat, egalitarian teams, tall, hierarchical teams produce less novelty and more
often develop existing ideas; increase productivity for those on top and
decrease it for those beneath; increase short-term citations but decrease
long-term influence. These effects hold within-person -- the same person on the
same-sized team produces science much more likely to disruptively innovate if
they work on a flat, high L-ratio team. These results suggest the critical role
flat teams play for sustainable scientific advance and the training and
advancement of scientists.
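The L-ratio itself is a simple fraction: the number of team members playing a leadership role (directing or presenting the research) divided by total team size, so higher values correspond to flatter teams. The sketch below illustrates that definition in Python; the specific activity-to-role mapping is an illustrative assumption, since the paper derives leadership and support roles from the actual self-reported contribution statements.
```python
# Minimal sketch of the L-ratio: members in leadership roles / team size.
# The LEADERSHIP_ACTIVITIES set is an illustrative assumption based on the
# abstract ("direction and presentation of research"); the paper infers
# roles from clustered self-reports of member activities.

LEADERSHIP_ACTIVITIES = {
    "conceived the research",   # direction
    "designed the research",    # direction
    "wrote the paper",          # presentation
}

def l_ratio(team):
    """team: dict mapping author name -> set of self-reported activities."""
    if not team:
        raise ValueError("team must contain at least one member")
    leads = sum(
        1 for activities in team.values()
        if activities & LEADERSHIP_ACTIVITIES   # any leadership activity counts
    )
    return leads / len(team)

# Example: a five-person team with two members in leadership roles
team = {
    "A": {"conceived the research", "wrote the paper"},
    "B": {"designed the research", "analyzed data"},
    "C": {"collected data"},
    "D": {"analyzed data", "discussed results"},
    "E": {"collected data", "discussed results"},
}
print(l_ratio(team))  # 0.4 -- higher L-ratios indicate flatter, more egalitarian teams
```
For papers without explicit contribution statements, the authors extrapolate the L-ratio from imputed contributions, so this snippet covers only the definition, not the full estimation pipeline.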
Related papers
- LLMs Assist NLP Researchers: Critique Paper (Meta-)Reviewing [106.45895712717612]
Large language models (LLMs) have shown remarkable versatility in various generative tasks.
This study focuses on how LLMs can assist NLP researchers.
To our knowledge, this is the first work to provide such a comprehensive analysis.
arXiv Detail & Related papers (2024-06-24T01:30:22Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- Mapping the Increasing Use of LLMs in Scientific Papers [99.67983375899719]
We conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals.
Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers.
arXiv Detail & Related papers (2024-04-01T17:45:15Z)
- How should the advent of large language models affect the practice of science? [51.62881233954798]
How should the advent of large language models affect the practice of science?
We have invited four diverse groups of scientists to reflect on this query, sharing their perspectives and engaging in debate.
arXiv Detail & Related papers (2023-12-05T10:45:12Z)
- Towards a Better Understanding of Learning with Multiagent Teams [4.746424588605832]
We show that some team structures help agents learn to specialize into specific roles, resulting in more favorable global results.
Large teams create credit-assignment challenges that reduce coordination, causing them to perform worse than smaller teams.
arXiv Detail & Related papers (2023-06-28T13:37:48Z)
- Automated Mining of Leaderboards for Empirical AI Research [0.0]
This study presents a comprehensive approach for generating Leaderboards for knowledge-graph-based scholarly information organization.
Specifically, we investigate the problem of automated Leaderboard construction using state-of-the-art transformer models, viz. BERT, SciBERT, and XLNet.
As a result, a vast share of empirical AI research can be organized in the next-generation digital libraries as knowledge graphs.
arXiv Detail & Related papers (2021-08-31T10:00:52Z)
- Team Power and Hierarchy: Understanding Team Success [11.09080707714613]
This research examines in depth the relationships between team power and team success in the field of Computer Science.
By analyzing 4,106,995 CS teams, we find that high power teams with flat structure have the best performance.
In contrast, for low-power teams, a hierarchical structure facilitates team performance.
arXiv Detail & Related papers (2021-08-09T15:10:58Z)
- What's New? Summarizing Contributions in Scientific Literature [85.95906677964815]
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.