Modeling Teams Performance Using Deep Representational Learning on
Graphs
- URL: http://arxiv.org/abs/2206.14741v1
- Date: Wed, 29 Jun 2022 16:12:22 GMT
- Authors: Francesco Carli, Pietro Foini, Nicolò Gozzi, Nicola Perra, Rossano
Schifanella
- Abstract summary: We propose a graph neural network model designed to predict a team's performance.
The model is based on three architectural channels: topological, centrality, and contextual.
A first mechanism allows pinpointing key members inside the team.
A second mechanism allows us to quantify the contributions of the three driver effects in determining the outcome performance.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The large majority of human activities require collaboration within and
across formal or informal teams. How the collaborative effort spent by teams
relates to their performance is still a matter of debate.
Teamwork results in a highly interconnected ecosystem of potentially
overlapping components where tasks are performed in interaction with team
members and across other teams. To tackle this problem, we propose a graph
neural network model designed to predict a team's performance while identifying
the drivers that determine such an outcome. In particular, the model is based
on three architectural channels: topological, centrality, and contextual which
capture different factors potentially shaping teams' success. We endow the
model with two attention mechanisms to boost model performance and allow
interpretability. A first mechanism allows pinpointing key members inside the
team. A second mechanism allows us to quantify the contributions of the three
driver effects in determining the outcome performance. We test model
performance on a wide range of domains, outperforming most of the classical and
neural baselines considered. Moreover, we include synthetic datasets
specifically designed to validate the model's ability to disentangle the
intended properties; on these datasets, our model vastly outperforms the baselines.
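The two attention mechanisms described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's actual architecture: the channel names match the abstract, but the embedding dimensions, random features, and attention parameterization (simple dot-product scoring) are all hypothetical stand-ins for the real GNN channels.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical team: 4 members, 8-dim embedding per member per channel.
rng = np.random.default_rng(0)
n_members, d = 4, 8
channels = {name: rng.normal(size=(n_members, d))
            for name in ("topological", "centrality", "contextual")}

# Attention mechanism 1: pool member embeddings into one team vector
# per channel; the weights expose which members drive the prediction.
a_node = rng.normal(size=d)               # hypothetical attention parameters
pooled, member_weights = {}, {}
for name, H in channels.items():
    w = softmax(H @ a_node)               # one weight per team member
    member_weights[name] = w
    pooled[name] = w @ H                  # weighted sum -> team vector

# Attention mechanism 2: weight the three channel vectors; the weights
# quantify how much each driver (topology, centrality, context)
# contributes to the predicted outcome.
a_chan = rng.normal(size=d)
Z = np.stack(list(pooled.values()))       # shape (3, d)
beta = softmax(Z @ a_chan)                # one weight per channel
team_vector = beta @ Z                    # final team representation
```

Because both sets of weights are softmax-normalized, they can be read directly as relative importances, which is what makes the mechanisms interpretable rather than only a performance boost.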
Related papers
- MultiAgent Collaboration Attack: Investigating Adversarial Attacks in Large Language Model Collaborations via Debate [24.92465108034783]
Large Language Models (LLMs) have shown exceptional results on current benchmarks when working individually.
The advancement in their capabilities, along with a reduction in parameter size and inference times, has facilitated the use of these models as agents.
We evaluate the behavior of a network of models collaborating through debate under the influence of an adversary.
arXiv Detail & Related papers (2024-06-20T20:09:37Z)
- Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
arXiv Detail & Related papers (2024-04-02T22:27:24Z)
- Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking [53.66999416757543]
We study how fine-tuning affects the internal mechanisms implemented in language models.
Fine-tuning enhances, rather than alters, the mechanistic operation of the model.
arXiv Detail & Related papers (2024-02-22T18:59:24Z)
- Body Segmentation Using Multi-task Learning [1.0832844764942349]
We present a novel multi-task model for human segmentation/parsing that involves three tasks.
The main idea behind the proposed Segmentation--Pose--DensePose model (SPD for short) is to learn a better segmentation model by sharing knowledge across different, yet related tasks.
The performance of the model is analysed through rigorous experiments on the LIP and ATR datasets and in comparison to a recent (state-of-the-art) multi-task body-segmentation model.
arXiv Detail & Related papers (2022-12-13T13:06:21Z)
- Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z)
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and classify them.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- UViM: A Unified Modeling Approach for Vision with Learned Guiding Codes [91.24112204588353]
We introduce UViM, a unified approach capable of modeling a wide range of computer vision tasks.
In contrast to previous models, UViM has the same functional form for all tasks.
We demonstrate the effectiveness of UViM on three diverse and challenging vision tasks.
arXiv Detail & Related papers (2022-05-20T17:47:59Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message-passing graph neural network that explicitly models spatio-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- Consistency-Aware Graph Network for Human Interaction Understanding [17.416289346143948]
We propose a consistency-aware graph network, which combines the representational ability of graph networks with consistency-aware reasoning to facilitate the human interaction understanding (HIU) task.
Our network consists of three components, a backbone CNN to extract image features, a factor graph network to learn third-order interactive relations among participants, and a consistency-aware reasoning module to enforce labeling and grouping consistencies.
arXiv Detail & Related papers (2020-11-20T07:49:21Z)
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottlenecks of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that involves a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.