GENIUS: A Novel Solution for Subteam Replacement with Clustering-based
Graph Neural Network
- URL: http://arxiv.org/abs/2211.04100v1
- Date: Tue, 8 Nov 2022 09:02:59 GMT
- Title: GENIUS: A Novel Solution for Subteam Replacement with Clustering-based
Graph Neural Network
- Authors: Chuxuan Hu, Qinghai Zhou, Hanghang Tong
- Abstract summary: Subteam replacement is defined as finding the optimal candidate set of people who can best function as an unavailable subset of members.
We propose GENIUS, a novel clustering-based graph neural network (GNN) framework that can capture team network knowledge for flexible subteam replacement.
- Score: 34.510076775330795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Subteam replacement is defined as finding the optimal candidate set of people
who can best function as an unavailable subset of members (i.e., subteam) for
certain reasons (e.g., conflicts of interests, employee churn), given a team of
people embedded in a social network working on the same task. Prior
investigations of this problem use graph kernels as the criterion for
measuring the similarity between the optimized team and the original
team. However, increasingly large social networks reveal fundamental
limitations of existing methods, including (1) the graph kernel-based
approaches are powerless to capture the key intrinsic correlations among node
features, (2) they generally search over the entire network for every member to
be replaced, making it extremely inefficient as the network grows, and (3) the
requirement of equal-sized replacement for the unavailable subteam can be
inapplicable due to limited hiring budget. In this work, we address the
limitations in the state-of-the-art for subteam replacement by (1) proposing
GENIUS, a novel clustering-based graph neural network (GNN) framework that can
capture team network knowledge for flexible subteam replacement, and (2)
equipping the proposed GENIUS with self-supervised positive team contrasting
training scheme to improve the team-level representation learning and
unsupervised node clusters to prune candidates for fast computation. Through
extensive empirical evaluations, we demonstrate the merits of the proposed
method: (1) effectiveness: being able to select candidate members that
significantly increase the similarity between the optimized and original teams,
and (2) efficiency: achieving more than 600 times speed-up in average running
time.
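The candidate-pruning idea in the abstract can be illustrated with a toy sketch: cluster node embeddings and restrict the replacement search to the clusters containing the unavailable members, instead of searching the entire network. This is an assumption-laden illustration (a tiny NumPy k-means on random embeddings, with a hypothetical `prune_candidates` helper), not the authors' GNN-based implementation.

```python
# Illustrative sketch of cluster-based candidate pruning: instead of scanning
# the whole network for each replacement, keep only non-team nodes that share
# a cluster with the leaving subteam members. All names here are hypothetical;
# the clustering is a minimal NumPy k-means, not GENIUS's clustering-based GNN.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each node to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def prune_candidates(embeddings, subteam, team, k=4):
    """Return non-team nodes that share a cluster with the leaving subteam."""
    labels = kmeans(embeddings, k)
    target = {labels[i] for i in subteam}
    return [i for i in range(len(embeddings))
            if labels[i] in target and i not in team]

rng = np.random.default_rng(1)
emb = rng.normal(size=(200, 16))   # stand-in for learned GNN node embeddings
team = [0, 1, 2, 3]
leaving = [2, 3]                   # the unavailable subteam
pool = prune_candidates(emb, leaving, team)
print(len(pool))                   # typically much smaller than the 196 non-team nodes
```

Restricting the search to the pruned pool is what makes the per-replacement cost independent of the full network size, which is the intuition behind the reported speed-up.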
Related papers
- Faster Optimal Coalition Structure Generation via Offline Coalition Selection and Graph-Based Search [61.08720171136229]
We present a novel algorithm, SMART, for the problem based on a hybridization of three innovative techniques.
Two of these techniques are based on dynamic programming, where we show a powerful connection between the coalitions selected for evaluation and the performance of the algorithms.
Our techniques bring a new way of approaching the problem and a new level of precision to the field.
arXiv Detail & Related papers (2024-07-22T23:24:03Z)
- Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks [11.182443036683225]
Distributed learning and adaptation have received significant interest and found wide-ranging applications in machine learning and signal processing.
This paper specifically focuses on a scenario where agents collaborate towards a common task.
Agents, acting as transmitters, collaboratively train their individual policies to maximize a global reward.
arXiv Detail & Related papers (2023-11-08T11:12:27Z)
- Reinforcement Learning for Node Selection in Branch-and-Bound [52.2648997215667]
Current state-of-the-art selectors utilize either hand-crafted ensembles that automatically switch between naive sub-node selectors, or learned node selectors that rely on individual node data.
We propose a novel simulation technique that uses reinforcement learning (RL) while considering the entire tree state, rather than just isolated nodes.
arXiv Detail & Related papers (2023-09-29T19:55:56Z)
- Decentralized Gossip-Based Stochastic Bilevel Optimization over Communication Networks [42.76623191830371]
We propose a gossip-based distributed bilevel optimization algorithm.
Agents can solve both the networked (inner) and outer optimization problems in a single time scale.
Our algorithm achieves state-of-the-art efficiency and test accuracy.
arXiv Detail & Related papers (2022-06-22T06:38:54Z)
- Frequent Itemset-driven Search for Finding Minimum Node Separators in Complex Networks [61.2383572324176]
We propose a frequent itemset-driven search approach, which integrates the concept of frequent itemset mining in data mining into the well-known memetic search framework.
It iteratively employs the frequent itemset recombination operator to generate promising offspring solutions based on itemsets that frequently occur in high-quality solutions.
In particular, it discovers 29 new upper bounds and matches 18 previous best-known bounds.
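The recombination operator described above can be sketched in miniature: treat each high-quality solution as a set of nodes, mine the nodes with high support across the elite set, and seed offspring from them. The `recombine` helper and the support threshold are illustrative assumptions, not the paper's actual operator.

```python
# Hypothetical sketch of frequent-itemset-driven recombination: nodes that
# appear in many high-quality solutions seed the next offspring solution.
from collections import Counter

def recombine(elite_solutions, size, min_support=0.6):
    counts = Counter(v for sol in elite_solutions for v in sol)
    threshold = min_support * len(elite_solutions)
    # the "frequent itemset": nodes appearing in >= min_support of the elites
    frequent = [v for v, c in counts.items() if c >= threshold]
    offspring = set(frequent[:size])
    # pad with the next most common nodes if the frequent itemset is too small
    for v, _ in counts.most_common():
        if len(offspring) >= size:
            break
        offspring.add(v)
    return offspring

elites = [[1, 2, 3, 7], [1, 2, 4, 7], [1, 2, 5, 7]]
print(sorted(recombine(elites, size=4)))  # nodes 1, 2, 7 appear in every elite
```

In the memetic framework this seeding step would be followed by local search; the sketch only shows how frequent itemsets bias offspring toward structure shared by good solutions.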
arXiv Detail & Related papers (2022-01-18T11:16:40Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Recurrent Graph Neural Network Algorithm for Unsupervised Network Community Detection [0.0]
This paper proposes a new variant of the recurrent graph neural network algorithm for unsupervised network community detection through modularity optimization.
The new algorithm's performance is compared against a popular and fast Louvain method and a more efficient but slower Combo algorithm recently proposed by the author.
arXiv Detail & Related papers (2021-03-03T16:50:50Z)
- An Efficient Framework for Clustered Federated Learning [26.24231986590374]
We address the problem of federated learning (FL) where users are distributed into clusters.
We propose the Iterative Federated Clustering Algorithm (IFCA).
We show that our algorithm is efficient in non-convex problems such as neural networks.
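IFCA's core alternation, each user estimating its cluster as the best-fitting model and the server averaging gradient steps within each estimated cluster, can be sketched on a toy linear-regression task. This is a simplified single-machine simulation under assumed settings (step size, cluster count, synthetic data), not the authors' code.

```python
# Toy single-machine sketch of the IFCA-style alternation on linear regression.
# Settings (step size 0.1, k=2 clusters, Gaussian data) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
k, d, users, n = 2, 5, 20, 30
true_w = rng.normal(size=(k, d))             # one ground-truth model per cluster
membership = rng.integers(0, k, size=users)  # hidden cluster of each user
data = []
for u in range(users):
    X = rng.normal(size=(n, d))
    data.append((X, X @ true_w[membership[u]]))

def avg_min_loss(w):
    # each user is scored under its best-fitting cluster model
    return np.mean([min(np.mean((X @ w[j] - y) ** 2) for j in range(k))
                    for X, y in data])

w = 0.1 * rng.normal(size=(k, d))            # server's k cluster models
init_loss = avg_min_loss(w)
for _ in range(50):
    # step 1: each user estimates its cluster as the lowest-loss model
    est = [int(np.argmin([np.mean((X @ w[j] - y) ** 2) for j in range(k)]))
           for X, y in data]
    # step 2: the server averages a gradient step within each estimated cluster
    for j in range(k):
        grads = [2 * X.T @ (X @ w[j] - y) / n
                 for (X, y), e in zip(data, est) if e == j]
        if grads:
            w[j] -= 0.1 * np.mean(grads, axis=0)
final_loss = avg_min_loss(w)
```

The alternation is the key design choice: cluster identities are never observed, so assignment and model fitting must bootstrap each other.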
arXiv Detail & Related papers (2020-06-07T08:48:59Z)
- Hierarchical clustering of bipartite data sets based on the statistical significance of coincidences [0.0]
We provide a hierarchical clustering algorithm based on a dissimilarity between entities that quantifies the probability that the features shared by two entities are due to mere chance.
The algorithm's running time is $O(n^2)$ when applied to a set of $n$ entities, and its outcome is a dendrogram exhibiting the connections of those entities.
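The dissimilarity described above, the probability that the features shared by two entities arise by chance, can be modeled as a hypergeometric tail. A minimal sketch, with `p_chance_overlap` as an assumed name and the hypergeometric model as an assumption about the paper's exact statistic:

```python
# Hedged sketch of a chance-coincidence dissimilarity: for two entities with
# k1 and k2 features out of m possible ones, the probability of sharing at
# least s features under random feature choice is a hypergeometric tail.
from math import comb

def p_chance_overlap(m, k1, k2, s):
    """P(shared features >= s) when k1 and k2 features are drawn at random."""
    total = comb(m, k2)
    return sum(comb(k1, i) * comb(m - k1, k2 - i)
               for i in range(s, min(k1, k2) + 1)) / total

# two entities sharing 8 of only 10 held features (out of 100 possible) are
# extremely unlikely to do so by chance, so their dissimilarity is tiny
print(p_chance_overlap(m=100, k1=10, k2=10, s=8) < 1e-6)
```

Computing this probability for all pairs of $n$ entities is what gives the quoted $O(n^2)$ scaling before the dendrogram is assembled.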
arXiv Detail & Related papers (2020-04-27T23:30:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.