WeaveNet for Approximating Two-sided Matching Problems
- URL: http://arxiv.org/abs/2310.12515v1
- Date: Thu, 19 Oct 2023 06:32:12 GMT
- Title: WeaveNet for Approximating Two-sided Matching Problems
- Authors: Shusaku Sone, Jiaxin Ma, Atsushi Hashimoto, Naoya Chiba, Yoshitaka
Ushiku
- Abstract summary: This paper proposes a novel graph neural network (GNN), WeaveNet, designed for bipartite graphs.
Our model achieved performance comparable to state-of-the-art algorithms specially designed for stable matching, for small numbers of agents.
- Score: 16.014942651874815
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Matching, a task to optimally assign limited resources under constraints, is
a fundamental technology for society. The task potentially has various
objectives, conditions, and constraints; however, the efficient neural network
architecture for matching is underexplored. This paper proposes a novel graph
neural network (GNN), \textit{WeaveNet}, designed for bipartite graphs. Since a
bipartite graph is generally dense, general GNN architectures lose node-wise
information by over-smoothing when deeply stacked. Such a phenomenon is
undesirable for solving matching problems. WeaveNet avoids it by preserving
edge-wise information while passing messages densely to reach a better
solution. To evaluate the model, we approximated one of the \textit{strongly
NP-hard} problems, \textit{fair stable matching}. Despite its inherent
difficulties and the network's general-purpose design, our model achieved
performance comparable to that of state-of-the-art algorithms specially
designed for stable matching, for small numbers of agents.
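For context, the classical baseline for stable matching is the Gale-Shapley deferred-acceptance algorithm, which the learned approaches above are typically compared against. A minimal sketch (this is the textbook algorithm, not the paper's network; all names are illustrative):

```python
def gale_shapley(men_prefs, women_prefs):
    """men_prefs[i] = ranked list of women for man i; likewise women_prefs.
    Returns a stable matching as {man: woman}."""
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's preference list (lower = better)
    rank = [{m: r for r, m in enumerate(women_prefs[w])} for w in range(n)]
    next_proposal = [0] * n   # index of the next woman each man will propose to
    engaged_to = {}           # woman -> man
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                    # w was free: accept
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])       # w trades up; old partner freed
            engaged_to[w] = m
        else:
            free_men.append(m)                   # w rejects m
    return {m: w for w, m in engaged_to.items()}
```

Note that Gale-Shapley finds *a* stable matching in polynomial time, but optimizing fairness over all stable matchings (the fair stable matching problem the paper targets) is strongly NP-hard, which is what motivates a learned approximation.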
Related papers
- A GREAT Architecture for Edge-Based Graph Problems Like TSP [8.922883855120416]
Graph neural networks (GNNs) are not well suited to operate on dense graphs, such as in routing problems.
We propose a novel edge-based neural model, the Graph Edge Attention Network (GREAT).
We show GREAT can produce sparse graphs while keeping most of the optimal edges.
arXiv Detail & Related papers (2024-08-29T17:07:43Z)
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
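The Sinkhorn algorithm mentioned here is a simple iterative scheme: alternately normalizing the rows and columns of a positive matrix drives it toward a doubly-stochastic matrix (soft assignment). A minimal sketch of the classic iteration (not LinSATNet's extended multi-marginal version):

```python
import numpy as np

def sinkhorn(logits, n_iters=50):
    """Project exp(logits) toward a doubly-stochastic matrix by
    alternating row and column normalization."""
    p = np.exp(logits - logits.max())            # positive matrix, numerically stable
    for _ in range(n_iters):
        p = p / p.sum(axis=1, keepdims=True)     # rows sum to 1
        p = p / p.sum(axis=0, keepdims=True)     # columns sum to 1
    return p
```

Because every step is differentiable, the iteration can be unrolled inside a network and trained end-to-end, which is what makes it usable as a satisfiability layer.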
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Combinatorial Optimization with Automated Graph Neural Networks [28.19349828026972]
We present a new class of AUTOmated GNNs for solving NP-hard CO problems, namely AutoGNP.
The idea of AutoGNP is to use graph neural architecture search algorithms to automatically find the best GNNs for a given NP-hard optimization problem.
arXiv Detail & Related papers (2024-06-05T02:43:41Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Network Interdiction Goes Neural [26.308933674471895]
Network interdiction problems are optimization problems involving two players.
One aims to solve an optimization problem on a network, while the other seeks to modify the network to thwart the first player's objectives.
arXiv Detail & Related papers (2024-05-26T02:34:26Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
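The Gumbel-Softmax trick referenced here draws an approximately one-hot, differentiable sample from a categorical distribution by adding Gumbel noise to the logits and applying a temperature-scaled softmax. A minimal sketch of the standard (non-kernelized) operator, which NodeFormer builds on:

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Differentiable approximate sample from Categorical(softmax(logits)).
    Lower tau -> closer to one-hot."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))                  # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau
    y = y - y.max(axis=-1, keepdims=True)         # stabilize the softmax
    expy = np.exp(y)
    return expy / expy.sum(axis=-1, keepdims=True)
```

Because the output is a smooth function of the logits, gradients can flow through the sampling step, which is what makes discrete edge selection trainable end-to-end.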
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the best models for IID (non-graph) data in standard supervised learning settings are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
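As a rough illustration of the idea (not PTDNet's actual architecture, which learns the scores with parameterized networks), edge pruning with a learned mask and an edge-count penalty can be sketched as:

```python
import numpy as np

def prune_edges(edge_scores, threshold=0.5, penalty_weight=0.01):
    """Soft per-edge keep probabilities via a sigmoid over learned scores;
    the penalty term (a smooth proxy for the edge count) discourages
    keeping task-irrelevant edges. Returns (boolean keep mask, penalty)."""
    keep_prob = 1.0 / (1.0 + np.exp(-edge_scores))   # in (0, 1)
    penalty = penalty_weight * keep_prob.sum()       # added to the task loss
    kept = keep_prob > threshold
    return kept, penalty
```

In training, the penalty is added to the task loss so the scores of unhelpful edges are pushed down; at inference, edges below the threshold are dropped.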
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Alleviating the Inconsistency Problem of Applying Graph Neural Network to Fraud Detection [78.88163190021798]
We introduce a new GNN framework, GraphConsis, to tackle the inconsistency problem.
Empirical analysis on four datasets indicates the inconsistency problem is crucial in a fraud detection task.
We also released a GNN-based fraud detection toolbox with implementations of SOTA models.
arXiv Detail & Related papers (2020-05-01T21:43:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.