Balanced Order Batching with Task-Oriented Graph Clustering
- URL: http://arxiv.org/abs/2008.09018v1
- Date: Wed, 19 Aug 2020 08:42:50 GMT
- Title: Balanced Order Batching with Task-Oriented Graph Clustering
- Authors: Lu Duan, Haoyuan Hu, Zili Wu, Guozheng Li, Xinhang Zhang, Yu Gong,
Yinghui Xu
- Abstract summary: We propose an end-to-end learning and optimization framework named Balanced Task-Oriented Graph Clustering Network (BTOGCN) to solve the Balanced Order Batching Problem (BOBP).
BOBP arises from the process of picking in Cainiao, the largest logistics platform in China.
- Score: 28.05598654297136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Balanced order batching problem (BOBP) arises from the process of warehouse
picking in Cainiao, the largest logistics platform in China. Batching orders
together in the picking process to form a single picking route reduces travel
distance. This matters because order picking is a labor-intensive process, and
good batching methods can yield substantial savings. The BOBP is an NP-hard
combinatorial optimization problem and
designing a good problem-specific heuristic under the quasi-real-time system
response requirement is non-trivial. In this paper, rather than designing
heuristics, we propose an end-to-end learning and optimization framework named
Balanced Task-Oriented Graph Clustering Network (BTOGCN) to solve the BOBP by
reducing it to a balanced graph clustering optimization problem. In BTOGCN, a
task-oriented estimator network is introduced to guide the type-aware
heterogeneous graph clustering networks to find a better clustering result
related to the BOBP objective. Through comprehensive experiments on
single-graph and multi-graph instances, we show: 1) our balanced task-oriented
graph clustering network directly exploits the guidance of the target signal and
outperforms the two-stage deep embedding and deep clustering method; 2) our
method achieves average picking-distance reductions of 4.57 m and 0.13 m over
the expert-designed algorithm on the single-graph and multi-graph sets,
respectively, and generalizes well to practical scenarios.
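To make the batching objective concrete, here is a minimal greedy seed-and-fill sketch: orders are sets of pick locations, batches hold a fixed number of orders, and a batch's travel cost is proxied by its count of distinct locations. This is a toy baseline with invented data, not BTOGCN and not Cainiao's expert-designed algorithm.

```python
# Toy balanced order batching: greedy seed-and-fill heuristic.
# Each order is a set of pick locations; every batch contains exactly
# `capacity` orders (balance), and travel cost is approximated by the
# number of distinct locations a batch must visit.

def batch_orders(orders, capacity):
    """Greedily group orders into balanced batches of `capacity` orders."""
    remaining = dict(enumerate(orders))
    batches = []
    while remaining:
        # Seed a new batch with the largest remaining order.
        seed_id = max(remaining, key=lambda i: len(remaining[i]))
        batch = [seed_id]
        locations = set(remaining.pop(seed_id))
        # Fill the batch with the orders sharing the most locations.
        while remaining and len(batch) < capacity:
            best = max(remaining, key=lambda i: len(locations & remaining[i]))
            batch.append(best)
            locations |= remaining.pop(best)
        batches.append((batch, len(locations)))
    return batches

orders = [
    {"A1", "A2", "B1"},   # order 0
    {"A1", "B1"},         # order 1
    {"C3", "C4"},         # order 2
    {"C3", "D1"},         # order 3
]
for ids, cost in batch_orders(orders, capacity=2):
    print(sorted(ids), "distinct locations:", cost)
```

A learned approach like BTOGCN replaces this hand-crafted overlap heuristic with a clustering network guided by the true routing objective.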
Related papers
- Deep Cut-informed Graph Embedding and Clustering [36.17182061654739]
We propose an innovative and non-GNN-based Deep Cut-informed Graph embedding and Clustering framework, namely DCGC.
For the encoding module, we derive a cut-informed graph embedding objective to fuse graph structure and attributes by minimizing their joint normalized cut.
For the clustering module, we utilize the optimal transport theory to obtain the clustering assignments.
arXiv Detail & Related papers (2025-03-09T14:24:09Z)
- BN-Pool: a Bayesian Nonparametric Approach to Graph Pooling [6.952045528182883]
We introduce BN-Pool, the first Bayesian nonparametric clustering-based pooling method for Graph Neural Networks (GNNs).
BN-Pool adaptively determines the number of supernodes in a coarsened graph.
We show that BN-Pool achieves superior performance across diverse benchmarks.
arXiv Detail & Related papers (2025-01-16T20:15:12Z)
- Large Scale Constrained Clustering With Reinforcement Learning [1.3597551064547502]
Given a network, allocating resources at the cluster level, rather than at each node, improves the efficiency of resource allocation and usage.
We propose an approach to solve this constrained clustering problem via reinforcement learning.
In the results section, we show that our algorithm finds near-optimal solutions, even for large-scale instances.
arXiv Detail & Related papers (2024-02-15T18:27:18Z)
- Sample-Efficient Clustering and Conquer Procedures for Parallel Large-Scale Ranking and Selection [0.0]
In parallel computing environments, correlation-based clustering can achieve an $\mathcal{O}(p)$ sample complexity reduction rate.
In large-scale AI applications such as neural architecture search, a screening-free version of our procedure surprisingly surpasses fully-sequential benchmarks in terms of sample efficiency.
arXiv Detail & Related papers (2024-02-03T15:56:03Z)
- Symmetry-preserving graph attention network to solve routing problems at multiple resolutions [1.9304772860080408]
We introduce the first completely equivariant model and training pipeline for solving routing problems.
It is essential to capture the multiscale structure of the input graph.
We propose a Multiresolution scheme in combination with Equi Graph Attention network (mEGAT) architecture.
arXiv Detail & Related papers (2023-10-24T06:22:20Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Let the Flows Tell: Solving Graph Combinatorial Optimization Problems
with GFlowNets [86.43523688236077]
Combinatorial optimization (CO) problems are often NP-hard and out of reach for exact algorithms.
GFlowNets have emerged as a powerful machinery to efficiently sample from composite unnormalized densities sequentially.
In this paper, we design Markov decision processes (MDPs) for different problems and propose to train conditional GFlowNets to sample from the solution space.
arXiv Detail & Related papers (2023-05-26T15:13:09Z) - One-step Bipartite Graph Cut: A Normalized Formulation and Its
Application to Scalable Subspace Clustering [56.81492360414741]
We show how to enforce a one-step normalized cut for bipartite graphs with linear-time complexity.
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
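For reference, the normalized cut criterion this entry builds on can be evaluated directly on a small affinity matrix. The following is an illustrative two-way Ncut on a toy graph with invented weights, not the paper's one-step bipartite formulation.

```python
# Normalized cut of a two-way partition (textbook criterion):
#   Ncut(A, B) = cut(A, B) / assoc(A, V) + cut(A, B) / assoc(B, V)
# W is a symmetric affinity matrix given as nested lists; in_A[i] marks
# whether node i belongs to partition A.

def normalized_cut(W, in_A):
    n = len(W)
    cut = sum(W[i][j] for i in range(n) for j in range(n)
              if in_A[i] and not in_A[j])
    assoc_A = sum(W[i][j] for i in range(n) for j in range(n) if in_A[i])
    assoc_B = sum(W[i][j] for i in range(n) for j in range(n) if not in_A[i])
    return cut / assoc_A + cut / assoc_B

# Two dense blocks {0,1} and {2,3} joined by weak cross edges.
W = [
    [0, 5, 1, 0],
    [5, 0, 0, 1],
    [1, 0, 0, 5],
    [0, 1, 5, 0],
]
print(normalized_cut(W, [True, True, False, False]))  # cuts only weak edges
print(normalized_cut(W, [True, False, True, False]))  # cuts the strong edges
```

The partition separating the two dense blocks yields a much smaller Ncut value, which is exactly the signal a normalized-cut clustering objective minimizes.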
arXiv Detail & Related papers (2023-05-12T11:27:20Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework, termed CGP, to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Arch-Graph: Acyclic Architecture Relation Predictor for
Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It is able to find top 0.16% and 0.29% architectures on average on two search spaces under the budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z) - GLAN: A Graph-based Linear Assignment Network [29.788755291070462]
We propose a learnable linear assignment solver based on deep graph networks.
The experimental results on a synthetic dataset reveal that our method outperforms state-of-the-art baselines.
We also embed the proposed solver into a popular multi-object tracking (MOT) framework to train the tracker in an end-to-end manner.
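The linear assignment problem that GLAN learns to solve has a simple exact baseline for tiny instances: enumerate all row-to-column permutations and keep the cheapest. Below is a stdlib-only brute-force sketch with a hypothetical cost matrix; practical solvers use the O(n^3) Hungarian algorithm instead.

```python
from itertools import permutations

# Brute-force linear assignment: assign each agent (row) to exactly one
# task (column) so that the total cost is minimized. Exponential in n,
# so suitable only for demonstrating the objective on tiny instances.

def solve_assignment(cost):
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]
perm, total = solve_assignment(cost)
print(perm, total)
```

A learned solver like GLAN amortizes this combinatorial search into a single forward pass over the assignment graph, trading exactness for speed and differentiability.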
arXiv Detail & Related papers (2022-01-05T13:18:02Z) - Improved Acyclicity Reasoning for Bayesian Network Structure Learning
with Constraint Programming [0.0]
Learning the structure of a Bayesian network (BNSL) from discrete data is known to be an NP-hard task.
In this work, we propose a new algorithm for discovering a subset of all possible cluster cuts.
We show that, despite being suboptimal, these cuts improve performance by orders of magnitude.
The resulting solver also compares favourably with GOBNILP, a state-of-the-art solver for the BNSL problem.
arXiv Detail & Related papers (2021-06-23T09:46:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.