Learning to Accelerate Heuristic Searching for Large-Scale Maximum
Weighted b-Matching Problems in Online Advertising
- URL: http://arxiv.org/abs/2005.04355v2
- Date: Tue, 12 May 2020 07:37:56 GMT
- Title: Learning to Accelerate Heuristic Searching for Large-Scale Maximum
Weighted b-Matching Problems in Online Advertising
- Authors: Xiaotian Hao, Junqi Jin, Jianye Hao, Jin Li, Weixun Wang, Yi Ma,
Zhenzhe Zheng, Han Li, Jian Xu and Kun Gai
- Abstract summary: Bipartite b-matching is fundamental in algorithm design, and has been widely applied to economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings because they require either intolerable running time or too much computational resource.
We propose NeuSearcher, which leverages the knowledge learned from previous instances to solve new problem instances.
- Score: 51.97494906131859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bipartite b-matching is fundamental in algorithm design, and has been widely
applied to economic markets, labor markets, etc. These practical problems
usually exhibit two distinct features, large scale and dynamics, which require
the matching algorithm to be repeatedly executed at regular intervals. However,
existing exact and approximate algorithms usually fail in such settings because
they require either intolerable running time or too much computational resource. To
address this issue, we propose \texttt{NeuSearcher}, which leverages the
knowledge learned from previous instances to solve new problem instances.
Specifically, we design a multichannel graph neural network to predict the
threshold of the matched edges' weights, by which the search region can be
significantly reduced. We further propose a parallel heuristic search algorithm
to iteratively improve the solution quality until convergence. Experiments on
both open and industrial datasets demonstrate that \texttt{NeuSearcher} achieves
a 2 to 3 times speedup while computing exactly the same matching solution
as the state-of-the-art approximation approaches.
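The threshold idea from the abstract can be illustrated with a minimal greedy sketch. This is a hedged stand-in, not the paper's NeuSearcher algorithm: edges whose weight falls below a (here hand-picked, in the paper GNN-predicted) threshold are pruned up front, shrinking the search region, and the remaining edges are matched greedily subject to per-node capacities b. The edge list and capacities are hypothetical.

```python
def greedy_b_matching(edges, capacity, threshold=0.0):
    """Greedy weighted b-matching with weight-threshold pruning.

    edges: list of (weight, u, v) tuples.
    capacity: dict mapping each node to its remaining slots (its b value).
    Edges with weight below `threshold` are never considered, which is
    the search-region reduction the predicted threshold enables.
    """
    remaining = dict(capacity)
    matched = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        if w < threshold:
            break  # all remaining edges are lighter than the threshold
        if remaining[u] > 0 and remaining[v] > 0:
            remaining[u] -= 1
            remaining[v] -= 1
            matched.append((u, v, w))
    return matched
```

With threshold 0 this is plain greedy matching; raising the threshold trades a smaller candidate set for possibly dropping light edges that an exact solver would keep.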
Related papers
- Scalable network reconstruction in subquadratic time [0.0]
We present a general algorithm applicable to a broad range of reconstruction problems that significantly outperforms the quadratic baseline.
Our algorithm relies on a second neighbor search that produces the best edge candidates with high probability.
We show that our algorithm is many orders of magnitude faster than the quadratic baseline.
arXiv Detail & Related papers (2024-01-02T19:00:13Z)
- Faster Matchings via Learned Duals [31.61057940283721]
We combine the idea of machine-learned predictions with the idea of "starting-warm" primal-dual algorithms.
First, predicted duals may be infeasible, so we give an algorithm that efficiently maps predicted infeasible duals to nearby feasible solutions.
Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution.
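The first step above (mapping infeasible predicted duals to nearby feasible ones) can be sketched on a toy instance. The row-wise repair rule and all numbers below are illustrative, not the paper's construction; for max-weight bipartite matching, duals are feasible when `yu[i] + yv[j] >= w[i][j]` holds for every edge.

```python
def repair_duals(weights, yu, yv):
    """Repair predicted duals into feasible ones for max-weight matching.

    weights: dense weight matrix, weights[i][j] = w(i, j).
    yu, yv: predicted (possibly infeasible) dual values per side.
    Hypothetical one-pass rule: raise yu[i] by the largest violation
    in its row, so every constraint yu[i] + yv[j] >= w[i][j] holds.
    """
    yu = list(yu)
    for i, row in enumerate(weights):
        deficit = max((w - yu[i] - yv[j] for j, w in enumerate(row)),
                      default=0.0)
        if deficit > 0:
            yu[i] += deficit  # smallest row-wise increase restoring feasibility
    return yu, list(yv)
```

The repaired duals are then a warm start for a primal-dual matching solver, which is the "starting-warm" idea the summary describes.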
arXiv Detail & Related papers (2021-07-20T21:11:09Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
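The role a learned heuristic plays in such a search can be sketched with plain A*, where `heuristic` is an ordinary callable standing in for the neural estimator. This is a generic sketch, not the HATS algorithm; the integer state space and cost structure below are hypothetical.

```python
import heapq

def a_star(start, neighbors, is_goal, heuristic):
    """A* search where `heuristic(state)` estimates remaining cost.

    neighbors(state) yields (step_cost, next_state) pairs.
    A learned model would supply `heuristic`; here it is any callable.
    Returns (total_cost, path) or None if the goal is unreachable.
    """
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return g, path
        if best_g.get(state, float('inf')) <= g:
            continue  # already expanded via a cheaper route
        best_g[state] = g
        for step_cost, nxt in neighbors(state):
            ng = g + step_cost
            heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None
```

A tighter heuristic expands fewer nodes, which is the acceleration a learned estimator is meant to provide; an inadmissible estimate trades optimality guarantees for speed.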
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Resource Allocation in Multi-armed Bandit Exploration: Overcoming Sublinear Scaling with Adaptive Parallelism [107.48538091418412]
We study exploration in multi-armed bandits when we have access to a divisible resource that can be allocated in varying amounts to arm pulls.
We focus in particular on the allocation of distributed computing resources, where we may obtain results faster by allocating more resources per pull.
arXiv Detail & Related papers (2020-10-31T18:19:29Z)
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
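The gradient-expectation connection can be checked on a tiny brute-force example (purely illustrative, not the paper's algorithms): in an edge-factored spanning-tree model, the marginal probability that edge i appears in a random tree equals the derivative of log Z with respect to log w_i, which the sketch verifies numerically on a weighted triangle.

```python
import itertools
import math

def spanning_trees(n, edges):
    """Brute-force enumeration: each (n-1)-edge acyclic subset is a spanning tree."""
    for subset in itertools.combinations(range(len(edges)), n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for idx in subset:
            ru, rv = find(edges[idx][0]), find(edges[idx][1])
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            yield subset

def log_partition(n, edges, log_w):
    """log Z, where Z sums the product of edge weights over spanning trees."""
    return math.log(sum(math.exp(sum(log_w[i] for i in tree))
                        for tree in spanning_trees(n, edges)))

def edge_marginal_via_gradient(n, edges, log_w, i, eps=1e-6):
    """p(edge i in tree) = d log Z / d log w_i, by central finite difference."""
    hi = list(log_w); hi[i] += eps
    lo = list(log_w); lo[i] -= eps
    return (log_partition(n, edges, hi) - log_partition(n, edges, lo)) / (2 * eps)
```

The efficient algorithms in the paper obtain the same quantities without enumeration; the brute-force version just makes the gradient-expectation identity concrete.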
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
- Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z)
- SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order search methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement.
arXiv Detail & Related papers (2020-06-06T19:28:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.