Lazy and Fast Greedy MAP Inference for Determinantal Point Process
- URL: http://arxiv.org/abs/2206.05947v1
- Date: Mon, 13 Jun 2022 07:33:32 GMT
- Title: Lazy and Fast Greedy MAP Inference for Determinantal Point Process
- Authors: Shinichi Hemmi, Taihei Oki, Shinsaku Sakaue, Kaito Fujii, Satoru Iwata
- Abstract summary: This paper presents how to combine the ideas of "lazy" and "fast", which have been considered incompatible in the literature.
Our lazy and fast greedy algorithm achieves almost the same time complexity as the current best one and runs faster in practice.
- Score: 17.50810164319995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The maximum a posteriori (MAP) inference for determinantal point processes
(DPPs) is crucial for selecting diverse items in many machine learning
applications. Although DPP MAP inference is NP-hard, the greedy algorithm often
finds high-quality solutions, and many researchers have studied its efficient
implementation. One classical and practical method is the lazy greedy
algorithm, which is applicable to general submodular function maximization,
while a recent fast greedy algorithm based on the Cholesky factorization is
more efficient for DPP MAP inference. This paper presents how to combine the
ideas of "lazy" and "fast", which have been considered incompatible in the
literature. Our lazy and fast greedy algorithm achieves almost the same time
complexity as the current best one and runs faster in practice. The idea of
"lazy + fast" is extendable to other greedy-type algorithms. We also give a
fast version of the double greedy algorithm for unconstrained DPP MAP
inference. Experiments validate the effectiveness of our acceleration ideas.
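To make the "lazy" idea in the abstract concrete, below is a minimal sketch (assumed names and setup, not the authors' implementation) of lazy greedy MAP inference for a DPP with PSD kernel matrix L. Since f(S) = log det L_S is submodular, stale marginal gains kept in a max-heap upper-bound the true gains, so most candidates never need re-evaluation. The paper's "fast" ingredient, per the abstract, makes each re-evaluation cheap via Cholesky factorization; this sketch instead recomputes log-determinants from scratch purely for clarity.

```python
# Minimal sketch (illustrative, not the paper's code) of lazy greedy
# MAP inference for a DPP: f(S) = log det L_S is submodular, so stale
# gains in a max-heap are valid upper bounds (the CELF-style lazy trick).
import heapq
import numpy as np

def logdet(L, S):
    """log det of the principal submatrix L_S (empty set -> 0)."""
    if not S:
        return 0.0
    sign, val = np.linalg.slogdet(L[np.ix_(S, S)])
    return val if sign > 0 else -np.inf

def lazy_greedy_dpp_map(L, k):
    """Greedily pick up to k items; stop once the best gain is non-positive."""
    n = L.shape[0]
    S, f_S = [], 0.0
    # Initial gains f({i}) - f(empty) = log L[i, i]; negate for a max-heap.
    heap = [(-np.log(L[i, i]), i) for i in range(n)]
    heapq.heapify(heap)
    while heap and len(S) < k:
        _, i = heapq.heappop(heap)
        gain = logdet(L, S + [i]) - f_S       # refresh the stale gain
        if heap and gain < -heap[0][0]:
            heapq.heappush(heap, (-gain, i))  # another item may be better; defer
            continue
        if gain <= 0:                         # refreshed gain tops every upper bound
            break
        S.append(i)                           # still the best after refreshing: accept
        f_S += gain
    return S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    L = X @ X.T + 1e-3 * np.eye(50)           # a PSD similarity kernel
    print(lazy_greedy_dpp_map(L, 5))
```

The accept test relies on submodularity: every heap entry upper-bounds the corresponding item's true gain, so a refreshed gain that still tops the heap is globally best and the item can be added without re-evaluating the rest.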
Related papers
- Optimizing Tensor Contraction Paths: A Greedy Algorithm Approach With Improved Cost Functions [1.3812010983144802]
We introduce a novel approach based on the greedy algorithm by opt_einsum that computes efficient contraction paths in less time.
With our approach, we are even able to compute paths for large problems where modern algorithms fail.
arXiv Detail & Related papers (2024-05-08T09:25:39Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - FastDiagP: An Algorithm for Parallelized Direct Diagnosis [64.65251961564606]
FastDiag is a typical direct diagnosis algorithm that supports diagnosis calculation without predetermining conflicts.
We propose a novel algorithm, so-called FastDiagP, which is based on the idea of speculative programming.
This mechanism helps to provide consistency checks with fast answers and boosts the algorithm's runtime performance.
arXiv Detail & Related papers (2023-05-11T16:26:23Z) - Practical, Provably-Correct Interactive Learning in the Realizable Setting: The Power of True Believers [12.09273192079783]
We consider interactive learning in the realizable setting and develop a general framework to handle problems ranging from best arm identification to active classification.
We design novel computationally efficient algorithms for the realizable setting that match the minimax lower bound up to logarithmic factors.
arXiv Detail & Related papers (2021-11-09T02:33:36Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Waypoint Planning Networks [66.72790309889432]
We propose a hybrid algorithm based on LSTMs with a local kernel - a classic algorithm such as A*, and a global kernel using a learned algorithm.
We compare WPN against A*, as well as related works including motion planning networks (MPNet) and value networks (VIN).
It is shown that WPN's search space is considerably less than A*, while being able to generate near optimal results.
arXiv Detail & Related papers (2021-05-01T18:02:01Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Faster Person Re-Identification [68.22203008760269]
We introduce a new solution for fast ReID by formulating a novel Coarse-to-Fine hashing code search strategy.
It uses shorter codes to coarsely rank broad matching similarities and longer codes to refine only a few top candidates for more accurate instance ReID.
Experimental results on 2 datasets show that our proposed method (CtF) is not only 8% more accurate but also 5x faster than contemporary hashing ReID methods.
arXiv Detail & Related papers (2020-08-16T03:02:49Z) - Performance Analysis of Meta-heuristic Algorithms for a Quadratic Assignment Problem [6.555180412600522]
A quadratic assignment problem (QAP) is an optimization problem that belongs to the class of NP-hard ones.
Heuristic and meta-heuristic algorithms are prevalent solution methods for this problem.
This paper is a comparative study applying different meta-heuristic algorithms to solve the QAP.
arXiv Detail & Related papers (2020-07-29T15:02:07Z)