The (Un)Scalability of Heuristic Approximators for NP-Hard Search Problems
- URL: http://arxiv.org/abs/2209.03393v1
- Date: Wed, 7 Sep 2022 18:02:02 GMT
- Title: The (Un)Scalability of Heuristic Approximators for NP-Hard Search Problems
- Authors: Sumedh Pendurkar, Taoan Huang, Sven Koenig, Guni Sharon
- Abstract summary: The A* algorithm is commonly used to solve NP-hard optimization problems.
We show that accurate approximation for many such problems is also NP-hard.
- Score: 25.641418039598587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The A* algorithm is commonly used to solve NP-hard combinatorial optimization
problems. When provided with an accurate heuristic function, A* can solve such
problems in time complexity that is polynomial in the solution depth. This fact
implies that accurate heuristic approximation for many such problems is also
NP-hard. In this context, we examine a line of recent publications that propose
the use of deep neural networks for heuristic approximation. We assert that
these works suffer from inherent scalability limitations since -- under the
assumption that P$\ne$NP -- such approaches result in either (a) network sizes
that scale exponentially in the instance sizes or (b) heuristic approximation
accuracy that scales inversely with the instance sizes. Our claim is supported
by experimental results for three representative NP-hard search problems that
show that fitting deep neural networks accurately to heuristic functions
necessitates network sizes that scale exponentially with the instance size.
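To make the claim concrete, here is a minimal Python sketch of A* driven by a heuristic callable `h`, for instance a trained network predicting cost-to-go. The interface (`successors`, `is_goal`) and all names are illustrative assumptions rather than the authors' implementation; the relevant point is that if `h` returns the true cost-to-go, then (with favourable tie-breaking) A* expands only states along an optimal path, so its running time is polynomial in the solution depth.

```python
import heapq
import itertools

def a_star(start, is_goal, successors, h):
    """A* search: expand states in order of f(n) = g(n) + h(n).

    `successors(state)` yields (next_state, edge_cost) pairs and `h(state)`
    estimates the remaining cost-to-go, e.g. via a trained neural network.
    If h equals the true cost-to-go, only states along an optimal path are
    expanded (given favourable tie-breaking); an inaccurate h can instead
    force exponentially many expansions.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares states directly
    frontier = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), next(tie), new_g, nxt, path + [nxt]))
    return None, float("inf")
```

This is the source of the scalability argument: a sub-exponentially sized network that computed such an `h` accurately for all instance sizes of an NP-hard problem would turn this loop into a polynomial-time optimal solver, so under P$\ne$NP either the network size must grow exponentially with the instance size or the approximation accuracy must degrade.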
Related papers
- Sum-of-Squares inspired Quantum Metaheuristic for Polynomial Optimization with the Hadamard Test and Approximate Amplitude Constraints [76.53316706600717]
The recently proposed quantum algorithm arXiv:2206.14999 is based on semidefinite programming (SDP).
We generalize the SDP-inspired quantum algorithm to sum-of-squares.
Our results show that our algorithm is suitable for large problems and approximates the best known classical results.
arXiv Detail & Related papers (2024-08-14T19:04:13Z)
- SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network Quantization [5.982922468400901]
We show that it is possible to achieve error bounds equivalent to those obtained in the order of the weights of a neural layer.
We prove that it is possible to achieve full-network bounds under an infinite alphabet and minimal assumptions on the input data.
arXiv Detail & Related papers (2023-09-20T00:35:16Z)
- Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems like Max-Cut [0.0]
In Nature Machine Intelligence 4, 367 (2022), Schuetz et al. provide a scheme to employ graph neural networks (GNNs) to solve a variety of classical, NP-hard optimization problems.
The paper describes how the network is trained on sample instances and how the resulting GNN is evaluated using widely adopted techniques to determine its ability to succeed.
However, closer inspection shows that the reported results for this GNN are only minutely better than those for gradient descent and are outperformed by a greedy algorithm (a minimal greedy baseline of this kind is sketched after this list).
arXiv Detail & Related papers (2022-10-02T20:50:33Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- Proof of the Theory-to-Practice Gap in Deep Learning via Sampling Complexity bounds for Neural Network Approximation Spaces [5.863264019032882]
We study the computational complexity of (deterministic or randomized) algorithms for approximating or integrating functions.
One of the most important problems in this field concerns the question of whether it is possible to realize theoretically provable neural network approximation rates.
arXiv Detail & Related papers (2021-04-06T18:55:20Z)
- Training Neural Networks is ER-complete [0.7251305766151019]
Given a neural network, training data, and a threshold, it was known that it is NP-hard to find weights for the neural network such that the total error is below the threshold.
We determine the complexity of this fundamental problem precisely, by showing that it is ER-complete.
This means that it is equivalent to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution.
arXiv Detail & Related papers (2021-02-19T08:28:37Z)
- Size and Depth Separation in Approximating Natural Functions with Neural Networks [52.73592689730044]
We show the benefits of size and depth for approximation of natural functions with ReLU networks.
We show a complexity-theoretic barrier to proving such results beyond size $O(d)$.
We also show an explicit natural function that can be approximated with networks of size $O(d)$.
arXiv Detail & Related papers (2021-01-30T21:30:11Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings because they either require intolerable running time or too many computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
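As a point of reference for the Max-Cut entry above, the greedy baseline it mentions can be as simple as a single greedy assignment pass followed by 1-flip local search. The sketch below is an assumption about what such a baseline looks like, not the specific algorithm used in that comparison.

```python
import random

def greedy_max_cut(num_nodes, edges, seed=0):
    """Greedy assignment plus 1-flip local search for (weighted) Max-Cut.

    `edges` is a list of (u, v, weight) triples over nodes 0..num_nodes-1.
    This is a generic greedy baseline of the kind referenced above; the
    exact variant used in the cited comparison may differ.
    """
    rng = random.Random(seed)
    side = [None] * num_nodes
    adj = [[] for _ in range(num_nodes)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    # Greedy pass: put each node on the side that cuts more already-placed weight.
    for node in rng.sample(range(num_nodes), num_nodes):
        cut_if_0 = sum(w for nb, w in adj[node] if side[nb] == 1)
        cut_if_1 = sum(w for nb, w in adj[node] if side[nb] == 0)
        side[node] = 0 if cut_if_0 >= cut_if_1 else 1

    # 1-flip local search: flip any node whose move strictly increases the cut.
    improved = True
    while improved:
        improved = False
        for node in range(num_nodes):
            same = sum(w for nb, w in adj[node] if side[nb] == side[node])
            cut = sum(w for nb, w in adj[node] if side[nb] != side[node])
            if same > cut:
                side[node] = 1 - side[node]
                improved = True

    cut_value = sum(w for u, v, w in edges if side[u] != side[v])
    return side, cut_value
```

For nonnegative weights, the local-search pass alone already guarantees a cut of at least half the total edge weight (every node ends with at least half of its incident weight cut), which makes this a natural sanity-check baseline for learned Max-Cut heuristics.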