Improved Sparse Ising Optimization
- URL: http://arxiv.org/abs/2311.09275v1
- Date: Wed, 15 Nov 2023 17:59:06 GMT
- Title: Improved Sparse Ising Optimization
- Authors: Kenneth M. Zick
- Abstract summary: This report presents new data demonstrating significantly higher performance on some longstanding benchmark problems with up to 20,000 variables.
Relative to leading reported combinations of speed and accuracy, a proof-of-concept implementation reached targets 2-4 orders of magnitude faster.
The data suggest exciting possibilities for pushing the sparse Ising performance frontier to potentially strengthen algorithm portfolios, AI toolkits and decision-making systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse Ising problems can be found in application areas such as logistics,
condensed matter physics and training of deep Boltzmann networks, but can be
very difficult to tackle with high efficiency and accuracy. This report
presents new data demonstrating significantly higher performance on some
longstanding benchmark problems with up to 20,000 variables. The data come from
a new heuristic algorithm tested on the large sparse instances from the Gset
benchmark suite. Relative to leading reported combinations of speed and
accuracy (e.g., from Toshiba's Simulated Bifurcation Machine and Breakout Local
Search), a proof-of-concept implementation reached targets 2-4 orders of
magnitude faster. For two instances (G72 and G77) the new algorithm discovered
a better solution than all previously reported values. Solution bitstrings
confirming these two best solutions are provided. The data suggest exciting
possibilities for pushing the sparse Ising performance frontier to potentially
strengthen algorithm portfolios, AI toolkits and decision-making systems.
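The report does not disclose the new heuristic itself, so no attempt is made to reproduce it here. Purely for orientation, the sketch below shows a minimal single-spin-flip greedy descent on a sparse Ising instance; the coupling dictionary `J`, the toy 4-cycle instance, and all parameters are illustrative assumptions, not the paper's algorithm.

```python
import random

def ising_energy(J, s):
    # E(s) = -sum over couplings J_ij of J_ij * s[i] * s[j]
    return -sum(w * s[i] * s[j] for (i, j), w in J.items())

def greedy_descent(J, n, seed=0):
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    nbrs = {i: [] for i in range(n)}          # adjacency lists for O(degree) updates
    for (i, j), w in J.items():
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    h = [sum(w * s[j] for j, w in nbrs[i]) for i in range(n)]  # local fields h_i
    improved = True
    while improved:                            # sweep until no single flip helps
        improved = False
        for i in range(n):
            if 2 * s[i] * h[i] < 0:            # flipping spin i lowers the energy
                s[i] = -s[i]
                for j, w in nbrs[i]:
                    h[j] += 2 * w * s[i]       # incremental field update
                improved = True
    return s, ising_energy(J, s)

# toy 4-cycle with antiferromagnetic couplings (illustrative, not a Gset instance)
J = {(0, 1): -1.0, (1, 2): -1.0, (2, 3): -1.0, (3, 0): -1.0}
print(greedy_descent(J, 4))
```

Incremental local-field updates keep each flip O(degree), which is what makes single-spin moves cheap on sparse instances like those in the Gset suite.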
Related papers
- Learning the hub graphical Lasso model with the structured sparsity via an efficient algorithm [1.0923877073891446]
We introduce a two-phase algorithm to estimate hub graphical models.
The proposed algorithm first generates a good initial point via a dual alternating direction method of multipliers.
It then warm-starts a semismooth Newton (SSN) based augmented Lagrangian method (ALM) to compute a solution that is accurate enough for practical tasks.
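As a generic stand-in for the first phase, here is a textbook primal ADMM for the plain graphical lasso. The paper uses a dual ADMM and a hub-structured penalty, so this sketch only illustrates the kind of crude-but-cheap solver whose output could warm-start a second-phase method; `lam`, `rho`, and the iteration count are arbitrary assumptions.

```python
import numpy as np

def graphical_lasso_admm(S, lam, rho=1.0, iters=200):
    """Primal ADMM for: min -logdet(T) + tr(S @ T) + lam * ||T||_1 (off-diagonal)."""
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(iters):
        # Theta update: closed form via eigendecomposition of rho*(Z - U) - S
        w, Q = np.linalg.eigh(rho * (Z - U) - S)
        t = (w + np.sqrt(w ** 2 + 4 * rho)) / (2 * rho)
        Theta = (Q * t) @ Q.T
        # Z update: soft-thresholding (L1 prox), leaving the diagonal unpenalized
        A = Theta + U
        Z = np.sign(A) * np.maximum(np.abs(A) - lam / rho, 0.0)
        np.fill_diagonal(Z, np.diag(A))
        U += Theta - Z
    return Z

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
S = np.cov(X, rowvar=False)
print(np.round(graphical_lasso_admm(S, lam=0.2), 2))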
arXiv Detail & Related papers (2023-08-17T08:24:28Z)
- Fast Bayesian Optimization of Needle-in-a-Haystack Problems using Zooming Memory-Based Initialization [73.96101108943986]
A Needle-in-a-Haystack problem arises when there is an extreme imbalance of optimum conditions relative to the size of the dataset.
We present a Zooming Memory-Based Initialization algorithm that builds on conventional Bayesian optimization principles.
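A minimal sketch of the zooming idea, with plain random sampling standing in for the GP-based acquisition a real Bayesian optimizer would use; the objective, bounds, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def zoom_search(f, bounds, n_rounds=5, n_samples=50, n_best=5, seed=0):
    """After each round, shrink the search box to the region spanned by the
    best points in memory, then sample again inside the tighter box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    memory_x, memory_y = [], []
    for _ in range(n_rounds):
        X = rng.uniform(lo, hi, size=(n_samples, lo.size))
        y = [float(f(x)) for x in X]
        memory_x.extend(X)
        memory_y.extend(y)
        best = np.argsort(memory_y)[:n_best]           # minimization
        pts = np.array(memory_x)[best]
        lo, hi = pts.min(axis=0), pts.max(axis=0)      # zoom to the best region
    i = int(np.argmin(memory_y))
    return np.array(memory_x)[i], memory_y[i]

# toy needle: a sharp minimum at (0.3, 0.7) inside the unit square
f = lambda x: np.sum((x - np.array([0.3, 0.7])) ** 2)
print(zoom_search(f, ([0, 0], [1, 1])))
```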
arXiv Detail & Related papers (2022-08-26T23:57:41Z)
- SPDY: Accurate Pruning with Speedup Guarantees [29.284147465251685]
SPDY is a new compression method that automatically determines layer-wise sparsity targets that achieve a desired inference speedup.
We show that SPDY guarantees speedups while recovering higher accuracy relative to existing strategies, both for one-shot and gradual pruning scenarios.
We also extend our approach to the recently-proposed task of pruning with very little data, where we achieve the best known accuracy recovery when pruning to the GPU-supported 2:4 sparsity pattern.
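SPDY's contribution is the automatic choice of layer-wise sparsity targets, which is not reproduced here; the sketch below only illustrates the 2:4 sparsity pattern the summary mentions, via simple magnitude pruning (the grouping and layout are assumptions).

```python
import numpy as np

def prune_2_4(w):
    """Magnitude-based 2:4 pruning: in every contiguous group of 4 weights,
    zero out the 2 smallest magnitudes. Assumes w.size is divisible by 4."""
    flat = w.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(flat), axis=1)[:, :2]   # 2 smallest per group
    np.put_along_axis(flat, drop, 0.0, axis=1)
    return flat.reshape(w.shape)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
print(prune_2_4(W))
```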
arXiv Detail & Related papers (2022-01-31T10:14:31Z)
- DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm jointly searching for them.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% top-1 accuracy on the ImageNet dataset, demonstrating the strong performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z)
- Are we ready for beyond-application high-volume data? The Reeds robot perception benchmark dataset [3.781421673607643]
This paper presents a dataset, called Reeds, for research on robot perception algorithms.
The dataset aims to provide demanding benchmark opportunities for algorithms, rather than providing an environment for testing application-specific solutions.
arXiv Detail & Related papers (2021-09-16T23:21:42Z)
- Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching those of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries for the MNIST and ImageNet datasets, respectively.
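The paper's data-dependent hashing scheme is not reproduced here; for contrast, this is the classic bit-sampling LSH baseline for Hamming-space NNS that such work improves upon (the table counts and bit budgets are arbitrary assumptions).

```python
import random

def build_tables(points, n_tables=8, bits_per_table=6, dim=32, seed=0):
    """Bit-sampling LSH: each table hashes a point by a random subset of bits."""
    rng = random.Random(seed)
    masks = [rng.sample(range(dim), bits_per_table) for _ in range(n_tables)]
    tables = [dict() for _ in range(n_tables)]
    for idx, p in enumerate(points):
        for t, mask in enumerate(masks):
            tables[t].setdefault(tuple(p[i] for i in mask), []).append(idx)
    return masks, tables

def query(q, points, masks, tables):
    cand = set()
    for t, mask in enumerate(masks):
        cand.update(tables[t].get(tuple(q[i] for i in mask), []))
    # rank the collected candidates by exact Hamming distance
    return min(cand, key=lambda i: sum(a != b for a, b in zip(points[i], q)),
               default=None)

rng = random.Random(1)
pts = [tuple(rng.randint(0, 1) for _ in range(32)) for _ in range(1000)]
masks, tables = build_tables(pts)
print(query(pts[42], pts, masks, tables))  # finds a distance-0 match (index 42)
```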
arXiv Detail & Related papers (2021-08-11T20:21:30Z)
- Multi-objective Recurrent Neural Networks Optimization for the Edge -- a Quantization-based Approach [2.1987431057890467]
This article introduces a Multi-Objective Hardware-Aware Quantization (MOHAQ) method, which considers both hardware efficiency and inference error as objectives for mixed-precision quantization.
We propose a search technique named "beacon-based search" that retrains only selected solutions in the search space and uses them as beacons to estimate the effect of retraining on other solutions.
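The beacon-based search itself is specific to the paper, but the two objectives it trades off can be illustrated with a toy mixed-precision evaluation; the quantizer, layer shapes, and candidate configurations below are assumptions.

```python
import numpy as np

def quantize(w, bits):
    """Symmetric uniform quantization of a weight tensor to a given bitwidth."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

def evaluate_config(layers, bitwidths):
    """Two objectives per configuration: quantization error (proxy for
    inference error) and mean bitwidth (proxy for hardware cost)."""
    err = sum(float(np.mean((w - quantize(w, b)) ** 2))
              for w, b in zip(layers, bitwidths))
    return err, sum(bitwidths) / len(bitwidths)

rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(3)]
for cfg in [(8, 8, 8), (8, 4, 2), (4, 4, 4)]:
    print(cfg, evaluate_config(layers, cfg))
```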
arXiv Detail & Related papers (2021-08-02T22:09:12Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
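As a rough illustration of the search skeleton, here is an exact best-first tree search for a tiny integer least-squares instance; a learned model would replace the zero `heuristic`. The QR-based cost decomposition is standard, but the interface and instance are assumptions, not the HATS implementation.

```python
import heapq
import numpy as np

def ils_best_first(A, y, lo=-2, hi=2, heuristic=lambda depth: 0.0):
    """Best-first search for min_z ||y - A z||^2 with z in {lo..hi}^n.
    After QR factorization the residual decomposes level by level, so partial
    assignments have exact partial costs; `heuristic` estimates the remaining
    cost (zero keeps the search exact but slower)."""
    Q, R = np.linalg.qr(A)
    b = Q.T @ y
    n = A.shape[1]
    heap = [(heuristic(0), 0.0, ())]       # (priority, partial cost, suffix)
    while heap:
        _, cost, suffix = heapq.heappop(heap)
        k = n - 1 - len(suffix)            # index assigned at this level
        if k < 0:
            return np.array(suffix), cost  # cheapest full assignment
        for v in range(lo, hi + 1):
            z = (v,) + suffix              # suffix holds (z_k, ..., z_{n-1})
            r = b[k] - sum(R[k, k + i] * z[i] for i in range(len(z)))
            c = cost + r * r
            heapq.heappush(heap, (c + heuristic(len(z)), c, z))

A = np.array([[1.0, 0.3], [0.2, 1.0], [0.5, 0.5]])
y = A @ np.array([1, -2])
print(ils_best_first(A, y))  # recovers (1, -2) with ~zero residual
```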
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, either requiring intolerable running time or excessive computational resources.
We propose NeuSearcher, which leverages knowledge learned from previously solved instances to solve new problem instances.
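NeuSearcher's learned components are not reproduced here; the sketch shows only the plain greedy baseline for weighted b-matching that such heuristics build on (the toy ad-market instance is an assumption).

```python
def greedy_b_matching(edges, b):
    """Greedy heuristic for maximum weighted b-matching: scan edges in
    decreasing weight order and keep an edge if both endpoints still have
    spare capacity. `edges` holds (weight, u, v); `b` maps vertex -> capacity."""
    remaining = dict(b)
    matched, total = [], 0.0
    for w, u, v in sorted(edges, reverse=True):
        if remaining[u] > 0 and remaining[v] > 0:
            matched.append((u, v))
            total += w
            remaining[u] -= 1
            remaining[v] -= 1
    return matched, total

# toy ad-market instance: advertisers a1, a2 vs. keywords k1..k3
edges = [(3.0, "a1", "k1"), (2.0, "a1", "k2"), (2.5, "a2", "k1"), (1.0, "a2", "k3")]
print(greedy_b_matching(edges, {"a1": 2, "a2": 1, "k1": 1, "k2": 1, "k3": 1}))
```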
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Safeguarded Learned Convex Optimization [106.81731132086851]
Analytic optimization algorithms can be hand-designed to provably solve problems in an iterative fashion.
Data-driven algorithms can "learn to optimize" (L2O) with far fewer iterations and similar per-iteration cost compared to general-purpose optimization algorithms.
We present a Safe-L2O framework to fuse the advantages of these approaches.
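A minimal sketch of the safeguard pattern: accept the learned update only when it performs at least as well as a provably convergent fallback step. The gradient-norm acceptance test and the toy "learned" update are assumptions, not the paper's actual criterion.

```python
import numpy as np

def safeguarded_descent(grad, x0, learned_step, lr=0.1, iters=50):
    """Fuse a learned optimizer with a safe fallback: at each iteration,
    keep the learned proposal only if it makes at least as much progress
    (by gradient norm) as a plain gradient step."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        fallback = x - lr * grad(x)        # provably convergent update
        proposal = learned_step(x)         # fast but unguaranteed update
        if np.linalg.norm(grad(proposal)) <= np.linalg.norm(grad(fallback)):
            x = proposal
        else:
            x = fallback                   # safeguard kicks in
    return x

# toy quadratic f(x) = 0.5*||x||^2 with a made-up aggressive "learned" update
grad = lambda x: x
learned = lambda x: 0.2 * x                # contracts fast, so it is accepted
print(safeguarded_descent(grad, np.ones(3), learned))
```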
arXiv Detail & Related papers (2020-03-04T04:01:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.