Decomposed Quadratization: Efficient QUBO Formulation for Learning Bayesian Networks
- URL: http://arxiv.org/abs/2006.06926v6
- Date: Fri, 14 Feb 2025 02:26:40 GMT
- Title: Decomposed Quadratization: Efficient QUBO Formulation for Learning Bayesian Networks
- Authors: Yuta Shikuri
- Abstract summary: It is essential to minimize the number of binary variables used in the objective function. We propose a QUBO formulation that offers a bit capacity advantage over conventional quadratization techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms and hardware for solving quadratic unconstrained binary optimization (QUBO) problems have made significant recent progress. This advancement has focused attention on formulating combinatorial optimization problems as quadratic polynomials. To improve the performance of solving large QUBO problems, it is essential to minimize the number of binary variables used in the objective function. In this paper, we propose a QUBO formulation that offers a bit capacity advantage over conventional quadratization techniques. As a key application, this formulation significantly reduces the number of binary variables required for score-based Bayesian network structure learning. Experimental results on $16$ instances, ranging from $37$ to $223$ variables, demonstrate that our approach requires orders of magnitude fewer binary variables than quadratization. Moreover, an annealing machine implementing our formulation has outperformed existing algorithms in score maximization.
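To ground the bit-count concern, here is a minimal illustrative sketch (not from the paper) of what a QUBO instance is: minimize $x^\top Q x$ over binary vectors. Exhaustive search scales as $2^n$, and annealing hardware is capped by its physical bit capacity, which is why formulations that use fewer binary variables matter. The matrix below is a toy assumption.

```python
import itertools
import numpy as np

# Toy QUBO instance (an assumption for illustration, not from the paper):
# minimize x^T Q x over binary vectors x in {0,1}^n.
Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Brute force enumerates 2^n assignments: every extra binary variable
# doubles the search space and consumes a hardware bit on an annealer.
best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))
```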
Related papers
- A Simplification Method for Inequality Constraints in Integer Binary Encoding HOBO Formulations [0.0]
The proposed method addresses challenges associated with Quadratic Unconstrained Binary Optimization (QUBO) formulations.
By efficiently integrating constraints, the method enhances the computational efficiency and accuracy of both quantum and classical solvers (a generic slack-variable sketch follows this entry).
arXiv Detail & Related papers (2025-01-16T17:06:25Z)
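The following is a generic slack-variable sketch of how an inequality constraint is usually folded into a QUBO objective as a squared penalty; it illustrates the standard encoding that such simplification methods target, not the proposed method itself. The constraint, penalty weight, and slack width below are assumptions.

```python
import itertools

# Standard slack-variable encoding (illustrative, not the paper's method):
# enforce x1 + x2 + x3 <= 2 by adding binary slack bits s1, s2 so that
# x1 + x2 + x3 + s1 + 2*s2 == 2 at feasible minima, then squaring.
PENALTY = 10.0

def penalized_energy(x1, x2, x3, s1, s2):
    residual = (x1 + x2 + x3) + (s1 + 2 * s2) - 2
    return PENALTY * residual ** 2  # zero iff the equality holds

# Feasible assignments (sum <= 2) admit slack settings with zero penalty;
# the infeasible assignment (1, 1, 1) cannot reach zero.
for bits in itertools.product([0, 1], repeat=3):
    best = min(penalized_energy(*bits, s1, s2)
               for s1, s2 in itertools.product([0, 1], repeat=2))
    print(bits, best)
```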
- Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling [96.47086913559289]
Gradient-based algorithms are widely used in bilevel optimization.
We introduce a without-replacement sampling-based algorithm that achieves a faster convergence rate.
We validate our algorithms on both synthetic and real-world applications.
arXiv Detail & Related papers (2024-11-07T17:05:31Z)
- Quality Scalable Quantization Methodology for Deep Learning on Edge [0.20718016474717196]
Deep learning architectures employ heavy computation, and the bulk of the computational energy is taken up by the convolution operations in Convolutional Neural Networks.
The proposed work aims to reduce the energy consumption and size of CNNs so that machine learning techniques can be used in edge computing on ubiquitous computing devices.
Experiments on LeNet and ConvNets show an increase of up to 6% in zeros and memory savings of up to 82.4919%, while keeping accuracy near the state of the art.
arXiv Detail & Related papers (2024-07-15T22:00:29Z)
- Adaptive Computation Modules: Granular Conditional Computation For Efficient Inference [13.000030080938078]
The computational cost of transformer models makes them inefficient in low-latency or low-power applications.
We introduce the Adaptive Computation Module (ACM), a generic module that dynamically adapts its computational load to match the estimated difficulty of the input on a per-token basis.
Our evaluation of transformer models in computer vision and speech recognition demonstrates that substituting layers with ACMs significantly reduces inference costs without degrading downstream accuracy for a wide interval of user-defined budgets (a toy per-token gating sketch follows this entry).
arXiv Detail & Related papers (2023-12-15T20:39:43Z)
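As a toy illustration of per-token conditional computation (a hypothetical NumPy sketch, not the authors' ACM implementation; the gate, block count, and halting rule are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_blocks = 8, 4

# Hypothetical per-token conditional computation: each block refines the
# representation; a lightweight gate (here just a random linear probe)
# estimates how many blocks a given token needs.
blocks = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_blocks)]
gate_w = rng.standard_normal(d)

def adaptive_forward(token):
    budget = int(np.clip(token @ gate_w, 0, n_blocks - 1)) + 1  # 1..n_blocks
    h = token
    for W in blocks[:budget]:          # easy tokens stop early,
        h = h + np.tanh(h @ W)         # hard tokens use more blocks
    return h, budget

tokens = rng.standard_normal((5, d))
for t in tokens:
    _, used = adaptive_forward(t)
    print("blocks used:", used)
```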
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive minimax optimization algorithm (AdaFGDA) to solve these federated minimax problems.
It builds on our momentum-based variance-reduced and local-SGD techniques, and it flexibly incorporates various adaptive learning rates.
arXiv Detail & Related papers (2022-11-14T12:32:18Z)
- A Study of Scalarisation Techniques for Multi-Objective QUBO Solving [0.0]
Quantum and quantum-inspired optimisation algorithms have shown promising performance when applied to academic benchmarks as well as real-world problems.
However, QUBO solvers are single-objective solvers. To make them more efficient at solving problems with multiple objectives, a decision must be made on how to convert such multi-objective problems into single-objective ones (a weighted-sum sketch follows this entry).
arXiv Detail & Related papers (2022-10-20T14:54:37Z)
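A common scalarisation is the weighted sum, which folds several QUBO matrices into one before handing the problem to a single-objective solver. The sketch below uses toy matrices and weights (assumptions, not the study's benchmarks):

```python
import itertools
import numpy as np

# Weighted-sum scalarisation of two toy QUBO objectives (illustrative):
# minimize w1*(x^T Q1 x) + w2*(x^T Q2 x) == x^T (w1*Q1 + w2*Q2) x.
Q1 = np.array([[-2.0,  1.0], [0.0, -1.0]])
Q2 = np.array([[ 1.0, -3.0], [0.0,  2.0]])

def solve_scalarised(w1, w2):
    Q = w1 * Q1 + w2 * Q2  # single-objective QUBO handed to the solver
    return min(itertools.product([0, 1], repeat=2),
               key=lambda x: np.array(x) @ Q @ np.array(x))

# Sweeping the weights traces out (some of) the Pareto-optimal solutions.
for w1 in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(w1, solve_scalarised(w1, 1.0 - w1))
```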
- Low-bit Shift Network for End-to-End Spoken Language Understanding [7.851607739211987]
We propose the use of power-of-two quantization, which quantizes continuous parameters into low-bit power-of-two values.
This reduces computational complexity by removing expensive multiplication operations, since multiplying by a power of two becomes a bit shift, and by using low-bit weights (a sketch follows this entry).
arXiv Detail & Related papers (2022-07-15T14:34:22Z)
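A minimal sketch of generic power-of-two quantization (not necessarily the paper's exact scheme): each weight is rounded to the nearest signed power of two in the log domain, so multiplying by it reduces to a sign flip plus a bit shift.

```python
import numpy as np

def quantize_pow2(w, exp_min=-6, exp_max=0):
    """Round each weight to a signed power of two (generic sketch)."""
    sign = np.sign(w)
    mag = np.abs(w)
    exp = np.clip(np.round(np.log2(np.maximum(mag, 2.0 ** exp_min))),
                  exp_min, exp_max)
    return sign * (2.0 ** exp)

w = np.array([0.7, -0.3, 0.06, -0.011])
wq = quantize_pow2(w)
print(wq)  # e.g. [ 0.5  -0.25  0.0625  -0.015625 ]
# x * wq needs no multiplier: it is a bit shift of x by log2|wq|, with sign.
```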
- Local Stochastic Bilevel Optimization with Momentum-Based Variance Reduction [104.41634756395545]
We study Federated Bilevel Optimization problems. Specifically, we first propose FedBiO, a deterministic gradient-based algorithm.
We show FedBiO has complexity of $O(\epsilon^{-1.5})$.
Our algorithms show superior performance compared to other baselines in numerical experiments.
arXiv Detail & Related papers (2022-05-03T16:40:22Z)
- Enhanced Bilevel Optimization via Bregman Distance [104.96004056928474]
We propose a bilevel optimization method based on Bregman distance (SBiO-BreD).
We also propose an accelerated version of SBiO-BreD (ASBiO-BreD) using a variance-reduction technique.
arXiv Detail & Related papers (2021-07-26T16:18:43Z)
- Efficient QUBO transformation for Higher Degree Pseudo Boolean Functions [0.46040036610482665]
It is useful to have a method for transforming higher degree pseudo-Boolean problems to QUBO format.
This paper improves on the existing cubic-to-quadratic transformation approach by minimizing the number of additional variables as well as the penalty coefficient (a sketch of the standard substitution follows this entry).
arXiv Detail & Related papers (2021-07-24T22:13:42Z)
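For reference, the standard Rosenberg-style substitution that such work improves on replaces a cubic term $x_1 x_2 x_3$ with $y x_3$ plus a penalty forcing the auxiliary binary $y$ to equal $x_1 x_2$. The brute-force check below is an illustration of that baseline, not the paper's improved transformation:

```python
import itertools

P = 2.0  # penalty weight; any P >= 1 works for this unit-coefficient term

def cubic(x1, x2, x3):
    return x1 * x2 * x3

def quadratic(x1, x2, x3, y):
    # Rosenberg substitution: y stands in for x1*x2; the penalty term
    # is zero iff y == x1*x2 and positive otherwise.
    return y * x3 + P * (x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y)

for bits in itertools.product([0, 1], repeat=3):
    best = min(quadratic(*bits, y) for y in (0, 1))
    assert best == cubic(*bits)  # minimizing over y recovers the cubic term
print("quadratization verified on all 8 assignments")
```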
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a toy decomposition sketch follows this entry).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
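One way to read the encoding: an integer-quantized weight can be written as a scaled sum of {-1, +1} variables, with each scale level becoming one binary branch. The greedy decomposition below is a hedged illustration of that idea, not the authors' exact scheme:

```python
import numpy as np

def decompose_pm1(w_int, n_branches):
    """Write integers reachable as sum_k 2^k * b_k, b_k in {-1, +1}
    (odd values in a symmetric range), as per-level binary branches.
    Illustrative sketch, not the paper's code."""
    branches = []
    r = np.asarray(w_int, dtype=int)
    for k in reversed(range(n_branches)):
        b = np.where(r >= 0, 1, -1)   # greedy sign choice per scale level
        branches.append((2 ** k, b))
        r = r - (2 ** k) * b
    return branches

w = np.array([3, -1, 1, -3])          # 2-bit symmetric quantized weights
parts = decompose_pm1(w, n_branches=2)
recon = sum(s * b for s, b in parts)
print(parts, recon)                   # recon equals w
```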
- Optimal qubit assignment and routing via integer programming [0.22940141855172028]
We consider the problem of mapping a logical quantum circuit onto given hardware with limited two-qubit connectivity.
We model this problem as an integer linear program, using a network flow formulation with binary variables.
We consider several cost functions: an approximation of the fidelity of the circuit, its total depth, and a measure of cross-talk.
arXiv Detail & Related papers (2021-06-11T15:02:26Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Memristive Stochastic Computing for Deep Learning Parameter Optimization [1.6344851071810071]
Stochastic Computing (SC) is a computing paradigm that allows for low-cost and low-power computation of various arithmetic operations using bit streams and digital logic.
We demonstrate that, using a 40-nm Complementary Metal Oxide Semiconductor (CMOS) process, our scalable architecture occupies 1.55 mm$^2$ and consumes approximately 167 $\mu$W when optimizing the parameters of a Convolutional Neural Network (CNN) while it is being trained for a character recognition task, observing no notable reduction in accuracy post-training (a bit-stream sketch follows this entry).
arXiv Detail & Related papers (2021-03-11T07:10:32Z)
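The core SC trick is representing a value $p \in [0, 1]$ as the density of ones in a random bit stream, after which multiplication costs one AND gate per bit. A minimal generic sketch (not the memristive architecture above):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # stream length; accuracy improves roughly as 1/sqrt(N)

def to_stream(p, n=N):
    """Encode p in [0, 1] as a Bernoulli bit stream with P(bit=1) = p."""
    return rng.random(n) < p

a, b = 0.8, 0.25
# Multiplication in SC: bitwise AND of two independent streams,
# since P(a_bit & b_bit) = a * b.
product_stream = to_stream(a) & to_stream(b)
print(product_stream.mean())  # ~0.2, i.e. approximately a * b
```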
- Computational Overhead of Locality Reduction in Binary Optimization Problems [0.0]
We discuss the effects of locality reduction needed for the majority of solvers that can only accommodate 2-local (quadratic) cost functions.
Using a parallel tempering Monte Carlo solver on Microsoft Azure Quantum, we show that, after reduction to a corresponding 2-local representation, the problems become considerably harder to solve.
arXiv Detail & Related papers (2020-12-17T15:49:55Z)
- A Hybrid Framework Using a QUBO Solver For Permutation-Based Combinatorial Optimization [5.460573052311485]
We propose a hybrid framework to solve large-scale permutation-based problems using a high-performance quadratic unconstrained binary optimization solver.
We propose techniques to overcome the challenges in using a QUBO solver that typically comes with a limited number of bits.
arXiv Detail & Related papers (2020-09-27T07:15:25Z)
- ALF: Autoencoder-based Low-rank Filter-sharing for Efficient Convolutional Neural Networks [63.91384986073851]
We propose the autoencoder-based low-rank filter-sharing technique (ALF).
ALF shows a reduction of 70% in network parameters, 61% in operations and 41% in execution time, with minimal loss in accuracy.
arXiv Detail & Related papers (2020-07-27T09:01:22Z)
- WrapNet: Neural Net Inference with Ultra-Low-Resolution Arithmetic [57.07483440807549]
We propose a method that adapts neural networks to use low-resolution (8-bit) additions in the accumulators, achieving classification accuracy comparable to their 32-bit counterparts (a wraparound sketch follows this entry).
We demonstrate the efficacy of our approach on both software and hardware platforms.
arXiv Detail & Related papers (2020-07-26T23:18:38Z)
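The obstacle to 8-bit accumulation is overflow: partial sums wrap modulo 256. The sketch below is a generic illustration of that wraparound effect, not WrapNet's actual adaptation:

```python
import numpy as np

def wrap_int8(v):
    """Wrap an integer into signed 8-bit range, like a hardware accumulator."""
    return (v + 128) % 256 - 128

def dot_low_res(x, w):
    acc = 0
    for xi, wi in zip(x, w):
        acc = wrap_int8(acc + xi * wi)  # 8-bit adds: overflow wraps around
    return acc

rng = np.random.default_rng(0)
x = rng.integers(-8, 9, size=32)
w = rng.integers(-8, 9, size=32)
print(int(x @ w), dot_low_res(x, w))  # true 32-bit sum vs wrapped 8-bit sum
```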
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that approaches the verification problem in an iterative manner and explores two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Channel Assignment in Uplink Wireless Communication using Machine Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to the high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions.
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.