Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model
- URL: http://arxiv.org/abs/2410.12747v2
- Date: Mon, 23 Dec 2024 14:26:23 GMT
- Title: Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model
- Authors: Yuya Seki, Hyakka Nakada, Shu Tanaka
- Abstract summary: The construction of an Ising model using an FM is applied to black-box combinatorial optimization problems.
It is anticipated that the optimization performance of FMQA will be enhanced through an implementation of the warm-start method.
- Score: 1.194799054956877
- Abstract: This paper presents an initialization method that can approximate a given approximate Ising model with a high degree of accuracy using a factorization machine (FM), a machine learning model. The construction of an Ising model using an FM is applied to black-box combinatorial optimization problems using factorization machine with quantum annealing (FMQA). It is anticipated that the optimization performance of FMQA will be enhanced by implementing a warm start. Nevertheless, the optimal initialization method for leveraging the warm-start approach in FMQA remains undetermined. Consequently, the present study compares initialization methods based on random initialization and on low-rank approximation, and identifies a suitable one for warm starts in FMQA through numerical experiments. Furthermore, the properties of the low-rank initialization of the FM are analyzed using random matrix theory, demonstrating that the approximation accuracy of the proposed method is not significantly influenced by the specific Ising model under consideration. The findings of this study will facilitate advances in research on black-box combinatorial optimization with Ising machines.
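As a concrete illustration (a minimal sketch of one plausible construction, not necessarily the paper's exact procedure), the snippet below warm-starts a second-order FM, whose pairwise weight between variables i and j is the inner product <v_i, v_j>, from a given Ising model (J, h). Because only the off-diagonal entries of V V^T affect the FM's pairwise term, the diagonal of J can be shifted until the matrix is positive semi-definite and then factored by a truncated eigendecomposition; the function name init_fm_from_ising is our own.

```python
import numpy as np

def init_fm_from_ising(J, h, rank):
    """Hypothetical warm start: choose FM factors V (n x rank) so that
    the FM pairwise weight <v_i, v_j> approximates the coupling J_ij.

    Only off-diagonal entries of V @ V.T enter the FM's pairwise term,
    so the diagonal of J may be shifted to make the matrix positive
    semi-definite before a truncated eigendecomposition.
    """
    n = J.shape[0]
    lam_min = np.linalg.eigvalsh(J)[0]            # eigenvalues, ascending
    J_psd = J + max(0.0, -lam_min) * np.eye(n)    # PSD, same off-diagonals
    lam, U = np.linalg.eigh(J_psd)
    top = np.argsort(lam)[::-1][:rank]            # keep largest eigenvalues
    V = U[:, top] * np.sqrt(np.clip(lam[top], 0.0, None))
    return 0.0, h.copy(), V                       # FM bias, linear, factors

# toy check: how well the factors reproduce the off-diagonal couplings
rng = np.random.default_rng(0)
n = 8
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)
w0, w, V = init_fm_from_ising(J, h, rank=4)
off = ~np.eye(n, dtype=bool)
print(np.abs((V @ V.T)[off] - J[off]).max())      # rank-truncation error
```

With rank equal to n the off-diagonal fit is exact up to the PSD clipping; smaller ranks trade accuracy for fewer FM parameters.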
Related papers
- Research on short-term load forecasting model based on VMD and IPSO-ELM [0.0]
This study introduces an advanced combined forecasting method that integrates Variational Mode Decomposition (VMD) with an Improved Particle Swarm Optimization (IPSO) algorithm to optimize the Extreme Learning Machine (ELM).
Simulation results indicate that the proposed method significantly improves prediction accuracy and convergence speed compared to traditional ELM and PSO-ELM methods.
arXiv Detail & Related papers (2024-10-04T09:20:33Z)
- Maximum a Posteriori Estimation for Linear Structural Dynamics Models Using Bayesian Optimization with Rational Polynomial Chaos Expansions [0.01578888899297715]
We propose an extension to an existing sparse Bayesian learning approach for MAP estimation.
We introduce a Bayesian optimization approach, which allows us to adaptively enrich the experimental design.
By combining the sparsity-inducing learning procedure with the experimental design, we effectively reduce the number of model evaluations.
arXiv Detail & Related papers (2024-08-07T06:11:37Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Hybrid Algorithm of Linear Programming Relaxation and Quantum Annealing [0.6526824510982802]
One approach involves obtaining an approximate solution using classical algorithms and refining it using quantum annealing (QA).
We propose a method that uses the simple continuous relaxation technique called linear programming (LP) relaxation.
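As a sketch of this idea (our own minimal rendering; the QA refinement step is not shown), one can relax the binary variables to the box [0, 1], solve the LP with scipy, and round the fractional optimum into a candidate state for the annealer. Naive threshold rounding can violate constraints, so a feasibility repair step is typically needed before the state is used.

```python
import numpy as np
from scipy.optimize import linprog

# Binary problem: minimize c @ x  subject to  A @ x <= b,  x in {0,1}^n.
# LP relaxation: allow x in [0,1]^n, which is solvable in polynomial time.
c = np.array([-3.0, -1.0, -4.0, -1.0])
A = np.array([[2.0, 1.0, 3.0, 1.0]])
b = np.array([4.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0.0, 1.0)] * len(c), method="highs")
x_frac = res.x                          # fractional LP optimum
x_init = (x_frac > 0.5).astype(int)     # naive rounding; may need repair
print(x_frac, x_init, A @ x_init <= b)  # feasibility of the rounded state
```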
arXiv Detail & Related papers (2023-08-21T14:53:43Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
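The fixed-point view can be illustrated with plain Picard iteration on a toy contractive map (the learnable regularizer and the paper's convergence machinery are omitted):

```python
import numpy as np

def fixed_point(f, x0, tol=1e-8, max_iter=500):
    """Iterate x <- f(x) until convergence; a Deep Equilibrium model
    treats the converged x* as the output and differentiates through
    the fixed-point condition instead of the unrolled iterations."""
    x = x0
    for k in range(max_iter):
        x_new = f(x)
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x)):
            return x_new, k + 1
        x = x_new
    return x, max_iter

# toy contractive map standing in for one learned solver iteration
rng = np.random.default_rng(1)
W, b = 0.5 * rng.normal(size=(3, 3)), rng.normal(size=3)
f = lambda x: 0.5 * np.tanh(W @ x) + b
x_star, iters = fixed_point(f, np.zeros(3))
print(iters, np.linalg.norm(x_star - f(x_star)))  # residual ~ 0
```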
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
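For context, here is vanilla AIS with a fixed geometric bridging schedule on a 1D toy target (our sketch; the paper's parametric, optimized bridging distributions are exactly what this baseline lacks):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f0(x):   # base density: standard normal, normalized (Z0 = 1)
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_f1(x):   # unnormalized target: N(2, 0.5^2) without its constant
    return -0.5 * ((x - 2.0) / 0.5) ** 2

def ais(n_chains=2000, n_steps=50, step=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal(size=n_chains)        # exact samples from the base
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # incremental weight for the geometric path f_b = f0^(1-b) f1^b
        log_w += (b - b_prev) * (log_f1(x) - log_f0(x))
        # one Metropolis step targeting the current bridge distribution
        log_ft = lambda y: (1 - b) * log_f0(y) + b * log_f1(y)
        prop = x + step * rng.normal(size=n_chains)
        accept = np.log(rng.uniform(size=n_chains)) < log_ft(prop) - log_ft(x)
        x = np.where(accept, prop, x)
    m = log_w.max()                      # log-mean-exp for log(Z1/Z0)
    return m + np.log(np.mean(np.exp(log_w - m)))

print(ais(), np.log(0.5 * np.sqrt(2 * np.pi)))  # estimate vs exact log Z1
```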
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient in terms of both iteration complexity and function query cost.
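A toy rendering of the coordinate importance-sampling idea (our simplification, not the paper's estimator): partial derivatives are estimated by symmetric finite differences only on coordinates drawn from an importance distribution, reweighted so the gradient estimate stays unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_step(f, x, probs, lr=0.02, mu=1e-4, n_draws=2):
    """One ZO descent step: finite-difference partial derivatives on a
    few coordinates drawn (with replacement) from `probs`; dividing by
    n_draws * probs[i] keeps the gradient estimate unbiased."""
    g = np.zeros_like(x)
    for i in rng.choice(len(x), size=n_draws, p=probs):
        e = np.zeros_like(x)
        e[i] = mu
        g[i] += (f(x + e) - f(x - e)) / (2 * mu) / (n_draws * probs[i])
    return x - lr * g

# toy quadratic whose coordinates have very different curvatures
f = lambda x: 10 * x[0]**2 + x[1]**2 + 0.1 * x[2]**2
x = np.ones(3)
probs = np.array([0.6, 0.3, 0.1])   # query high-curvature coords more often
for _ in range(300):
    x = zo_step(f, x, probs)
print(x, f(x))
```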
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z)
- Estimating Basis Functions in Massive Fields under the Spatial Mixed Effects Model [8.528384027684194]
For massive datasets, fixed rank kriging using the Expectation-Maximization (EM) algorithm for estimation has been proposed as an alternative to the usual but computationally prohibitive kriging method.
We develop an alternative method that utilizes the Spatial Mixed Effects (SME) model, but allows for additional flexibility by estimating the range of the spatial dependence between the observations and the knots via an Alternating Expectation Conditional Maximization (AECM) algorithm.
Experiments show that our methodology improves estimation without sacrificing prediction accuracy while also minimizing the additional computational burden of extra parameter estimation.
arXiv Detail & Related papers (2020-03-12T19:36:40Z)
- Efficient Debiased Evidence Estimation by Multilevel Monte Carlo Sampling [0.0]
We propose a new optimization algorithm for Bayesian inference based on multilevel Monte Carlo (MLMC) methods.
Our numerical results confirm considerable computational savings compared to the conventional estimators.
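As background, the plain MLMC telescope that debiased estimators of this kind build on, sketched for a geometric Brownian motion toy problem (the paper's debiasing itself is not shown); coarse and fine levels share Brownian increments so the level corrections have small variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(n_paths, n_steps, dW=None, s0=1.0, r=0.05, sig=0.2, T=1.0):
    """Euler-Maruyama for geometric Brownian motion; passing dW lets two
    discretization levels share the same Brownian increments."""
    dt = T / n_steps
    if dW is None:
        dW = np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
    s = np.full(n_paths, s0)
    for k in range(n_steps):
        s = s + r * s * dt + sig * s * dW[:, k]
    return s, dW

def mlmc_mean(payoff, L=4, n0=100_000):
    """E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: a cheap coarse estimate
    plus level corrections that need far fewer samples because the
    coupled fine/coarse difference has small variance."""
    est = np.mean(payoff(euler_gbm(n0, 1)[0]))       # level 0, 1 step
    for l in range(1, L + 1):
        n_l = n0 // 2**l                             # geometric sample decay
        s_fine, dW = euler_gbm(n_l, 2**l)
        dW_coarse = dW[:, 0::2] + dW[:, 1::2]        # merge increment pairs
        s_coarse, _ = euler_gbm(n_l, 2**(l - 1), dW=dW_coarse)
        est += np.mean(payoff(s_fine) - payoff(s_coarse))
    return est

payoff = lambda s: np.maximum(s - 1.0, 0.0)          # call-style payoff
print(mlmc_mean(payoff))                             # ~ E[max(S_T - 1, 0)]
```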
arXiv Detail & Related papers (2020-01-14T09:14:24Z)