Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model
- URL: http://arxiv.org/abs/2410.12747v1
- Date: Wed, 16 Oct 2024 17:06:55 GMT
- Title: Initialization Method for Factorization Machine Based on Low-Rank Approximation for Constructing a Corrected Approximate Ising Model
- Authors: Yuya Seki, Hyakka Nakada, Shu Tanaka
- Abstract summary: An Ising model is approximated with a high degree of accuracy using the Factorization Machine (FM), a machine learning model.
It is anticipated that the optimization performance of FMQA (factorization machine with quantum annealing) will be enhanced through the implementation of the warm-start method.
- Abstract: This paper presents an initialization method that can approximate a given approximate Ising model with a high degree of accuracy using the Factorization Machine (FM), a machine learning model. The construction of Ising models using FM is applied to combinatorial optimization with the factorization machine with quantum annealing (FMQA). It is anticipated that the optimization performance of FMQA will be enhanced through the implementation of the warm-start method. Nevertheless, the optimal initialization method for leveraging the warm-start approach in FMQA remains undetermined. Consequently, the present study compares a number of initialization methods and identifies the most appropriate for use with a warm-start in FMQA through numerical experimentation. Furthermore, the properties of the proposed FM initialization method are analyzed using random matrix theory, demonstrating that the approximation accuracy of the proposed method is not significantly influenced by the specific Ising model under consideration. The findings of this study will facilitate the advancement of combinatorial optimization problem-solving through the use of Ising machines.
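The core construction can be illustrated concretely. Below is a minimal Python sketch of one plausible low-rank FM initialization from a given Ising coupling matrix; the function name fm_init_from_ising, the energy convention E(s) = s^T J s + h^T s, and the eigenvalue clipping are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch: initialize FM pairwise weights from a low-rank
# approximation of a given Ising coupling matrix J (symmetric, n x n).
# One plausible construction, not necessarily the paper's exact method.
import numpy as np

def fm_init_from_ising(J, h, k):
    """Return FM parameters (w0, w, V) approximating E(s) = s^T J s + h^T s.

    The FM pairwise term models J_ij as <V[i], V[j]>, so we take a
    rank-k approximation of J from its eigendecomposition.
    """
    Jsym = 0.5 * (J + J.T)                      # enforce symmetry
    eigval, eigvec = np.linalg.eigh(Jsym)
    top = np.argsort(np.abs(eigval))[::-1][:k]  # k leading eigenpairs by magnitude
    # NOTE: <V[i], V[j]> = (V V^T)_ij is positive semidefinite, so negative
    # eigendirections cannot be represented exactly; clipping them is the
    # kind of error a "corrected" approximate model must account for.
    lam = np.clip(eigval[top], 0.0, None)
    V = eigvec[:, top] * np.sqrt(lam)           # V V^T ~ rank-k part of J
    w = h.copy()                                # linear terms map directly
    w0 = 0.0
    return w0, w, V

rng = np.random.default_rng(0)
n, k = 32, 8
J = rng.normal(size=(n, n)); J = 0.5 * (J + J.T); np.fill_diagonal(J, 0.0)
h = rng.normal(size=n)
w0, w, V = fm_init_from_ising(J, h, k)
print("relative reconstruction error:", np.linalg.norm(V @ V.T - J) / np.linalg.norm(J))
```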
Related papers
- Maximum a Posteriori Estimation for Linear Structural Dynamics Models Using Bayesian Optimization with Rational Polynomial Chaos Expansions
We propose an extension to an existing sparse Bayesian learning approach for MAP estimation.
We introduce a Bayesian optimization approach that adaptively enriches the experimental design.
By combining the sparsity-inducing learning procedure with the experimental design, we effectively reduce the number of model evaluations.
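As a rough illustration of design enrichment via Bayesian optimization (with a plain Gaussian process standing in for the paper's rational polynomial chaos surrogate, and expected improvement as the acquisition function), consider this hedged sketch:

```python
# Hedged sketch: adaptively enriching an experimental design with Bayesian
# optimization (expected improvement). The rational-PCE surrogate and MAP
# objective of the paper are replaced here by a toy 1-D loss.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def loss(x):                                 # stand-in for the negative log-posterior
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(4, 1))          # small initial design
y = loss(X).ravel()
cand = np.linspace(-2, 2, 401).reshape(-1, 1)

for _ in range(10):                          # enrichment loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_new = cand[np.argmax(ei)]
    X = np.vstack([X, x_new])                # add the most informative point
    y = np.append(y, loss(x_new[0]))

print("approximate minimizer:", X[np.argmin(y)].item(), "loss:", y.min())
```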
arXiv Detail & Related papers (2024-08-07T06:11:37Z)
- Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model
Quantum computing offers the prospect of revolutionizing probabilistic optimization.
We present the Quantum Approximate Optimization Algorithm (QAOA), a hybrid quantum-classical algorithm.
We show that the diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones.
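A hedged toy sketch of such parameter synthesis: a DDPM-style reverse process draws QAOA angle vectors, with eps_model as a placeholder for a denoiser that would be trained on high-performing parameters (here its weights are random, so the output is illustrative only):

```python
# Hedged sketch: sampling QAOA angles (gamma, beta per layer) from a
# DDPM-style reverse process; `eps_model` stands in for a trained denoiser.
import numpy as np

p = 4                                        # QAOA depth -> 2p parameters
dim, T = 2 * p, 100
betas = np.linspace(1e-4, 0.02, T)           # noise schedule
alphas = 1.0 - betas
abar = np.cumprod(alphas)

rng = np.random.default_rng(2)
W = rng.normal(scale=0.1, size=(dim + 1, dim))   # placeholder "network" weights

def eps_model(x, t):
    """Stand-in for a denoiser trained on high-performing QAOA parameters."""
    feat = np.concatenate([x, [t / T]])
    return np.tanh(feat @ W)

x = rng.normal(size=dim)                     # start from pure noise
for t in range(T - 1, -1, -1):               # reverse diffusion
    z = rng.normal(size=dim) if t > 0 else 0.0
    eps = eps_model(x, t)
    x = (x - betas[t] / np.sqrt(1 - abar[t]) * eps) / np.sqrt(alphas[t])
    x += np.sqrt(betas[t]) * z

gammas, betas_angles = x[:p], x[p:]
print("sampled (gamma, beta) initialization:", gammas, betas_angles)
```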
arXiv Detail & Related papers (2024-07-17T01:18:27Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Data-driven decision-focused surrogate modeling [10.1947610432159]
We introduce the concept of decision-focused surrogate modeling for solving challenging nonlinear optimization problems in real-time settings.
The proposed data-driven framework seeks to learn a simpler, e.g. convex, surrogate optimization model that is trained to minimize the decision prediction error.
We validate our framework through numerical experiments involving the optimization of common nonlinear chemical processes.
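In the simplest case the idea reduces to fitting a closed-form-solvable surrogate to offline-computed optimal decisions. The sketch below assumes a convex quadratic surrogate whose minimizer is linear in the context, so training on decision prediction error becomes least squares; hard_problem is a toy stand-in for the expensive true model:

```python
# Hedged sketch of decision-focused surrogate modeling: fit a convex quadratic
# surrogate min_x 0.5||x||^2 + (A @ theta) . x whose closed-form minimizer
# x_hat = -A @ theta is trained to match precomputed optimal decisions.
import numpy as np
from scipy.optimize import minimize

def hard_problem(theta):
    """Expensive nonconvex 'true' problem, solved offline for training data."""
    obj = lambda x: np.sum((x - np.sin(theta))**2) + 0.1 * np.sum(np.cos(5 * x))
    return minimize(obj, x0=np.zeros(theta.size)).x

rng = np.random.default_rng(3)
thetas = rng.normal(size=(200, 3))
xstar = np.array([hard_problem(t) for t in thetas])   # offline solves

# Decision prediction error ||(-A theta) - x*||^2 is minimized by least squares.
A = -np.linalg.lstsq(thetas, xstar, rcond=None)[0].T

theta_new = rng.normal(size=3)
x_hat = -A @ theta_new                                # real-time surrogate decision
print("surrogate decision:", x_hat, " true:", hard_problem(theta_new))
```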
arXiv Detail & Related papers (2023-08-23T14:23:26Z)
- Hybrid Algorithm of Linear Programming Relaxation and Quantum Annealing
One approach involves obtaining an approximate solution using classical algorithms and refining it using quantum annealing (QA).
We propose a method that uses the simple continuous relaxation technique called linear programming (LP) relaxation.
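A minimal sketch of the classical half of such a hybrid, assuming a toy 0-1 program: relax the binaries to the unit box, solve the LP with scipy, and round (with a naive feasibility repair) to obtain a warm start that QA would then refine:

```python
# Hedged sketch of the LP-relaxation step; the subsequent QA refinement
# is not shown.
import numpy as np
from scipy.optimize import linprog

# Toy 0-1 problem: maximize c.x subject to A x <= b, x in {0, 1}^n.
c = np.array([4.0, 3.0, 5.0, 1.0])
A = np.array([[2.0, 3.0, 4.0, 1.0],
              [1.0, 1.0, 2.0, 3.0]])
b = np.array([6.0, 4.0])

# LP relaxation: x in [0, 1]^n (linprog minimizes, so negate c).
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, 1)] * len(c), method="highs")
x_relaxed = res.x
x_rounded = (x_relaxed >= 0.5).astype(int)

# Simple repair: if rounding broke feasibility, drop the cheapest set variable.
while np.any(A @ x_rounded > b):
    on = np.flatnonzero(x_rounded)
    x_rounded[on[np.argmin(c[on])]] = 0

print("LP relaxation:", x_relaxed, "-> warm start:", x_rounded)
```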
arXiv Detail & Related papers (2023-08-21T14:53:43Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
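A hedged sketch of that fixed-point view, with the paper's learnable neural regularizer replaced by simple soft thresholding and a toy near-identity blur operator:

```python
# Hedged sketch of the Deep Equilibrium idea: the iterative solver is a map f
# and the reconstruction is its fixed point x* = f(x*, y), found here by
# plain fixed-point iteration.
import numpy as np

rng = np.random.default_rng(4)
n = 64
H = np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)   # toy blur operator
x_true = rng.normal(size=n)
y = H @ x_true                                               # observation

step, lam = 0.1, 0.05
def f(x, y):
    """One proximal-gradient-style iteration; the learnable regularizer
    (a neural network in the paper) is replaced by soft thresholding."""
    grad = H.T @ (H @ x - y)                 # data-fidelity gradient
    z = x - step * grad
    return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

x = np.zeros(n)
for it in range(500):                        # fixed-point iteration
    x_next = f(x, y)
    if np.linalg.norm(x_next - x) < 1e-9:    # converged to equilibrium
        break
    x = x_next

print(f"stopped at iter {it}, residual = {np.linalg.norm(x - f(x, y)):.2e}")
```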
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Optimization of Annealed Importance Sampling Hyperparameters
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
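For orientation, here is a minimal AIS sketch with a fixed geometric bridging schedule (the paper's contribution, learning the bridges, is not reproduced); the 1-D prior/target pair is a toy choice with a known normalizing constant:

```python
# Hedged sketch of AIS between a Gaussian prior p0 and an unnormalized
# target exp(-U(x)), using geometric bridges and one Metropolis step per level.
import numpy as np

rng = np.random.default_rng(5)

def logp0(x):            # N(0, 1) prior, normalized
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_target(x):       # unnormalized target: 3 * N(2, 0.5^2) density kernel
    return -0.5 * ((x - 2.0) / 0.5)**2 + np.log(3.0)

T, n_chains = 200, 2000
betas = np.linspace(0.0, 1.0, T + 1)         # annealing schedule
x = rng.normal(size=n_chains)                # exact samples from p0
logw = np.zeros(n_chains)

for t in range(1, T + 1):
    # Importance-weight increment from moving beta_{t-1} -> beta_t.
    logw += (betas[t] - betas[t - 1]) * (log_target(x) - logp0(x))
    # One Metropolis step targeting the bridge p0^(1-b) * target^b.
    prop = x + 0.5 * rng.normal(size=n_chains)
    def logbridge(z):
        return (1 - betas[t]) * logp0(z) + betas[t] * log_target(z)
    accept = np.log(rng.uniform(size=n_chains)) < logbridge(prop) - logbridge(x)
    x = np.where(accept, prop, x)

# log Z estimate; exact value is log(3 * 0.5 * sqrt(2*pi)), prior normalized.
logZ = np.logaddexp.reduce(logw) - np.log(n_chains)
print("AIS log-Z estimate:", logZ, " exact:", np.log(1.5 * np.sqrt(2 * np.pi)))
```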
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- SimPO: Simultaneous Prediction and Optimization
We propose a formulation for the Simultaneous Prediction and Optimization (SimPO) framework.
This framework introduces the use of a joint weighted loss of a decision-driven predictive ML model and an optimization objective function.
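A hedged toy rendering of such a joint weighted loss, assuming a linear predictor whose output is used directly as the decision, an MSE prediction loss, and an asymmetric newsvendor-style decision cost:

```python
# Hedged sketch of a SimPO-style joint objective: a weighted sum of the
# predictive model's loss and the downstream decision cost, minimized
# together by gradient descent.
import numpy as np

rng = np.random.default_rng(6)
features = rng.normal(size=(500, 2))
demand = 5.0 + features @ np.array([1.0, -2.0]) + 0.3 * rng.normal(size=500)

w = np.zeros(3)                 # [bias, w1, w2]; predicted demand = order quantity
alpha, lr = 0.5, 0.01           # alpha weights prediction loss vs decision cost
X = np.hstack([np.ones((500, 1)), features])

for _ in range(2000):
    pred = X @ w
    # Prediction loss (MSE) gradient.
    grad_pred = 2 * X.T @ (pred - demand) / len(demand)
    # Decision cost: over-ordering costs 1/unit, shortage costs 4/unit.
    over, under = 1.0, 4.0
    dcost = np.where(pred > demand, over, -under)
    grad_dec = X.T @ dcost / len(demand)
    w -= lr * (alpha * grad_pred + (1 - alpha) * grad_dec)

print("learned weights:", w)    # biased above a plain MSE fit: shortage costs more
```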
arXiv Detail & Related papers (2022-03-31T20:01:36Z)
- Robust, Accurate Stochastic Optimization for Variational Inference
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
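The Markov-chain view suggests a concrete remedy: average the tail of a fixed-step-size SGD trajectory instead of taking the last iterate. A minimal sketch on a noisy quadratic (a toy stand-in for a variational objective):

```python
# Hedged sketch: SGD iterates with a fixed step size bounce around the
# optimum like a stationary Markov chain, so averaging the tail of the
# trajectory is more robust than the final iterate.
import numpy as np

rng = np.random.default_rng(7)
target = np.array([1.0, -2.0])        # optimum of E[f]; f is a noisy quadratic

x = np.zeros(2)
trace = []
for t in range(5000):
    grad = (x - target) + rng.normal(scale=1.0, size=2)   # stochastic gradient
    x -= 0.05 * grad                                      # fixed step size
    trace.append(x.copy())

trace = np.array(trace)
last_iterate = trace[-1]
tail_average = trace[len(trace) // 2:].mean(axis=0)       # drop burn-in, average

print("last iterate error:", np.linalg.norm(last_iterate - target))
print("tail average error:", np.linalg.norm(tail_average - target))
```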
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted systems-and-control (S&C) approach.
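For context, the discrete-time Lyapunov argument amounts to exhibiting a function that strictly decreases along the iteration; a textbook statement (not quoted from the paper, with regularity conditions omitted) is:

```latex
% Standard discrete-time Lyapunov descent condition (textbook form): if V is
% positive definite around a fixed point x^* of the iteration x_{k+1} = T(x_k)
% and strictly decreases along trajectories, then the iterates converge to x^*.
\[
V(x^*) = 0,\qquad V(x) > 0 \;\;\forall x \neq x^*,\qquad
V\bigl(T(x_k)\bigr) - V(x_k) < 0 \;\;\forall x_k \neq x^*
\;\Longrightarrow\; x_k \to x^*.
\]
```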
arXiv Detail & Related papers (2020-06-23T01:34:18Z)