Generating gradients in the energy landscape using rectified linear type cost functions for efficiently solving 0/1 matrix factorization in Simulated Annealing
- URL: http://arxiv.org/abs/2312.17272v1
- Date: Wed, 27 Dec 2023 04:19:47 GMT
- Title: Generating gradients in the energy landscape using rectified linear type cost functions for efficiently solving 0/1 matrix factorization in Simulated Annealing
- Authors: Makiko Konoshima, Hirotaka Tamura, and Yoshiyuki Kabashima
- Abstract summary: We propose a method to facilitate the solution process by applying a gradient to the energy landscape.
We also propose a method to quickly obtain a solution by updating the cost function's gradient during the search process.
- Score: 7.339479909020814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 0/1 matrix factorization defines matrix products using logical AND and OR
as product-sum operators, revealing the factors influencing various decision
processes. Instances and their characteristics are arranged in rows and
columns. Formulating matrix factorization as an energy minimization problem and
exploring it with Simulated Annealing (SA) theoretically enables finding a
minimum solution in sufficient time. However, searching for the optimal
solution in practical time becomes problematic when the energy landscape has
many plateaus with flat slopes. In this work, we propose a method to facilitate
the solution process by applying a gradient to the energy landscape, using a
rectified linear type cost function readily available in modern annealing
machines. We also propose a method to quickly obtain a solution by updating the
cost function's gradient during the search process. Numerical experiments were
conducted, confirming the method's effectiveness with both noise-free
artificial and real data.
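The abstract's central idea, replacing a flat mismatch count with a rectified-linear cost so the energy landscape acquires a slope, can be illustrated with a small sketch. The specific energy below is an assumption for illustration (a simple ReLU-style penalty on the factor-overlap count), not the paper's exact formulation, and the annealer is a plain single-bit-flip Metropolis loop rather than a hardware annealing machine.

```python
import numpy as np

def relu_energy(X, U, V):
    """Rectified-linear mismatch cost (an assumed illustrative form).

    S[i, j] = sum_k U[i, k] * V[k, j] counts active factor overlaps, so the
    Boolean product (OR of ANDs) equals 1 exactly when S[i, j] >= 1.  Where
    X = 1 we penalize max(0, 1 - S); where X = 0 we penalize S itself, so
    the cost grows with the overlap count instead of sitting on a plateau.
    """
    S = U @ V
    return float(np.sum(np.where(X == 1, np.maximum(0, 1 - S), S)))

def anneal(X, k, steps=20000, T0=2.0, T_end=0.01, seed=0):
    """Single-bit-flip simulated annealing with geometric cooling."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    U = rng.integers(0, 2, (n, k))
    V = rng.integers(0, 2, (k, m))
    E = relu_energy(X, U, V)
    best = (U.copy(), V.copy(), E)
    for t in range(steps):
        T = T0 * (T_end / T0) ** (t / steps)
        flip_U = rng.random() < 0.5
        if flip_U:  # propose flipping one random bit of U or V
            i, j = int(rng.integers(n)), int(rng.integers(k))
            U[i, j] ^= 1
        else:
            i, j = int(rng.integers(k)), int(rng.integers(m))
            V[i, j] ^= 1
        E_new = relu_energy(X, U, V)  # full O(nmk) recompute; fine for a sketch
        if E_new <= E or rng.random() < np.exp(-(E_new - E) / T):
            E = E_new  # accept the move (Metropolis criterion)
            if E < best[2]:
                best = (U.copy(), V.copy(), E)
        else:
            (U if flip_U else V)[i, j] ^= 1  # reject: undo the flip
    return best
```

An incremental delta-energy update would replace the full recompute in a serious implementation; the sketch keeps the logic minimal.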
Related papers
- Matrix Completion with Convex Optimization and Column Subset Selection [0.0]
We present two algorithms that implement our Columns Selected Matrix Completion (CSMC) method.
To study the influence of the matrix size, rank, and the proportion of missing elements on the quality of the solution and the runtime, we performed experiments on synthetic data.
Our thorough analysis shows that CSMC provides solutions of comparable quality to matrix completion algorithms, which are based on convex optimization.
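The second stage of a column-subset approach can be made concrete: once a subset of columns C is available (here assumed fully observed, which is my simplification), each remaining column's missing entries are filled by least squares on its observed rows. This is a simplified sketch of the fill-in step only, not the CSMC algorithm itself, which first completes the column submatrix by convex optimization.

```python
import numpy as np

def fill_from_columns(C, col, observed):
    """Fill missing entries of `col` using the column subset `C`.

    Fit col ≈ C @ w on the observed rows, then predict the missing rows.
    The fill is exact when the full matrix is low-rank and C spans its
    column space.
    """
    w, *_ = np.linalg.lstsq(C[observed], col[observed], rcond=None)
    filled = col.astype(float).copy()
    filled[~observed] = C[~observed] @ w
    return filled
```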
arXiv Detail & Related papers (2024-03-04T10:36:06Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Neural incomplete factorization: learning preconditioners for the conjugate gradient method [2.899792823251184]
We develop a data-driven approach to accelerate the generation of effective preconditioners.
We replace the typically hand-engineered preconditioners by the output of graph neural networks.
Our method generates an incomplete factorization of the matrix and is, therefore, referred to as neural incomplete factorization (NeuralIF).
arXiv Detail & Related papers (2023-05-25T11:45:46Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel accelerated doubly stochastic gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- Nonstationary Temporal Matrix Factorization for Multivariate Time Series Forecasting [18.910448998549185]
The Nonstationary Temporal Matrix Factorization (NoTMF) model reconstructs the whole time series matrix, while a vector autoregressive process is imposed on a properly differenced copy of the temporal factor matrix.
We demonstrate the superior accuracy and effectiveness of NoTMF over other baseline models.
Our results also confirm the importance of addressing the nonstationarity of real-world time series data such as Uber traffic flow/speed.
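The differencing-plus-VAR idea in this summary can be sketched in isolation: take the first difference of the temporal factor matrix to remove a nonstationary trend, then fit a VAR(1) coefficient matrix by least squares. This is only the temporal-regularization ingredient, not the paper's full alternating NoTMF solver, and the first-order VAR is my simplifying choice.

```python
import numpy as np

def fit_var1_on_differences(F):
    """Fit Z[:, t+1] ≈ A @ Z[:, t], where Z is the first difference of F.

    F has shape (r, T): r temporal factors observed at T time steps.
    Differencing removes a nonstationary (trend) component before the
    VAR(1) coefficient matrix A is estimated by least squares.
    """
    Z = np.diff(F, axis=1)                    # shape (r, T - 1)
    A = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])  # least-squares VAR(1) fit
    return A
```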
arXiv Detail & Related papers (2022-03-20T21:22:39Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique to study dynamic data.
The application of this approach becomes problematic if the available data is incomplete, because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
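For reference, the standard complete-information DMD that this paper extends fits in a few lines; the Mori-Zwanzig t-model correction for unresolved dimensions is not included in this sketch.

```python
import numpy as np

def dmd(X1, X2, r):
    """Exact DMD: approximate a linear map A with X2 ≈ A @ X1.

    X1, X2 are snapshot matrices (states in columns, X2 one step ahead
    of X1).  Returns the r leading DMD eigenvalues and modes.
    """
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project the unknown map onto the r-dimensional POD subspace.
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes
```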
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- Barzilai and Borwein conjugate gradient method equipped with a non-monotone line search technique and its application on non-negative matrix factorization [1.6058099298620423]
We first modify the non-monotone line search method by introducing a new trigonometric function to calculate the non-monotone parameter.
Under some suitable assumptions, we prove that the new algorithm has the global convergence property.
The efficiency and effectiveness of the proposed method are determined in practice by applying the algorithm to some standard test problems and non-negative matrix factorization problems.
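The non-negative matrix factorization problem this method targets can be made concrete with a plain projected-gradient baseline. This is a generic stand-in, not the paper's Barzilai-Borwein conjugate gradient with a non-monotone line search; the fixed step size is my assumption.

```python
import numpy as np

def nmf_loss(X, W, H):
    return 0.5 * np.linalg.norm(X - W @ H) ** 2

def nmf_pgd(X, k, iters=2000, lr=0.01, seed=0):
    """Projected gradient descent for min ||X - WH||_F^2 s.t. W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        R = W @ H - X                       # residual
        gW = R @ H.T                        # gradient w.r.t. W
        gH = W.T @ R                        # gradient w.r.t. H
        W = np.maximum(0.0, W - lr * gW)    # project back onto W >= 0
        H = np.maximum(0.0, H - lr * gH)    # project back onto H >= 0
    return W, H
```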
arXiv Detail & Related papers (2021-09-13T03:35:36Z)
- Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling.
We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation.
arXiv Detail & Related papers (2021-08-02T13:38:09Z)
- Converting ADMM to a Proximal Gradient for Convex Optimization Problems [4.56877715768796]
In sparse estimation, such as fused lasso and convex clustering, we apply either the proximal gradient method or the alternating direction method of multipliers (ADMM) to solve the problem.
This paper proposes a general method for converting the ADMM solution to the proximal gradient method, assuming that the constraints and objectives are strongly convex.
We show by numerical experiments that we can obtain a significant improvement in terms of efficiency.
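The proximal gradient side of this conversion is easy to make concrete. Below is a generic ISTA iteration for the lasso, chosen as a stand-in example; the paper's fused lasso and convex clustering settings, and the ADMM-to-proximal conversion itself, are not reproduced here.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_objective(A, b, lam, x):
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.sum(np.abs(x))

def ista(A, b, lam, steps=500):
    """Proximal gradient (ISTA) for min 0.5 ||Ax - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)  # proximal step
    return x
```

With step size 1/L the iteration decreases the composite objective monotonically, which is the property the test below checks.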
arXiv Detail & Related papers (2021-04-22T07:41:12Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly" we mean that the labels are known for some training points, unknown for others, and uncertain for the rest due to random noise or other reasons such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
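A minimal conditional-gradient (Frank-Wolfe) iteration over the probability simplex shows the basic step these methods build on; the paper's constraint subsampling, variance reduction, and smoothing are not reproduced in this sketch.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, steps=5000):
    """Conditional gradient over the probability simplex.

    Each step calls a linear minimization oracle instead of a projection:
    over the simplex, the minimizer of <g, s> is the vertex at argmin(g).
    """
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0               # linear minimization oracle
        gamma = 2.0 / (t + 2.0)             # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Example: minimize ||x - c||^2 over the simplex; c already lies in the
# simplex, so the optimum is c itself.
c = np.array([0.2, 0.3, 0.5])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free; this is the feature that makes conditional gradient methods attractive when projections are expensive.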
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.