Quick design of feasible tensor networks for constrained combinatorial optimization
- URL: http://arxiv.org/abs/2409.01699v1
- Date: Tue, 3 Sep 2024 08:36:23 GMT
- Title: Quick design of feasible tensor networks for constrained combinatorial optimization
- Authors: Hyakka Nakada, Kotaro Tanahashi, Shu Tanaka
- Abstract summary: In recent years, tensor networks have been applied to constrained combinatorial optimization problems arising in practical applications.
One approach is to construct tensor networks with nilpotent-matrix manipulation.
The proposed method is expected to facilitate the discovery of feasible tensor networks for constrained optimization problems.
- Score: 1.8775413720750924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we propose a new method for constrained combinatorial optimization using tensor networks. Combinatorial optimization methods employing quantum gates, such as the quantum approximate optimization algorithm, have been intensively investigated. However, their susceptibility to errors and the limited number of available qubits prevent them from handling large-scale combinatorial optimization problems. Alternatively, attempts have been made to solve larger-scale problems using tensor networks, which can approximately simulate quantum states. In recent years, tensor networks have been applied to constrained combinatorial optimization problems for practical applications. By preparing a specific tensor network that samples only states satisfying the constraints, feasible solutions can be searched for without resorting to penalty functions. Previous studies have relied on sophisticated physical concepts, such as U(1) gauge schemes and high-dimensional lattice models. In this study, we devise a way to design feasible tensor networks using elementary mathematics, without such specialized knowledge. One approach is to construct tensor networks via nilpotent-matrix manipulation. The second is to determine the tensor parameters algebraically. As a proof of principle, we constructed a feasible tensor network for the facility location problem and performed imaginary time evolution. We found that feasible solutions were obtained throughout the evolution, ultimately leading to the optimal solution. The proposed method is expected to facilitate the discovery of feasible tensor networks for constrained combinatorial optimization problems.
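The abstract describes the nilpotent-matrix construction only at a high level. As a minimal sketch of the underlying idea, assuming the simplest one-hot constraint (exactly one of n bits equals 1) and illustrative 2x2 matrices that are not necessarily the authors' choice: since the nilpotent matrix N satisfies N^2 = 0, any bitstring with two or more 1s gets zero amplitude, and the boundary vectors annihilate the all-zeros string, so the network's support is exactly the feasible set.

```python
import numpy as np
from itertools import product

# Nilpotent "counter" matrix: N @ N = 0, so the matrix product below
# vanishes for any bitstring containing two or more 1s.
I = np.eye(2)
N = np.array([[0.0, 1.0], [0.0, 0.0]])

def amplitude(bits):
    """Matrix-product amplitude: nonzero iff exactly one bit is 1."""
    M = np.eye(2)
    for b in bits:
        M = M @ (N if b else I)
    left, right = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    return left @ M @ right

n = 4
for bits in product([0, 1], repeat=n):
    # support of the state = the feasible (one-hot) strings, nothing else
    assert (amplitude(bits) != 0) == (sum(bits) == 1)
print("all", 2 ** n, "bitstrings checked: support equals the feasible set")
```

Sampling from such a network, or evolving it in imaginary time as the paper does, then never leaves the feasible subspace.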
Related papers
- Cons-training tensor networks [2.8834278113855896]
We introduce a novel family of tensor networks, termed constrained matrix product states (MPS).
These networks exactly incorporate arbitrary discrete linear constraints, including inequalities, into sparse block structures.
These networks are particularly tailored for modeling distributions with support strictly over the feasible space.
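As a cartoon of how a single linear equality induces such sparse blocks (a hypothetical one-constraint instance, not the paper's general construction for arbitrary matrices and inequalities): label the bond indices by running partial sums, and allow a core block (c_in, x, c_out) only when c_out = c_in + x.

```python
import numpy as np
from itertools import product

# One equality constraint sum(x) == b over n bits. Bond "charges" are
# partial sums; a block is nonzero only if the charge grows consistently.
n, b = 5, 2

def core():
    A = np.zeros((b + 1, 2, b + 1))      # (bond_in, physical, bond_out)
    for c in range(b + 1):
        for x in (0, 1):
            if c + x <= b:
                A[c, x, c + x] = 1.0     # allowed block
    return A

cores = [core() for _ in range(n)]

def amplitude(bits):
    v = np.zeros(b + 1); v[0] = 1.0      # enter at charge 0
    for A, x in zip(cores, bits):
        v = v @ A[:, x, :]
    return v[b]                          # exit at charge b

feasible = [s for s in product([0, 1], repeat=n) if amplitude(s) != 0]
assert all(sum(s) == b for s in feasible) and len(feasible) == 10  # C(5, 2)
```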
arXiv Detail & Related papers (2024-05-15T00:13:18Z)
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of Deep Matching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Bi, Maximum-Weight-Bipartite-Bi, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
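The summary does not say what the regularization looks like, so the following is only a generic stand-in: a REINFORCE solution-sampler for a four-node Max-Cut instance with a product-Bernoulli policy, a running baseline, and an explicit entropy-gradient term (the policy class, the entropy coefficient beta, and the schedule are all assumptions, not the paper's construction).

```python
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0, 1, 1, 0],          # adjacency matrix of a toy graph
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
theta = np.zeros(4)                  # logits of p(x_i = 1)

def cut_value(x):
    return 0.5 * np.sum(W * (x[:, None] != x[None, :]))

beta, lr, baseline = 0.05, 0.05, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-theta))
    x = (rng.random(4) < p).astype(float)
    reward = cut_value(x)
    baseline += 0.1 * (reward - baseline)          # variance reduction
    grad_logp = x - p                              # Bernoulli score function
    grad_entropy = -theta * p * (1.0 - p)          # d H(p) / d theta
    theta += lr * ((reward - baseline) * grad_logp + beta * grad_entropy)

x_hat = (1.0 / (1.0 + np.exp(-theta)) > 0.5).astype(float)
print("sampled partition:", x_hat.astype(int), "cut value:", cut_value(x_hat))
```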
arXiv Detail & Related papers (2023-10-08T23:39:38Z)
- Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
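A sketch of the charge-conservation bookkeeping that makes every sample feasible, on an assumed toy instance with two equality constraints (the dynamic program below plays the role of the symmetric-TN contraction; the real construction handles general integer A and b in the same spirit):

```python
import numpy as np

# Feasible-by-construction sampling for A x = b, x in {0,1}^n: bond
# "charges" are partial sums A[:, :i] @ x[:i]; only charge-compatible
# blocks survive, and exact counting over them yields a sampler.
A = np.array([[1, 1, 1, 1],          # sum(x) == 2
              [1, 0, 1, 0]])         # x0 + x2 == 1
b = np.array([2, 1])
n = A.shape[1]

# count[i][charge] = number of completions of sites i..n-1 that reach b
count = [dict() for _ in range(n + 1)]
count[n][tuple(b)] = 1
for i in range(n - 1, -1, -1):
    for c, m in count[i + 1].items():
        for x in (0, 1):
            prev = tuple(np.array(c) - x * A[:, i])
            count[i][prev] = count[i].get(prev, 0) + m

rng = np.random.default_rng(1)
def sample():
    c, out = np.zeros(len(b), dtype=int), []
    for i in range(n):
        w = [count[i + 1].get(tuple(c + x * A[:, i]), 0) for x in (0, 1)]
        x = int(rng.random() * sum(w) >= w[0])   # x=1 with prob w[1]/sum(w)
        out.append(x); c += x * A[:, i]
    return np.array(out)

for _ in range(5):
    x = sample(); assert (A @ x == b).all(); print(x)
```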
arXiv Detail & Related papers (2022-11-16T18:59:54Z)
- AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks [12.229154524476405]
We develop a new algorithm, Annealed Skewed SGD (AskewSGD), for training deep neural networks (DNNs) with quantized weights.
Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set.
Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods on classical benchmarks.
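The summary leaves the interval and annealing details out; the toy below only mimics the general pattern (the quantization levels, schedule, and projection are assumptions, not the actual AskewSGD algorithm): take ordinary gradient steps on float weights, then project them into an interval around the nearest quantization level that shrinks to zero, so training ends with exactly quantized weights.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
w_true = rng.choice([-1.0, 0.0, 1.0], size=8)    # quantized ground truth
y = X @ w_true
levels = np.array([-1.0, 0.0, 1.0])
w = rng.normal(scale=0.1, size=8)

for step in range(500):
    w -= 0.1 * (X.T @ (X @ w - y) / len(X))      # plain gradient step
    # Annealed interval constraint: keep each weight within eps of its
    # nearest level; eps shrinks to 0 (illustrative schedule).
    eps = max(1.0 - step / 400, 0.0)
    nearest = levels[np.abs(w[:, None] - levels).argmin(axis=1)]
    w = np.clip(w, nearest - eps, nearest + eps)

print("final (quantized) weights:", w)
print("matches ground truth:", np.array_equal(w, w_true))
```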
arXiv Detail & Related papers (2022-11-07T18:13:44Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that it can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
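The paper's non-Euclidean contractive conditions are not reproduced here, but the shape of the guarantee can be sketched with plain interval arithmetic on an assumed toy implicit layer z = tanh(Wz + Ux + b) whose W is scaled small enough to contract: iterating the interval extension from the full tanh range yields an l_inf box that contains every fixed point reachable from the input box.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
W = 0.4 * rng.normal(size=(n, n)) / np.sqrt(n)   # small norm: contractive
U = rng.normal(size=(n, m)); bias = rng.normal(size=n)
Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
Up, Un = np.maximum(U, 0), np.minimum(U, 0)

def reachable_box(xl, xu, iters=200):
    """l_inf box containing all fixed points of z = tanh(Wz + Ux + b), x in [xl, xu]."""
    zl, zu = -np.ones(n), np.ones(n)             # tanh range: safe start
    for _ in range(iters):
        pl = Wp @ zl + Wn @ zu + Up @ xl + Un @ xu + bias
        pu = Wp @ zu + Wn @ zl + Up @ xu + Un @ xl + bias
        zl, zu = np.tanh(pl), np.tanh(pu)        # tanh is monotone
    return zl, zu

xl = np.zeros(m); xu = xl + 0.1                  # input perturbation box
zl, zu = reachable_box(xl, xu)

for _ in range(100):                             # empirical containment check
    x = xl + (xu - xl) * rng.random(m)
    z = np.zeros(n)
    for _ in range(200):
        z = np.tanh(W @ z + U @ x + bias)
    assert np.all(zl - 1e-8 <= z) and np.all(z <= zu + 1e-8)
print("box width:", np.round(zu - zl, 4))
```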
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Improvements to Gradient Descent Methods for Quantum Tensor Network Machine Learning [0.0]
We introduce a 'copy node' method that successfully initializes arbitrary tensor networks.
We present numerical results showing that the combination of techniques presented here produces quantum-inspired tensor network models.
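The summary leaves the 'copy node' construction unspecified; one plausible reading, sketched with generalized COPY (delta) tensors plus small noise (an assumption, not necessarily the authors' recipe), is that initializing cores near pass-through tensors keeps long contractions at order-one magnitude, whereas random cores make them vanish exponentially and stall gradients.

```python
import numpy as np

rng = np.random.default_rng(0)
d, chi, n = 2, 2, 40          # physical dim, bond dim, number of sites

def copy_core(noise=1e-2):
    # COPY (delta) tensor: nonzero only when bond-in = physical = bond-out,
    # perturbed slightly so training can break the symmetry.
    c = np.zeros((chi, d, chi))
    for k in range(min(chi, d)):
        c[k, k, k] = 1.0
    return c + noise * rng.normal(size=c.shape)

def contraction_magnitude(cores, x):
    v = np.ones(chi)
    for c, xi in zip(cores, x):
        v = v @ c[:, xi, :]
    return np.abs(v).sum()

x = rng.integers(0, d, size=n)
copies = [copy_core() for _ in range(n)]
randoms = [rng.normal(size=(chi, d, chi)) / np.sqrt(chi * d) for _ in range(n)]
print("copy-init magnitude:   %.3e" % contraction_magnitude(copies, x))
print("random-init magnitude: %.3e" % contraction_magnitude(randoms, x))
```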
arXiv Detail & Related papers (2022-03-03T19:00:40Z)
- Simulation Paths for Quantum Circuit Simulation with Decision Diagrams [72.03286471602073]
We study the importance of the path that is chosen when simulating quantum circuits using decision diagrams.
We propose an open-source framework that allows one to investigate dedicated simulation paths.
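Decision-diagram simulation is not an einsum contraction, but NumPy's einsum_path utility makes the point about path choice concrete on an analogous problem: for the same matrix chain, a fixed pairwise order that multiplies the big matrices first costs orders of magnitude more than the optimal order, which collapses the chain from the vector end (the sizes below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(256, 256)) for _ in range(3))
v = rng.normal(size=(256, 1))

# Explicit path ((A B) C) v: two 256^3 matrix-matrix products.
fixed = np.einsum_path('ab,bc,cd,de->ae', A, B, C, v,
                       optimize=['einsum_path', (0, 1), (0, 2), (0, 1)])
# Optimal path A (B (C v)): three cheap matrix-vector products.
best = np.einsum_path('ab,bc,cd,de->ae', A, B, C, v, optimize='optimal')

print(fixed[1])    # report includes the FLOP count of the fixed path
print(best[1])     # ... and of the optimal one
```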
arXiv Detail & Related papers (2022-03-01T19:00:11Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
A deep equilibrium model is a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between equilibrium-model inference and input optimization.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
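A minimal sketch of the forward pass of such a model, assuming a tanh layer with W rescaled to be a contraction so that naive fixed-point iteration converges (the joint input optimization and implicit differentiation the paper builds on top are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 8
W = rng.normal(size=(n, n))
W *= 0.9 / np.linalg.norm(W, 2)      # spectral norm < 1: contraction
U = rng.normal(size=(n, m)); b = rng.normal(size=n)

def deq_forward(x, tol=1e-10, max_iter=500):
    """Output of the equilibrium layer: the fixed point of z = tanh(Wz + Ux + b)."""
    z = np.zeros(n)
    for _ in range(max_iter):
        z_new = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z_new

x = rng.normal(size=m)
z = deq_forward(x)
print("fixed-point residual:", np.linalg.norm(z - np.tanh(W @ z + U @ x + b)))
```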
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- DeepSplit: Scalable Verification of Deep Neural Networks via Operator Splitting [70.62923754433461]
Analyzing the worst-case performance of deep neural networks against input perturbations amounts to solving a large-scale non-convex optimization problem.
We propose a novel method that can directly solve a convex relaxation of the problem to high accuracy, by splitting it into smaller subproblems that often have analytical solutions.
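DeepSplit's actual relaxation is split layer by layer; the toy below only illustrates the operator-splitting pattern it relies on, ADMM on a box-constrained quadratic program whose two subproblems are analytical (a Cholesky solve and a clip). The instance is random and stands in for a verification relaxation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
M = rng.normal(size=(n, n))
Q = M.T @ M + np.eye(n)                        # positive definite
c = rng.normal(size=n); lo, hi = -0.5, 0.5

# min 1/2 x'Qx + c'x  s.t.  x in [lo, hi]^n, split as f(x) + g(z), x = z.
rho = 1.0
x = z = u = np.zeros(n)
K = np.linalg.cholesky(Q + rho * np.eye(n))
for _ in range(300):
    rhs = -c + rho * (z - u)
    x = np.linalg.solve(K.T, np.linalg.solve(K, rhs))   # analytic x-step
    z = np.clip(x + u, lo, hi)                          # analytic z-step
    u = u + x - z                                       # dual update

x_ref = np.zeros(n)                            # projected-gradient reference
for _ in range(20000):
    x_ref = np.clip(x_ref - 1e-3 * (Q @ x_ref + c), lo, hi)
print("ADMM vs projected gradient:", np.linalg.norm(z - x_ref))
```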
arXiv Detail & Related papers (2021-06-16T20:43:49Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks, with the aim of obtaining a powerful yet compact representation.
An algorithm based on alternating least squares has been proposed for approximating the weights in TT-format with reduced computational cost.
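A sketch of what weights in TT-format buy, with assumed tiny dimensions and a [1, x_k] local feature map (the alternating-least-squares sweep that fits the cores is not shown): the model output is contracted core by core in O(n d r^2), without ever materializing the d**n entries of the full weight tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 10, 2, 3        # inputs, local basis size, TT-rank

# TT cores G_k of shape (r_{k-1}, d, r_k), boundary ranks r_0 = r_n = 1,
# represent W[i1,...,in] = G_1[i1] @ G_2[i2] @ ... @ G_n[in].
ranks = [1] + [r] * (n - 1) + [1]
cores = [rng.normal(scale=0.5, size=(ranks[k], d, ranks[k + 1]))
         for k in range(n)]

def phi(x):
    return np.stack([np.ones_like(x), x], axis=1)    # per-coordinate [1, x_k]

def predict(x):
    v = np.ones(1)
    for G, f in zip(cores, phi(x)):
        v = v @ np.einsum('i,aib->ab', f, G)         # absorb one core
    return v.item()

x = rng.normal(size=n)
print("prediction:", predict(x))
print("TT parameters:", sum(G.size for G in cores), "vs full tensor:", d ** n)
```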
arXiv Detail & Related papers (2021-01-22T16:14:38Z)
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks, treating prediction accuracy and network complexity as two individual objective functions.
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in network complexity are possible with negligible loss of accuracy.
arXiv Detail & Related papers (2020-08-31T13:28:03Z)
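As a scalarized stand-in for that two-objective view (the paper treats the problem as genuinely multiobjective; the l1 surrogate for weight count and the proximal step below are assumptions), sweeping the trade-off weight lambda traces solutions with fewer nonzero weights at the price of a slowly growing loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
w_true = np.zeros(20); w_true[:4] = rng.normal(size=4)   # sparse truth
y = X @ w_true + 0.01 * rng.normal(size=300)

def train(lam, steps=3000, lr=0.01):
    """Proximal gradient on  mean squared error + lam * ||w||_1."""
    w = np.zeros(20)
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y) / len(X))
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

for lam in (0.0, 0.01, 0.1, 1.0):
    w = train(lam)
    loss = np.mean((X @ w - y) ** 2)
    nnz = int((np.abs(w) > 1e-8).sum())
    print(f"lambda={lam:<5} loss={loss:.4f} nonzero weights={nnz}")
```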