Bandgap optimization in combinatorial graphs with tailored ground
states: Application in Quantum annealing
- URL: http://arxiv.org/abs/2102.00551v1
- Date: Sun, 31 Jan 2021 22:11:12 GMT
- Title: Bandgap optimization in combinatorial graphs with tailored ground
states: Application in Quantum annealing
- Authors: Siddhartha Srivastava and Veera Sundararaghavan
- Abstract summary: A mixed-integer linear programming (MILP) formulation is presented for parameter estimation of the Potts model.
Two algorithms are developed: the first method estimates the parameters such that the set of ground states replicates the user-prescribed data set; the second method allows the user to prescribe the multiplicity of the ground states.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A mixed-integer linear programming (MILP) formulation is presented for
parameter estimation of the Potts model. Two algorithms are developed: the
first method estimates the parameters such that the set of ground states
replicates the user-prescribed data set; the second method allows the user to
prescribe the multiplicity of the ground states. In both instances, the
optimization process ensures that the bandgap is maximized. Consequently, the
model parameters efficiently describe the user data for a broad range of
temperatures. This is useful in the development of energy-based graph models
to be simulated on quantum annealing hardware, where the exact simulation
temperature is unknown. Computationally, the memory requirement of this method
grows exponentially with the graph size, so it can only be practically applied
to small graphs. Such applications include learning small generative
classifiers and spin-lattice models with energies described by Ising
Hamiltonians. Learning large data sets adds no extra cost to this method;
however, applications involving the learning of high-dimensional data are out
of scope.
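To make the first algorithm's core idea concrete, here is a minimal sketch that casts bandgap-maximizing parameter estimation as a linear program for a toy problem. The 4-node cycle graph, the two prescribed ground states, and the |J| <= 1 normalization are illustrative assumptions, not the paper's exact MILP formulation; the exhaustive enumeration of all q^N states also shows why the memory requirement grows exponentially with graph size.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy setup (assumed for illustration): 4-node cycle graph with q = 2,
# i.e. an Ising-like two-state Potts model.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n_nodes, q = 4, 2

# User-prescribed ground states (assumption: the two uniform states).
ground_states = [(0, 0, 0, 0), (1, 1, 1, 1)]

def energy_coeffs(state):
    # Potts energy E(s; J) = sum over edges (i, j) of J_ij * delta(s_i, s_j),
    # which is linear in the couplings J -- the key fact behind the (MI)LP.
    return np.array([1.0 if state[i] == state[j] else 0.0 for i, j in edges])

# Enumerate all q^N states: this is the step whose memory cost grows
# exponentially with the graph size, as noted in the abstract.
all_states = list(itertools.product(range(q), repeat=n_nodes))
excited = [s for s in all_states if s not in ground_states]

# Variables: one coupling J per edge, plus the bandgap g. Maximize g,
# i.e. minimize -g, since linprog minimizes.
n_J = len(edges)
c = np.zeros(n_J + 1)
c[-1] = -1.0

# For every excited state s and a reference ground state s*:
#   E(s; J) >= E(s*; J) + g   <=>   E(s*; J) - E(s; J) + g <= 0
g0 = energy_coeffs(ground_states[0])
A_ub = [np.append(g0 - energy_coeffs(s), 1.0) for s in excited]
b_ub = np.zeros(len(A_ub))

# All prescribed ground states must share the same energy.
A_eq = [np.append(g0 - energy_coeffs(s), 0.0) for s in ground_states[1:]]
b_eq = np.zeros(len(A_eq))

# Bound |J_ij| <= 1 so the gap cannot be inflated by rescaling J.
bounds = [(-1.0, 1.0)] * n_J + [(0.0, None)]

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=bounds, method="highs")
print("couplings J:", res.x[:-1])        # expected: all -1 (ferromagnetic)
print("maximized bandgap:", res.x[-1])   # expected: 2.0
```

In this toy run, both prescribed states come out as exact degenerate ground states with a bandgap of 2 (a single spin flip misaligns two edges of the cycle). The paper's second algorithm, which fixes only the number of ground states rather than the states themselves, plausibly introduces the binary variables that make the formulation mixed-integer.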
Related papers
- Computation-Aware Gaussian Processes: Model Selection And Linear-Time Inference [55.150117654242706]
We show that model selection for computation-aware GPs trained on 1.8 million data points can be done within a few hours on a single GPU.
As a result of this work, Gaussian processes can be trained on large-scale datasets without significantly compromising their ability to quantify uncertainty.
arXiv Detail & Related papers (2024-11-01T21:11:48Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- A Hybrid GNN approach for predicting node data for 3D meshes [0.0]
Currently, we predict the best parameters using the finite element method.
We introduce a hybrid approach that helps in processing and generating new data simulations.
The new models outperform existing PointNet and simple graph neural network models when applied to produce the simulations.
arXiv Detail & Related papers (2023-10-23T08:47:27Z)
- Improved Distribution Matching for Dataset Condensation [91.55972945798531]
We propose a novel dataset condensation method based on distribution matching.
Our simple yet effective method outperforms most previous optimization-oriented methods with much fewer computational resources.
arXiv Detail & Related papers (2023-07-19T04:07:33Z)
- Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow one to trade off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z)
- Efficient Graph Laplacian Estimation by Proximal Newton [12.05527862797306]
A graph learning problem can be formulated as a maximum likelihood estimation (MLE) of the precision matrix.
We develop a second-order approach to obtain an efficient solver utilizing several algorithmic features.
arXiv Detail & Related papers (2023-02-13T15:13:22Z)
- FaDIn: Fast Discretized Inference for Hawkes Processes with General Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z)
- Condensing Graphs via One-Step Gradient Matching [50.07587238142548]
We propose a one-step gradient matching scheme, which performs gradient matching for only a single step without training the network weights.
Our theoretical analysis shows this strategy can generate synthetic graphs that lead to lower classification loss on real graphs.
In particular, we are able to reduce the dataset size by 90% while approximating up to 98% of the original performance.
arXiv Detail & Related papers (2022-06-15T18:20:01Z)
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ).
The proposed method achieves highly accurate approximations, which ultimately lead to significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
- Neuralizing Efficient Higher-order Belief Propagation [19.436520792345064]
We propose to combine graph neural network and belief propagation approaches to learn better node and graph representations.
We derive an efficient approximate sum-product loopy belief propagation inference algorithm for higher-order PGMs.
Our model indeed captures higher-order information, substantially outperforming state-of-the-art $k$-order graph neural networks in molecular datasets.
arXiv Detail & Related papers (2020-10-19T07:51:31Z)
- Doubly Sparse Variational Gaussian Processes [14.209730729425502]
We show that the inducing point framework is still valid for state space models and that it can bring further computational and memory savings.
This work makes it possible to use the state-space formulation inside deep Gaussian process models as illustrated in one of the experiments.
arXiv Detail & Related papers (2020-01-15T15:07:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.