Incremental Updates of Generalized Hypertree Decompositions
- URL: http://arxiv.org/abs/2209.10375v1
- Date: Wed, 21 Sep 2022 14:12:16 GMT
- Title: Incremental Updates of Generalized Hypertree Decompositions
- Authors: Georg Gottlob, Matthias Lanzinger, Davide Mario Longo, Cem Okulmus
- Abstract summary: We make the first steps toward solving the problem of updating the decomposition of a CSP $P$ so that it becomes a valid decomposition of a new CSP $P'$ produced by some modification of $P$.
Even though the problem is hard in theory, we propose and implement a framework for effectively updating GHDs.
- Score: 7.082768797784365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structural decomposition methods, such as generalized hypertree
decompositions, have been successfully used for solving constraint satisfaction
problems (CSPs). As decompositions can be reused to solve CSPs with the same
constraint scopes, investing resources in computing good decompositions is
beneficial, even though the computation itself is hard. Unfortunately, current
methods need to compute a completely new decomposition even if the scopes
change only slightly. In this paper, we make the first steps toward solving the
problem of updating the decomposition of a CSP $P$ so that it becomes a valid
decomposition of a new CSP $P'$ produced by some modification of $P$. Even
though the problem is hard in theory, we propose and implement a framework for
effectively updating GHDs. The experimental evaluation of our algorithm
strongly suggests practical applicability.
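To make the decomposition conditions concrete, here is a minimal sketch in Python (illustrative only, not the authors' implementation) of what it means for a tree of variable "bags" to be a valid decomposition of a set of constraint scopes: every scope must be contained in some bag, and for each variable the bags containing it must form a connected subtree.

```python
# A minimal validity check for a (generalized hypertree) decomposition.
# Hypothetical names; the width computation is omitted for brevity.

def is_valid_decomposition(scopes, tree, bags):
    """scopes: list of sets of variables (constraint scopes);
    tree: dict mapping node -> list of neighbour nodes;
    bags: dict mapping node -> set of variables."""
    # Condition 1: every constraint scope fits inside some bag.
    if not all(any(s <= bags[n] for n in bags) for s in scopes):
        return False
    # Condition 2: for each variable, the nodes whose bags contain it
    # induce a connected subtree (checked by a traversal).
    for v in set().union(*scopes):
        nodes = {n for n in bags if v in bags[n]}
        if not nodes:
            continue
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack.extend(m for m in tree[n] if m in nodes)
        if seen != nodes:
            return False
    return True

# Tiny example: scopes {x,y} and {y,z}, covered by a two-node tree.
scopes = [{"x", "y"}, {"y", "z"}]
tree = {1: [2], 2: [1]}
bags = {1: {"x", "y"}, 2: {"y", "z"}}
print(is_valid_decomposition(scopes, tree, bags))  # True
```

Under this view, updating a decomposition after a small scope change amounts to re-checking these conditions and repairing only the part of the tree where they fail, instead of recomputing the whole decomposition.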
Related papers
- Global Convergence of Decentralized Retraction-Free Optimization on the Stiefel Manifold [12.414718831844041]
We show that DRFGT, a decentralized retraction-free gradient tracking method, achieves global convergence on the Stiefel manifold.
We also show that DRFGT applies to optimization over a network of agents.
arXiv Detail & Related papers (2024-05-19T15:50:57Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of lookahead as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
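The stabilizing mechanism can be illustrated generically (this is the textbook Krasnosel'skii-Mann iteration, not the paper's algorithm): instead of applying a base update $T$ directly, move only part of the way toward $T(x)$, which converges to a fixed point whenever $T$ is nonexpansive, even if iterating $T$ alone oscillates.

```python
import numpy as np

def interpolated_iteration(T, x0, alpha=0.5, steps=100):
    # x <- (1 - alpha) * x + alpha * T(x): linear interpolation between
    # the current iterate and the base update's output.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# Example: a pure rotation is nonexpansive and oscillates forever on
# its own, but the interpolated iteration converges to its fixed point.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.linalg.norm(interpolated_iteration(lambda x: A @ x, [1.0, 1.0])))
# prints a value close to 0
```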
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
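Here $V_T$ is the gradient variation of the loss sequence. The abstract does not restate its definition; the standard one in this line of work (stated here as an assumption) is

```latex
V_T = \sum_{t=2}^{T} \sup_{x \in \mathcal{X}}
      \left\| \nabla f_t(x) - \nabla f_{t-1}(x) \right\|^2 ,
```

so the bounds above shrink when consecutive losses have similar gradients, e.g. in slowly drifting environments.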
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Cooperative Coevolution for Non-Separable Large-Scale Black-Box Optimization: Convergence Analyses and Distributed Accelerations [13.750841199401613]
We analyze and extend the large-scale version of the well-known cooperative coevolution (CC) on non-separable functions.
We reveal empirical reasons why decomposition-based methods are, or are not, preferred in practice on some non-separable large-scale problems.
We use powerful distributed computing to accelerate it under the recent multi-level learning framework.
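For readers unfamiliar with cooperative coevolution, the core loop is simple: partition the decision variables into blocks and optimize each block in turn with the rest frozen. The sketch below shows that skeleton only, with random search as a stand-in subsolver; it is not the paper's distributed multi-level variant, and all names are hypothetical.

```python
import numpy as np

def cc_minimize(f, x0, block_size=2, rounds=50, trials=30, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = len(x)
    blocks = [np.arange(i, min(i + block_size, n))
              for i in range(0, n, block_size)]
    for _ in range(rounds):
        for idx in blocks:              # optimize one block at a time
            best = f(x)
            for _ in range(trials):
                cand = x.copy()
                cand[idx] += rng.normal(scale=0.3, size=len(idx))
                val = f(cand)
                if val < best:          # keep improvements only
                    x, best = cand, val
    return x

# Example on the sphere function; non-separable objectives are where
# the grouping choice matters, per the summary above.
print(cc_minimize(lambda x: float(np.sum(x ** 2)), np.ones(6)))
```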
arXiv Detail & Related papers (2023-04-11T07:15:37Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
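This linearization idea is the same one behind conditional-gradient (Frank-Wolfe) methods: replace the smooth term by its first-order model and minimize that model exactly over the compact feasible set. A minimal generic sketch over the probability simplex (not the paper's fully composite scheme):

```python
import numpy as np

def frank_wolfe(grad, x0, steps=200):
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0       # exact linear minimizer on simplex
        gamma = 2.0 / (t + 2.0)     # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - b||^2 over the simplex; since b already lies
# in the simplex, the iterates approach b itself.
b = np.array([0.1, 0.7, 0.2])
print(frank_wolfe(lambda x: 2 * (x - b), np.ones(3) / 3))
```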
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z) - A Screening Strategy for Structured Optimization Involving Nonconvex $\ell_{q,p}$ Regularization [5.522202658637732]
We develop a screening rule strategy to improve the computational efficiency in solving structured optimization involving nonconvex $\ell_{q,p}$ regularization.
We prove that our rule can remove all inactive variables in a finite number of iterations of the IRL1 method.
Numerical experiments illustrate the efficiency of our screening rule strategy compared with several state-of-the-art algorithms.
arXiv Detail & Related papers (2022-08-02T10:01:49Z) - Planning and Learning with Adaptive Lookahead [74.39132848733847]
The Policy Iteration (PI) algorithm alternates between greedy one-step policy improvement and policy evaluation.
Recent literature shows that multi-step lookahead policy improvement leads to a better convergence rate at the expense of increased complexity per iteration.
We propose for the first time to dynamically adapt the multi-step lookahead horizon as a function of the state and of the value estimate.
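For reference, a minimal sketch of policy iteration with an h-step lookahead improvement on a tabular MDP; the paper's contribution is choosing the lookahead depth per state, whereas this sketch uses a fixed h (all numbers hypothetical).

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, h=1, iters=50):
    # P[a]: n x n transition matrix for action a; R[a]: n reward vector.
    # Assumes h >= 1.
    n_actions, n_states = len(P), P[0].shape[0]
    pi = np.zeros(n_states, dtype=int)
    for _ in range(iters):
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[pi[s]][s] for s in range(n_states)])
        r_pi = np.array([R[pi[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # h-step lookahead improvement: apply the Bellman optimality
        # operator h times, then act greedily w.r.t. the last backup.
        for _ in range(h):
            q = np.array([R[a] + gamma * P[a] @ v for a in range(n_actions)])
            v = q.max(axis=0)
        pi = q.argmax(axis=0)
    return pi, v

# Two-state, two-action toy MDP.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.1, 0.9], [0.9, 0.1]])]
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
print(policy_iteration(P, R, h=3))
```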
arXiv Detail & Related papers (2022-01-28T20:26:55Z) - Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point in the optimal $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector product computations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
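Our reading of the general idea, sketched generically and not as the paper's exact algorithm: momentum averages stale gradients, and a Hessian-vector product can transport that stale average to the current iterate. The sketch approximates the Hessian-vector product by a finite difference of gradients.

```python
import numpy as np

def hvp(grad, x, v, eps=1e-5):
    # Finite-difference approximation of the Hessian-vector product.
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

def corrected_momentum(grad, x0, lr=0.1, beta=0.9, steps=100):
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    m = grad(x)
    for _ in range(steps):
        # Transport the stale momentum along the last step, then mix
        # in the fresh gradient.
        m = beta * (m + hvp(grad, x, x - x_prev)) + (1 - beta) * grad(x)
        x_prev = x.copy()
        x = x - lr * m
    return x

# Example: quadratic bowl with minimum at b (hypothetical data).
b = np.array([1.0, -2.0])
print(corrected_momentum(lambda x: 2 * (x - b), np.zeros(2)))  # ~ b
```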
arXiv Detail & Related papers (2021-03-04T19:01:20Z) - NeCPD: An Online Tensor Decomposition with Optimal Stochastic Gradient Descent [1.0953917735844645]
We propose a new efficient decomposition algorithm named NeCPD for the non-convex problem in multi-way online data, based on $N$-order CP decomposition.
We further apply this method in the field of real-life monitoring using structural datasets.
arXiv Detail & Related papers (2020-03-18T04:44:05Z) - Testing Unsatisfiability of Constraint Satisfaction Problems via Tensor Products [0.8528384027684192]
We show how to use the tensor decomposition to compute a proof of unsatisfiability efficiently and in parallel.
One of the decompositions yields proofs of unsatisfiability in half the time without sacrificing proof quality.
Our method is applicable to arbitrary CSPs via the well-known dual and hidden variable transformations to binary CSPs.
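The dual transformation is standard and easy to sketch: each original constraint becomes a dual variable whose domain is its set of satisfying tuples, and two dual variables with overlapping scopes are linked by a binary constraint forcing agreement on the shared original variables (illustrative names throughout):

```python
def dual_transform(constraints):
    """constraints: list of (scope, tuples), where scope is a tuple of
    variable names and tuples lists the satisfying assignments."""
    dual_domains = {i: tups for i, (_, tups) in enumerate(constraints)}
    dual_constraints = {}
    for i, (scope_i, _) in enumerate(constraints):
        for j, (scope_j, _) in enumerate(constraints):
            shared = set(scope_i) & set(scope_j)
            if i >= j or not shared:
                continue
            # Binary constraint: the two chosen tuples must agree on
            # every shared original variable.
            dual_constraints[(i, j)] = {
                (t_i, t_j)
                for t_i in dual_domains[i] for t_j in dual_domains[j]
                if all(t_i[scope_i.index(v)] == t_j[scope_j.index(v)]
                       for v in shared)
            }
    return dual_domains, dual_constraints

# Example: x + y = 1 and y + z = 1 over domain {0, 1}.
c1 = (("x", "y"), [(0, 1), (1, 0)])
c2 = (("y", "z"), [(0, 1), (1, 0)])
print(dual_transform([c1, c2])[1])
```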
arXiv Detail & Related papers (2020-01-31T18:06:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.