Improved Algorithms for Allen's Interval Algebra by Dynamic Programming
with Sublinear Partitioning
- URL: http://arxiv.org/abs/2305.15950v1
- Date: Thu, 25 May 2023 11:45:12 GMT
- Title: Improved Algorithms for Allen's Interval Algebra by Dynamic Programming
with Sublinear Partitioning
- Authors: Leif Eriksson and Victor Lagerkvist
- Abstract summary: Allen's interval algebra is one of the most well-known calculi in qualitative temporal reasoning.
We propose a novel framework for solving NP-hard qualitative reasoning problems.
We obtain a major improvement of $O^*((\frac{cn}{\log{n}})^{n})$ for Allen's interval algebra.
- Score: 9.594432031144716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Allen's interval algebra is one of the most well-known calculi in qualitative
temporal reasoning with numerous applications in artificial intelligence.
Recently, there has been a surge of improvements in the fine-grained complexity
of NP-hard reasoning tasks, improving the running time from the naive
$2^{O(n^2)}$ to $O^*((1.0615n)^{n})$, with even faster algorithms for unit
intervals and a bounded number of overlapping intervals (the $O^*(\cdot)$ notation
suppresses polynomial factors). Despite these improvements the best known lower
bound is still only $2^{o(n)}$ (under the exponential-time hypothesis) and
major improvements in either direction seemingly require fundamental advances
in computational complexity. In this paper we propose a novel framework for
solving NP-hard qualitative reasoning problems which we refer to as dynamic
programming with sublinear partitioning. Using this technique we obtain a major
improvement of $O^*((\frac{cn}{\log{n}})^{n})$ for Allen's interval algebra. To
demonstrate that the technique is applicable to more domains we apply it to a
problem in qualitative spatial reasoning, the cardinal direction point algebra,
and solve it in $O^*((\frac{cn}{\log{n}})^{2n/3})$ time. Hence, not only do we
significantly advance the state-of-the-art for NP-hard qualitative reasoning
problems, but obtain a novel algorithmic technique that is likely applicable to
many problems where $2^{O(n)}$ time algorithms are unlikely.
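As background for the abstract above, Allen's interval algebra is built on 13 base relations that can hold between two intervals (before, meets, overlaps, starts, during, finishes, their inverses, and equals). A minimal sketch of classifying two concrete intervals, assuming closed intervals with distinct endpoints represented as `(start, end)` pairs:

```python
def allen_relation(a, b):
    """Return the Allen base relation holding between intervals a and b.

    a and b are (start, end) pairs with start < end. Exactly one of the
    13 base relations holds for any pair of intervals.
    """
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e2 < s1:
        return "after"
    if e1 == s2:
        return "meets"
    if e2 == s1:
        return "met-by"
    if s1 == s2 and e1 == e2:
        return "equals"
    if s1 == s2:
        return "starts" if e1 < e2 else "started-by"
    if e1 == e2:
        return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2:
        return "during"
    if s1 < s2 and e2 < e1:
        return "contains"
    # Remaining cases: the intervals properly overlap.
    return "overlaps" if s1 < s2 else "overlapped-by"
```

The NP-hard reasoning task discussed in the paper is the network consistency problem: given a constraint network where each edge is labeled with a *set* of allowed base relations, decide whether every variable can be assigned an interval so that all constraints are satisfied.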
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function query complexities from $\mathcal{O}\left(\min\left\{\frac{dn^{1/2}}{\epsilon^{2}}, \frac{d}{\epsilon^{3}}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^{2}}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z)
- Fast decision tree learning solves hard coding-theoretic problems [7.420043502440765]
We show that an improvement of Ehrenfeucht and Haussler's algorithm would yield $O(\log n)$-approximation algorithms for $k$-NCP.
This can be interpreted as a new avenue for designing algorithms for $k$-NCP, or as one for establishing the optimality of Ehrenfeucht and Haussler's algorithm.
arXiv Detail & Related papers (2024-09-19T21:40:57Z)
- Second-order optimization with lazy Hessians [55.51077907483634]
We analyze Newton's method with lazy Hessian updates for solving general, possibly non-linear, optimization problems.
We reuse a previously computed Hessian for several iterations while computing new gradients at each step of the method.
arXiv Detail & Related papers (2022-12-01T18:58:26Z)
- A Multivariate Complexity Analysis of Qualitative Reasoning Problems [9.594432031144716]
We introduce the classes FPE and XE of problems solvable in $f(k) \cdot 2^{O(n)}$, respectively $f(k)^{n}$, time.
We show that the Partially Ordered Time problem of effective width $k$ is solvable in $16^{kn}$ time and is thus included in XE.
We also show that the network consistency problem for Allen's interval algebra with no interval overlapping more than $k$ others is solvable in $(2nk)^{2k} \cdot 2^{n}$ time and is included in FPE.
arXiv Detail & Related papers (2022-09-30T07:29:53Z)
- An Accelerated Stochastic Algorithm for Solving the Optimal Transport Problem [2.1485350418225244]
A primal-dual accelerated gradient descent with variance reduction algorithm (PDASGD) is proposed to solve linear-constrained optimization problems.
PDASGD enjoys the best-known computational complexity -- $\widetilde{\mathcal{O}}(n^2/\epsilon)$, where $n$ is the number of atoms and $\epsilon>0$ is the accuracy.
arXiv Detail & Related papers (2022-03-02T01:16:10Z)
- Choosing the Right Algorithm With Hints From Complexity Theory [16.33500498939925]
We show that the Metropolis algorithm is clearly the best of all algorithms considered for reasonable problem sizes.
An artificial algorithm of this type having an $O(n \log n)$ runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem in time $O(n \log n)$ with high probability.
arXiv Detail & Related papers (2021-09-14T11:12:32Z)
- Solving Infinite-Domain CSPs Using the Patchwork Property [17.101875753030754]
The constraint satisfaction problem (CSP) has important applications in computer science and AI.
A highly successful approach is to bound the treewidth of the underlying primal graph.
We improve this bound to $f(w) \cdot n^{O(1)}$ for CSPs whose basic relations have the patchwork property.
arXiv Detail & Related papers (2021-07-03T13:04:41Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Higher-order Derivatives of Weighted Finite-state Machines [68.43084108204741]
This work examines the computation of higher-order derivatives with respect to the normalization constant for weighted finite-state machines.
We provide a general algorithm for evaluating derivatives of all orders, which has not been previously described in the literature.
Our algorithm is significantly faster than prior algorithms.
arXiv Detail & Related papers (2021-06-01T19:51:55Z)
- Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we consider the non-strongly-convex setting of gradient-free minimax optimization problems.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
- Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that these methods achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)
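Several of the entries above concern zeroth-order (derivative-free) optimization, where gradients are estimated from function values alone. A minimal sketch of the standard two-point gradient estimator, which underlies such methods; the function, smoothing radius `mu`, and direction count are illustrative assumptions, not details taken from any of the papers:

```python
import random

def zo_gradient(f, x, mu=1e-4, num_dirs=10):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
    Gaussian directions u; only function values are queried, never
    derivatives. mu is the smoothing radius.
    """
    d = len(x)
    grad = [0.0] * d
    for _ in range(num_dirs):
        u = [random.gauss(0.0, 1.0) for _ in range(d)]
        xp = [xi + mu * ui for xi, ui in zip(x, u)]
        xm = [xi - mu * ui for xi, ui in zip(x, u)]
        # Directional finite difference, averaged over all directions.
        scale = (f(xp) - f(xm)) / (2.0 * mu * num_dirs)
        for i in range(d):
            grad[i] += scale * u[i]
    return grad
```

Each call costs two function evaluations per direction, which is why the papers above measure progress in function query complexity rather than gradient evaluations.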
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.