The Boundaries of Tractability in Hierarchical Task Network Planning
- URL: http://arxiv.org/abs/2401.14174v1
- Date: Thu, 25 Jan 2024 13:34:33 GMT
- Title: The Boundaries of Tractability in Hierarchical Task Network Planning
- Authors: Cornelius Brand, Robert Ganian, Fionn Mc Inerney, Simon Wietheger
- Abstract summary: We study the complexity-theoretic boundaries of tractability for three classical problems in the context of Hierarchical Task Network Planning.
We show that all three problems can be solved in polynomial time on primitive task networks of constant partial order width.
- Score: 15.926731632774768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the complexity-theoretic boundaries of tractability for three
classical problems in the context of Hierarchical Task Network Planning: the
validation of a provided plan, whether an executable plan exists, and whether a
given state can be reached by some plan. We show that all three problems can be
solved in polynomial time on primitive task networks of constant partial order
width (and a generalization thereof), whereas for the latter two problems this
holds only under a provably necessary restriction to the state space. Next, we
obtain an algorithmic meta-theorem along with corresponding lower bounds to
identify tight conditions under which general polynomial-time solvability
results can be lifted from primitive to general task networks. Finally, we
enrich our investigation by analyzing the parameterized complexity of the three
considered problems, and show that (1) fixed-parameter tractability for all
three problems can be achieved by replacing the partial order width with the
vertex cover number of the network as the parameter, and (2) other classical
graph-theoretic parameters of the network (including treewidth, treedepth, and
the aforementioned partial order width) do not yield fixed-parameter
tractability for any of the three problems.
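The plan validation problem above has a direct operational reading for primitive, totally ordered plans: apply the actions in order and check that each precondition holds when its action is executed. Below is a minimal illustrative sketch in a STRIPS-like state model; the names (`Action`, `preconditions`, `add_effects`, `delete_effects`) are our own assumptions rather than the paper's formalism, and the sketch ignores the partial order and the decomposition of compound tasks that drive the paper's complexity results.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A primitive task in a STRIPS-like state model (illustrative only)."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

def validate_plan(initial_state: frozenset, plan: list[Action]) -> bool:
    """Check that every action's preconditions hold when it is applied.

    This corresponds to plan validation for a *totally ordered* primitive
    plan; the paper studies the harder setting where the task network is
    only partially ordered.
    """
    state = set(initial_state)
    for action in plan:
        if not action.preconditions <= state:
            return False  # some precondition is not satisfied
        state -= action.delete_effects
        state |= action.add_effects
    return True

# Toy usage: move a package from A to B.
load = Action("load", frozenset({"at-truck-A", "at-pkg-A"}),
              frozenset({"in-truck"}), frozenset({"at-pkg-A"}))
drive = Action("drive", frozenset({"at-truck-A"}),
               frozenset({"at-truck-B"}), frozenset({"at-truck-A"}))
unload = Action("unload", frozenset({"at-truck-B", "in-truck"}),
                frozenset({"at-pkg-B"}), frozenset({"in-truck"}))
print(validate_plan(frozenset({"at-truck-A", "at-pkg-A"}), [load, drive, unload]))  # True
```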
Related papers
- OMEGA: Can LLMs Reason Outside the Box in Math? Evaluating Exploratory, Compositional, and Transformative Generalization [88.76091817642963]
Recent large-scale language models (LLMs) with long Chain-of-Thought reasoning, such as DeepSeek-R1, have achieved impressive results on Olympiad-level mathematics. We introduce OMEGA (Out-of-distribution Math Problems Evaluation with 3 Generalization Axes), a benchmark designed to evaluate three axes of out-of-distribution generalization.
arXiv Detail & Related papers (2025-06-23T17:51:40Z) - Propositional Logic for Probing Generalization in Neural Networks [3.6037930269014633]
We investigate the generalization behavior of three key neural architectures (Transformers, Graph Convolution Networks, and LSTMs) in a controlled task rooted in propositional logic. We find that Transformers fail to apply negation compositionally unless structural biases are introduced. Our findings highlight persistent limitations in the ability of standard architectures to learn systematic representations of logical operators.
arXiv Detail & Related papers (2025-06-10T16:46:05Z) - Decomposability-Guaranteed Cooperative Coevolution for Large-Scale Itinerary Planning [6.565536870180592]
Large-scale itinerary planning is a variant of the traveling salesman problem. This paper analyzes the decomposability of large-scale itinerary planning. We propose a novel multi-objective cooperative coevolutionary algorithm for large-scale itinerary planning.
arXiv Detail & Related papers (2025-06-06T14:31:57Z) - Subfunction Structure Matters: A New Perspective on Local Optima Networks [0.0]
Local optima networks (LONs) capture fitness landscape information. We consider how LON analysis can be improved by incorporating subfunction-based information.
arXiv Detail & Related papers (2025-04-17T07:31:11Z) - PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving [89.60370366013142]
We propose PlanGEN, a model-agnostic and easily scalable agent framework with three key components: constraint, verification, and selection agents.
Specifically, our approach uses constraint-guided iterative verification to enhance the performance of inference-time algorithms.
arXiv Detail & Related papers (2025-02-22T06:21:56Z) - Exploring the loss landscape of regularized neural networks via convex duality [42.48510370193192]
We discuss several aspects of the loss landscape of regularized neural networks.
We first characterize the solution set of the convex problem using its dual and further characterize all stationary points.
We show that the solution set characterization and connectivity results can be extended to different architectures.
arXiv Detail & Related papers (2024-11-12T11:41:38Z) - Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z) - The Complexity of Optimizing Atomic Congestion [14.845310803203724]
Atomic congestion games are a classic topic in network design, routing, and algorithmic game theory.
We show that the problem remains highly intractable even on extremely simple networks.
We conclude by extending our analysis towards the (even more challenging) min-max variant of the problem.
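For context on the object being optimized, the following is a small illustrative sketch (our own, not code from the paper) of how a strategy profile is evaluated in an atomic congestion game: each player selects a path, each edge's cost depends on how many players use it, and the social cost sums each player's path cost.

```python
from collections import Counter

def social_cost(profile, edge_cost):
    """Total cost of a strategy profile in an atomic congestion game.

    profile   : list of paths, one per player; a path is a list of edges.
    edge_cost : maps (edge, load) -> cost of that edge when `load` players use it.
    """
    load = Counter(edge for path in profile for edge in path)
    return sum(edge_cost(edge, load[edge]) for path in profile for edge in path)

# Toy instance: two parallel edges between s and t, linear latencies.
def edge_cost(edge, load):
    return {"top": 1 * load, "bottom": 2 * load}[edge]

# Both players on the top edge vs. one player per edge.
print(social_cost([["top"], ["top"]], edge_cost))     # 2 + 2 = 4
print(social_cost([["top"], ["bottom"]], edge_cost))  # 1 + 2 = 3
```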
arXiv Detail & Related papers (2023-12-15T21:31:30Z) - A Structural Complexity Analysis of Synchronous Dynamical Systems [33.788917274180484]
We study the three most notable problems in synchronous dynamical systems.
All three problems were known to be intractable in the classical sense.
We show that all three problems remain intractable even on instances of constant treewidth.
arXiv Detail & Related papers (2023-12-12T10:49:33Z) - What Planning Problems Can A Relational Neural Network Solve? [91.53684831950612]
We present a circuit complexity analysis for relational neural networks representing policies for planning problems.
We show that there are three general classes of planning problems, in terms of the growth of circuit width and depth.
We also illustrate the utility of this analysis for designing neural networks for policy learning.
arXiv Detail & Related papers (2023-12-06T18:47:28Z) - Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
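As a purely illustrative, one-dimensional simplification of the first claim (our own sketch, not the paper's construction, which works over compact sets in higher dimensions), a small ReLU network can approximate the indicator of an interval [a, b] up to a sharpness parameter eps:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def interval_indicator(x, a, b, eps=1e-3):
    """ReLU approximation of the indicator of [a, b] (1-D illustrative sketch).

    ramp(t) = relu(t/eps) - relu(t/eps - 1) rises from 0 to 1 over [0, eps];
    the difference of two ramps is ~1 on [a + eps, b] and ~0 outside [a, b + eps].
    """
    ramp = lambda t: relu(t / eps) - relu(t / eps - 1.0)
    return ramp(x - a) - ramp(x - b)

x = np.array([-1.0, 0.5, 2.0])
print(interval_indicator(x, a=0.0, b=1.0))  # approximately [0., 1., 0.]
```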
arXiv Detail & Related papers (2023-05-25T14:17:15Z) - The Sample Complexity of One-Hidden-Layer Neural Networks [57.6421258363243]
We study a class of scalar-valued one-hidden-layer networks with inputs bounded in Euclidean norm.
We prove that controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees.
We analyze two important settings where a mere spectral norm control turns out to be sufficient.
arXiv Detail & Related papers (2022-02-13T07:12:02Z) - Adaptive Discretization in Online Reinforcement Learning [9.560980936110234]
Two major questions in designing discretization-based algorithms are how to create the discretization and when to refine it.
We provide a unified theoretical analysis of tree-based hierarchical partitioning methods for online reinforcement learning.
Our algorithms are easily adapted to operating constraints, and our theory provides explicit bounds across each of the three facets.
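The two design questions above (how to create a discretization and when to refine it) can be made concrete with a tiny count-based refinement rule; this is a generic illustration of our own, not the tree-based method analyzed in the paper:

```python
def refine(cells, counts, threshold=8):
    """Split a 1-D interval cell in half once it has been visited often.

    cells  : list of (lo, hi) intervals partitioning the state space.
    counts : visit counts, aligned with `cells`.
    A generic 'refine where data concentrates' rule, purely illustrative.
    """
    new_cells, new_counts = [], []
    for (lo, hi), c in zip(cells, counts):
        if c >= threshold:
            mid = (lo + hi) / 2.0
            new_cells += [(lo, mid), (mid, hi)]
            new_counts += [0, 0]
        else:
            new_cells.append((lo, hi))
            new_counts.append(c)
    return new_cells, new_counts

cells, counts = [(0.0, 1.0)], [10]      # one heavily visited cell
cells, counts = refine(cells, counts)   # -> [(0.0, 0.5), (0.5, 1.0)]
print(cells)
```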
arXiv Detail & Related papers (2021-10-29T15:06:15Z) - Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks [75.33431791218302]
We study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape.
We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases.
arXiv Detail & Related papers (2021-10-18T18:00:36Z) - Computing Complexity-aware Plans Using Kolmogorov Complexity [0.9137554315375922]
We introduce complexity-aware planning for finite-horizon deterministic finite automata with rewards as outputs.
We present two algorithms obtaining low-complexity policies, where the first algorithm obtains a low-complexity optimal policy.
We evaluate the algorithms on a simple navigation task for a mobile robot, where our algorithms yield low-complexity policies that concur with intuition.
arXiv Detail & Related papers (2021-09-21T16:25:04Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Global Convergence of Three-layer Neural Networks in the Mean Field Regime [3.553493344868413]
In the mean field regime, neural networks are appropriately scaled so that as the width tends to infinity, the learning dynamics tends to a nonlinear and nontrivial dynamical limit, known as the mean field limit.
Recent works have successfully applied such analysis to two-layer networks and provided global convergence guarantees.
We prove a global convergence result for unregularized feedforward three-layer networks in the mean field regime.
arXiv Detail & Related papers (2021-05-11T17:45:42Z) - Bounding the Complexity of Formally Verifying Neural Networks: A Geometric Approach [1.9493449206135296]
We consider the complexity of formally verifying ReLU Neural Networks (NNs).
In this paper, we show that for two different NNs, the verification problem satisfies two different types of constraints.
Both algorithms efficiently translate the NN parameters into the effect of the NN's architecture by means of hyperplanes.
arXiv Detail & Related papers (2020-12-22T00:29:54Z) - Automated Search for Resource-Efficient Branched Multi-Task Networks [81.48051635183916]
We propose a principled approach, rooted in differentiable neural architecture search, to automatically define branching structures in a multi-task neural network.
We show that our approach consistently finds high-performing branching structures within limited resource budgets.
arXiv Detail & Related papers (2020-08-24T09:49:19Z) - An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
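As a minimal illustration of casting inference as an ILP (our own toy example using SciPy's milp, available from SciPy 1.9, and not the paper's framework), the following assigns exactly one label to each variable so as to maximize a score:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Scores for 3 variables x 2 labels; inference = pick one label per variable
# maximizing total score, expressed as an ILP over binary indicators x[i, j].
scores = np.array([[0.9, 0.1],
                   [0.2, 0.8],
                   [0.6, 0.4]])
n_vars, n_labels = scores.shape

c = -scores.ravel()  # milp minimizes, so negate the scores

# "Exactly one label per variable": for each i, sum_j x[i, j] = 1.
A = np.zeros((n_vars, n_vars * n_labels))
for i in range(n_vars):
    A[i, i * n_labels:(i + 1) * n_labels] = 1.0

res = milp(c=c,
           constraints=LinearConstraint(A, lb=1, ub=1),
           integrality=np.ones(n_vars * n_labels),  # all variables integer
           bounds=Bounds(0, 1))                     # ... and binary

assignment = res.x.reshape(n_vars, n_labels).argmax(axis=1)
print(assignment)  # [0 1 0]
```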
arXiv Detail & Related papers (2020-06-18T20:09:53Z) - Beyond Worst-Case Analysis in Stochastic Approximation: Moment Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.