HyperTensioN and Total-order Forward Decomposition optimizations
- URL: http://arxiv.org/abs/2207.00345v1
- Date: Fri, 1 Jul 2022 11:23:52 GMT
- Title: HyperTensioN and Total-order Forward Decomposition optimizations
- Authors: Maurício Cecílio Magnaguagno and Felipe Meneguzzi and Lavindra de Silva
- Abstract summary: Hierarchical Task Networks (HTN) planners generate plans using a decomposition process with extra domain knowledge to guide search towards a planning task.
While domain experts develop HTN descriptions, they may repeatedly describe the same preconditions, or define methods that are rarely used or rarely even possible to decompose.
By leveraging a three-stage compiler design we can easily support more language descriptions and preprocessing optimizations that when chained can greatly improve runtime efficiency in such domains.
- Score: 26.665468404059354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hierarchical Task Networks (HTN) planners generate plans using a
decomposition process with extra domain knowledge to guide search towards a
planning task. While domain experts develop HTN descriptions, they may
repeatedly describe the same preconditions, or define methods that are rarely
used or rarely even possible to decompose. By leveraging a three-stage compiler
design we can easily support more language descriptions and preprocessing
optimizations that, when chained, can greatly improve runtime efficiency in
such domains. In this
paper we evaluate such optimizations with the HyperTensioN HTN planner, used in
the HTN IPC 2020.
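To make the chained-optimization idea concrete, below is a minimal sketch of two such preprocessing passes over a toy HTN domain: pruning methods whose subtasks can never be decomposed, and hoisting preconditions shared by every remaining method of a task so they are checked only once. The data structures and names (Domain, Method, the "deliver" example) are assumptions made for illustration; this is not HyperTensioN's actual implementation (HyperTensioN itself is written in Ruby).

```python
# Illustrative sketch of two chained HTN preprocessing passes.
# The domain representation below is a toy assumption, not HyperTensioN's format.
from dataclasses import dataclass, field

@dataclass
class Method:
    name: str
    preconditions: set = field(default_factory=set)
    subtasks: list = field(default_factory=list)   # operator or compound-task names

@dataclass
class Domain:
    operators: set   # primitive task (action) names
    methods: dict    # compound task name -> list of Method

def prune_undecomposable(domain: Domain) -> Domain:
    """Drop methods that mention a subtask which is neither a known operator
    nor a decomposable compound task; such methods can never succeed."""
    changed = True
    while changed:                                  # iterate to a fixed point
        changed = False
        for task, methods in list(domain.methods.items()):
            reachable = [m for m in methods
                         if all(s in domain.operators or domain.methods.get(s)
                                for s in m.subtasks)]
            if len(reachable) != len(methods):
                changed = True
            if reachable:
                domain.methods[task] = reachable
            else:                                   # task itself became undecomposable
                del domain.methods[task]
                changed = True
    return domain

def hoist_shared_preconditions(domain: Domain) -> dict:
    """Collect preconditions repeated in every method of a task so they can
    be checked once before any decomposition of that task is attempted."""
    shared = {}
    for task, methods in domain.methods.items():
        common = set.intersection(*(m.preconditions for m in methods))
        if common:
            shared[task] = common
            for m in methods:
                m.preconditions -= common
    return shared

if __name__ == "__main__":
    d = Domain(
        operators={"move", "load"},
        methods={
            "deliver": [Method("by-truck", {"has-truck", "fuel"}, ["load", "move"]),
                        Method("by-teleport", {"has-truck"}, ["teleport"])],
        },
    )
    prune_undecomposable(d)                         # "by-teleport" is removed
    print(hoist_shared_preconditions(d))            # {'deliver': {'has-truck', 'fuel'}} (set order may vary)
```

Chaining matters because one pass can expose work for the next: only after the undecomposable "by-teleport" method is pruned does the "fuel" precondition become common to all remaining methods of "deliver" and therefore hoistable.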
Related papers
- Messenger RNA Design via Expected Partition Function and Continuous Optimization [4.53482492156538]
We develop a general framework for continuous optimization based on a generalization of the classical partition function.
We consider the important problem of mRNA design, with wide applications in vaccines and therapeutics.
arXiv Detail & Related papers (2023-12-29T18:37:38Z)
- All-to-all reconfigurability with sparse and higher-order Ising machines [0.0]
We introduce a multiplexed architecture that emulates all-to-all network functionality.
We show that running the adaptive parallel tempering algorithm yields competitive algorithmic and prefactor advantages.
Scaled magnetic versions of p-bit IMs could lead to orders-of-magnitude improvements over the state of the art for generic optimization.
arXiv Detail & Related papers (2023-11-21T20:27:02Z)
- Dynamic Voxel Grid Optimization for High-Fidelity RGB-D Supervised Surface Reconstruction [130.84162691963536]
We introduce a novel dynamic grid optimization method for high-fidelity 3D surface reconstruction.
We optimize the process by dynamically modifying the grid and assigning finer-scale voxels to regions with higher complexity.
The proposed approach is able to generate high-quality 3D reconstructions with fine details on both synthetic and real-world data.
arXiv Detail & Related papers (2023-04-12T22:39:57Z)
- Advanced Scaling Methods for VNF deployment with Reinforcement Learning [0.0]
Network function virtualization (NFV) and software-defined networking (SDN) have emerged as new network paradigms.
Reinforcement learning (RL)-based approaches have been proposed to optimize virtual network function (VNF) deployment.
In this paper, we propose an enhanced model which can be adapted to more general network settings.
arXiv Detail & Related papers (2023-01-19T21:31:23Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD); a generic sketch of this update appears at the end of this page.
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- An Efficient HTN to STRIPS Encoding for Concurrent Plans [0.0]
We present a new HTN to STRIPS encoding that allows the generation of concurrent plans.
We show experimentally that this encoding outperforms previous approaches on hierarchical IPC benchmarks.
arXiv Detail & Related papers (2022-06-14T18:18:22Z)
- Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous stochastic gradient descent (DASGD).
We show that DASGD converges to a globally optimal model under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z)
- A mechanistic-based data-driven approach to accelerate structural topology optimization through finite element convolutional neural network (FE-CNN) [5.469226380238751]
A mechanistic data-driven approach is proposed to accelerate structural topology optimization.
Our approach can be divided into two stages: offline training, and online optimization.
Numerical examples demonstrate that this approach can accelerate optimization by up to an order of magnitude in computational time.
arXiv Detail & Related papers (2021-06-25T14:11:45Z)
- Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks: specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
arXiv Detail & Related papers (2021-06-07T11:37:03Z)
- A Reinforcement Learning Environment for Polyhedral Optimizations [68.8204255655161]
We propose a shape-agnostic formulation for the space of legal transformations in the polyhedral model as a Markov Decision Process (MDP).
Instead of using transformations, the formulation is based on an abstract space of possible schedules.
Our generic MDP formulation enables using reinforcement learning to learn optimization policies over a wide range of loops.
arXiv Detail & Related papers (2021-04-28T12:41:52Z)
- Jump Operator Planning: Goal-Conditioned Policy Ensembles and Zero-Shot Transfer [71.44215606325005]
We propose a novel framework called Jump-Operator Dynamic Programming for quickly computing solutions within a super-exponential space of sequential sub-goal tasks.
This approach involves controlling an ensemble of reusable goal-conditioned policies functioning as temporally extended actions.
We then identify classes of objective functions on this subspace whose solutions are invariant to the grounding, resulting in optimal zero-shot transfer.
arXiv Detail & Related papers (2020-07-06T05:13:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.
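For the zeroth-order optimization entry above, the following is a generic, minimal sketch of the ZO-signGD update: estimate the gradient from objective values alone via random two-point finite differences, then move the parameters by the sign of the estimate. The quadratic demo objective and all parameter names (step, mu, queries) are assumptions for illustration, not the cited paper's implementation or its molecule-design setup on the Guacamol benchmarks.

```python
# Minimal, generic sketch of zeroth-order sign-based gradient descent (ZO-signGD):
# estimate the gradient from function values only, then step by the sign of the estimate.
import numpy as np

def zo_signgd(f, x0, step=0.05, mu=1e-3, queries=20, iters=200, seed=0):
    """Run ZO-signGD on a black-box objective f, starting from x0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        grad_est = np.zeros_like(x)
        for _ in range(queries):
            u = rng.standard_normal(x.shape)                 # random probe direction
            # two-point finite-difference directional derivative along u
            grad_est += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        grad_est /= queries
        x -= step * np.sign(grad_est)                        # keep only the direction of the estimate
    return x

if __name__ == "__main__":
    target = np.array([1.0, -2.0, 0.5])
    f = lambda x: float(np.sum((x - target) ** 2))           # toy black-box objective
    x_opt = zo_signgd(f, x0=np.zeros(3))
    print(x_opt, f(x_opt))                                   # x_opt should end up close to target
```

The appeal in black-box settings such as molecule scoring is that only function evaluations are needed, and the sign step discards the often-noisy magnitude of the gradient estimate, keeping only its direction.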