An Efficient Incremental Simple Temporal Network Data Structure for
Temporal Planning
- URL: http://arxiv.org/abs/2212.07226v2
- Date: Fri, 11 Aug 2023 13:59:47 GMT
- Title: An Efficient Incremental Simple Temporal Network Data Structure for
Temporal Planning
- Authors: Andrea Micheli
- Abstract summary: One popular technique for solving temporal planning problems consists in decoupling the causal decisions, delegating them to heuristic search, from the temporal decisions, delegating them to a simple temporal network (STN) solver.
In this paper, we describe in detail how STNs are used in temporal planning, we identify a clear interface to support this use-case, and we present a data structure implementing this interface that is both time- and memory-efficient.
We show that our data structure, called deltastn, is superior to other state-of-the-art approaches on sequences of temporal planning problems.
- Score: 7.835452825434851
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: One popular technique for solving temporal planning problems consists in
decoupling the causal decisions, delegating them to heuristic search, from the
temporal decisions, delegating them to a simple temporal network (STN) solver.
In this architecture, one needs to check the consistency of a series of STNs
that are related to one another; therefore, methods that incrementally re-use
previous computations and avoid expensive memory duplication are of
paramount importance. In this paper, we describe in detail how STNs are used in
temporal planning, we identify a clear interface to support this use-case, and
we present a data structure implementing this interface that is both
time- and memory-efficient. We show that our data structure, called deltastn,
is superior to other state-of-the-art approaches on sequences of temporal
planning problems.
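To make the STN use-case concrete, here is a minimal sketch of an incremental consistency checker. This is an illustrative assumption, not the paper's deltastn implementation: an STN constraint t_v - t_u <= w is an edge u -> v of weight w in a "distance graph", the network is consistent iff that graph has no negative-weight cycle, and each new constraint can be checked and propagated incrementally against the previously computed all-pairs distances instead of re-solving from scratch.

```python
import math

class SimpleSTN:
    """Toy incremental STN (hypothetical sketch, not the paper's deltastn).

    Maintains all-pairs shortest distances of the distance graph and
    tightens only the entries affected by each newly added constraint.
    """

    def __init__(self):
        self.d = {}  # d[i][j] = tightest known bound on t_j - t_i

    def _add_node(self, n):
        if n in self.d:
            return
        for m in self.d:
            self.d[m][n] = math.inf
        self.d[n] = {m: math.inf for m in self.d}
        self.d[n][n] = 0.0

    def add_constraint(self, u, v, w):
        """Add t_v - t_u <= w; return False iff the STN would become inconsistent."""
        self._add_node(u)
        self._add_node(v)
        if self.d[u][v] <= w:
            return True   # constraint already entailed; nothing to propagate
        if self.d[v][u] + w < 0:
            return False  # the new edge would close a negative cycle
        # Incrementally tighten every pair that can route through the new edge.
        for i in self.d:
            for j in self.d:
                via = self.d[i][u] + w + self.d[v][j]
                if via < self.d[i][j]:
                    self.d[i][j] = via
        return True
```

For example, adding `end - start <= 10` and then `end - start >= 5` (i.e. `start - end <= -5`) keeps the network consistent, while subsequently requiring `end - start >= 11` is rejected. A real planner also needs to *retract* constraints cheaply when the search backtracks, without copying the whole network; supporting that efficiently is precisely the interface the paper's deltastn data structure addresses.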
Related papers
- Decision Trees That Remember: Gradient-Based Learning of Recurrent Decision Trees with Memory [1.4487264853431878]
We introduce ReMeDe Trees, a novel recurrent DT architecture that integrates an internal memory mechanism, similar to RNNs, to learn long-term dependencies in sequential data.
Our model learns hard, axis-aligned decision rules for both output generation and state updates, optimizing them efficiently via gradient descent.
arXiv Detail & Related papers (2025-02-06T13:11:50Z)
- Score-matching-based Structure Learning for Temporal Data on Networks [17.166362605356074]
Causal discovery is a crucial initial step in establishing causality from empirical data and background knowledge.
Current score-matching-based algorithms are primarily designed to analyze independent and identically distributed (i.i.d.) data.
We have developed a new parent-finding subroutine for leaf nodes in DAGs, significantly accelerating the most time-consuming part of the process: the pruning step.
arXiv Detail & Related papers (2024-12-10T12:36:35Z)
- Temporal Feature Matters: A Framework for Diffusion Model Quantization [105.3033493564844]
Diffusion models rely on the time-step for the multi-round denoising.
We introduce a novel quantization framework that includes three strategies.
This framework preserves most of the temporal information and ensures high-quality end-to-end generation.
arXiv Detail & Related papers (2024-07-28T17:46:15Z)
- Temporal-aware Hierarchical Mask Classification for Video Semantic Segmentation [62.275143240798236]
Video semantic segmentation dataset has limited categories per video.
Less than 10% of queries could be matched to receive meaningful gradient updates during VSS training.
Our method achieves state-of-the-art performance on the latest challenging VSS benchmark VSPW without bells and whistles.
arXiv Detail & Related papers (2023-09-14T20:31:06Z)
- Self-Supervised Temporal Graph learning with Temporal and Structural Intensity Alignment [53.72873672076391]
Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information.
We propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information.
S2T achieves at most 10.13% performance improvement compared with the state-of-the-art competitors on several datasets.
arXiv Detail & Related papers (2023-02-15T06:36:04Z)
- HUSP-SP: Faster Utility Mining on Sequence Data [48.0426095077918]
High-utility sequential pattern mining (HUSPM) has emerged as an important topic due to its wide application and considerable popularity.
We design a compact structure called sequence projection (seqPro) and propose an efficient algorithm, namely discovering high-utility sequential patterns with the seqPro structure (HUSP-SP).
Experimental results on both synthetic and real-life datasets show that HUSP-SP can significantly outperform the state-of-the-art algorithms in terms of running time, memory usage, search space pruning efficiency, and scalability.
arXiv Detail & Related papers (2022-12-29T10:56:17Z)
- Learning Sequence Representations by Non-local Recurrent Neural Memory [61.65105481899744]
We propose a Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning.
Our model captures long-range dependencies and distills latent high-level features.
Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
arXiv Detail & Related papers (2022-07-20T07:26:15Z)
- Scalable Motif Counting for Large-scale Temporal Graphs [25.90869257290865]
We propose a scalable parallel framework for exactly counting temporal motifs in large-scale temporal graphs.
Based on the proposed counting algorithms, we design a hierarchical parallel framework that features both inter- and intra-node parallel strategies.
Experiments on sixteen real-world temporal graph datasets demonstrate the superiority and capability of our proposed framework.
arXiv Detail & Related papers (2022-04-20T05:41:38Z)
- Efficient Temporal Piecewise-Linear Numeric Planning with Lazy Consistency Checking [4.834203844100679]
We propose a set of techniques that allow the planner to compute LP consistency checks lazily where possible.
We also propose an algorithm to perform duration-dependent goal checking more selectively.
The resultant planner is not only more efficient, but outperforms most state-of-the-art temporal-numeric and hybrid planners.
arXiv Detail & Related papers (2021-05-21T07:36:54Z)
- Supporting Optimal Phase Space Reconstructions Using Neural Network Architecture for Time Series Modeling [68.8204255655161]
We propose an artificial neural network with a mechanism to implicitly learn the phase space's properties.
Our approach is either as competitive as or better than most state-of-the-art strategies.
arXiv Detail & Related papers (2020-06-19T21:04:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.