Revisiting Landmarks: Learning from Previous Plans to Generalize over Problem Instances
- URL: http://arxiv.org/abs/2508.21564v1
- Date: Fri, 29 Aug 2025 12:21:44 GMT
- Title: Revisiting Landmarks: Learning from Previous Plans to Generalize over Problem Instances
- Authors: Issa Hanou, Sebastijan Dumančić, Mathijs de Weerdt
- Abstract summary: We propose a new framework for discovering landmarks that automatically generalize across a domain. Generalized landmarks capture domain information that is interpretable and useful to an automated planner.
- Score: 4.6071451559137175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new framework for discovering landmarks that automatically generalize across a domain. These generalized landmarks are learned from a set of solved instances and describe intermediate goals for planning problems where traditional landmark extraction algorithms fall short. Our generalized landmarks extend beyond the predicates of a domain by using state functions that are independent of the objects of a specific problem and apply to all similar objects, thus capturing repetition. Based on these functions, we construct a directed generalized landmark graph that defines the landmark progression, including loop possibilities for repetitive subplans. We show how to use this graph in a heuristic to solve new problem instances of the same domain. Our results show that the generalized landmark graphs learned from a few small instances are also effective for larger instances in the same domain. If a loop that indicates repetition is identified, we see a significant improvement in heuristic performance over the baseline. Generalized landmarks capture domain information that is interpretable and useful to an automated planner. This information can be discovered from a small set of plans for the same domain.
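The abstract's core idea, an object-independent landmark graph used as a heuristic, can be illustrated with a minimal sketch. All names and structures below are assumptions for illustration, not the authors' implementation: each node holds an object-independent condition on a state function, and the heuristic simply counts unachieved landmarks (the paper's heuristic additionally exploits the graph's ordering and loop structure).

```python
from dataclasses import dataclass, field

@dataclass
class LandmarkNode:
    """A generalized landmark: a condition on a state function that is
    independent of the specific objects in a problem instance."""
    name: str
    condition: object                                   # state -> bool
    successors: list = field(default_factory=list)      # edges; may form loops

def landmark_heuristic(state, graph_nodes):
    """Count generalized landmarks not yet achieved in `state`.

    A plain counting heuristic for illustration only; the paper's heuristic
    also uses the landmark progression encoded in the directed graph.
    """
    return sum(1 for node in graph_nodes if not node.condition(state))

# Toy Blocksworld-style usage: state functions such as a count of misplaced
# blocks apply to all similar objects, regardless of how many the instance has.
nodes = [
    LandmarkNode("hand_empty", lambda s: s["holding"] is None),
    LandmarkNode("all_placed", lambda s: s["misplaced"] == 0),
]
state = {"holding": None, "misplaced": 3}
print(landmark_heuristic(state, nodes))  # 1: only "all_placed" is unmet
```

Because the conditions quantify over state functions rather than named objects, the same graph can score states from larger instances of the same domain.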
Related papers
- From domain-landmark graph learning to problem-landmark graph generation [0.5199765487172326]
We propose a novel approach that learns landmark relationships from multiple planning tasks of a planning domain. We evaluate the precision and recall of the information found by our approach over well-known planning domains.
arXiv Detail & Related papers (2025-09-21T12:41:56Z)
- Landmark-Based Node Representations for Shortest Path Distance Approximations in Random Graphs [9.290757451344671]
We study the performance of local distance-preserving node embeddings. Known as landmark-based algorithms, these embeddings approximate pairwise distances by computing shortest paths from a small subset of reference nodes called landmarks. Our main theoretical contribution shows that random graphs, such as Erdos-Renyi random graphs, require lower dimensions in landmark-based embeddings compared to worst-case graphs.
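The landmark-based distance approximation described above can be sketched in its generic form (this is an assumed textbook construction, not the paper's exact method): embed each node as its vector of shortest-path distances to k landmarks, then bound d(u, v) via the triangle inequality through each landmark.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from `src` over a weighted adjacency dict."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def landmark_embed(adj, landmarks):
    # One shortest-path table per landmark; node u's embedding is
    # the vector (d(u, l1), ..., d(u, lk)).
    return [dijkstra(adj, l) for l in landmarks]

def approx_distance(tables, u, v):
    # Upper bound d(u, v) <= d(u, l) + d(l, v) for each landmark l.
    return min(t.get(u, float("inf")) + t.get(v, float("inf")) for t in tables)

# Small undirected unit-weight example graph.
adj = {
    "a": [("b", 1), ("c", 1)],
    "b": [("a", 1), ("d", 1)],
    "c": [("a", 1), ("d", 1)],
    "d": [("b", 1), ("c", 1), ("e", 1)],
    "e": [("d", 1)],
}
tables = landmark_embed(adj, ["a", "e"])
print(approx_distance(tables, "b", "c"))  # upper bound; true distance is 2
```

The paper's theoretical result concerns how many landmarks (embedding dimensions) such schemes need; random graphs admit smaller embeddings than worst-case graphs.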
arXiv Detail & Related papers (2025-04-11T02:47:46Z)
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We propose a novel approach to cross-task generalization in graphs via task-trees. We show that pretraining a graph neural network (GNN) on diverse task-trees with a reconstruction objective induces transferable knowledge. This enables efficient adaptation to downstream tasks with minimal fine-tuning.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- A Schema-aware Logic Reformulation for Graph Reachability [1.360022695699485]
We propose a strategy to automatically exclude and sort certain graph paths by exploiting the higher-level conceptualization of instances. The aim is to obtain a new first-order logic reformulation of the graph reachability scenario, capable of improving the traditional algorithms in terms of time, space requirements, and number of backtracks.
arXiv Detail & Related papers (2024-10-03T14:39:49Z)
- Multi-Domain Long-Tailed Learning by Augmenting Disentangled Representations [80.76164484820818]
There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
The proposed method, TALLY, achieves this by building on a selective balanced sampling strategy and mixing the semantic representation of one example with the domain-associated nuisances of another.
arXiv Detail & Related papers (2022-10-25T21:54:26Z)
- Graph Value Iteration [35.87805182676444]
Deep reinforcement learning (RL) has been successful in various search domains, such as two-player games and scientific discovery.
One major difficulty is that, without a human-crafted function, reward signals remain zero unless the learning framework discovers a solution plan.
We propose a domain-independent method that augments graph search with graph value iteration to solve hard planning instances.
arXiv Detail & Related papers (2022-09-20T10:45:03Z)
- Finding Diverse and Predictable Subgraphs for Graph Domain Generalization [88.32356432272356]
This paper focuses on out-of-distribution generalization on graphs where performance drops due to the unseen distribution shift.
We propose a new graph domain generalization framework, dubbed DPS, by constructing multiple populations from the source domains.
Experiments on both node-level and graph-level benchmarks show that the proposed DPS achieves impressive performance on various graph domain generalization tasks.
arXiv Detail & Related papers (2022-06-19T07:57:56Z)
- Multi-Domain Incremental Learning for Semantic Segmentation [42.30646442211311]
We propose a dynamic architecture that assigns universally shared, domain-invariant parameters to capture homogeneous semantic features.
We demonstrate the effectiveness of our proposed solution on domain incremental settings pertaining to real-world driving scenes from roads of Germany (Cityscapes), the United States (BDD100k), and India (IDD).
arXiv Detail & Related papers (2021-10-23T12:21:42Z)
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning will lose certain information about class due to this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- Self-supervised Graph-level Representation Learning with Local and Global Structure [71.45196938842608]
We propose a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning.
Besides preserving the local similarities, GraphLoG introduces the hierarchical prototypes to capture the global semantic clusters.
An efficient online expectation-maximization (EM) algorithm is further developed for learning the model.
arXiv Detail & Related papers (2021-06-08T05:25:38Z)
- Batch Normalization Embeddings for Deep Domain Generalization [50.51405390150066]
Domain generalization aims at training machine learning models to perform robustly across different and unseen domains.
We show a significant increase in classification accuracy over current state-of-the-art techniques on popular domain generalization benchmarks.
arXiv Detail & Related papers (2020-11-25T12:02:57Z)
- Spatial Attention Pyramid Network for Unsupervised Domain Adaptation [66.75008386980869]
Unsupervised domain adaptation is critical in various computer vision tasks.
We design a new spatial attention pyramid network for unsupervised domain adaptation.
Our method performs favorably against the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-29T09:03:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.