Learning Search-Space Specific Heuristics Using Neural Networks
- URL: http://arxiv.org/abs/2306.04019v1
- Date: Tue, 6 Jun 2023 21:22:32 GMT
- Title: Learning Search-Space Specific Heuristics Using Neural Networks
- Authors: Yu Liu and Ryo Kuroiwa and Alex Fukunaga
- Abstract summary: Our system learns distance-to-goal estimators from scratch, given a single PDDL training instance.
We show that this relatively simple system can perform surprisingly well, sometimes competitive with well-known domain-independent heuristics.
- Score: 13.226916009242347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose and evaluate a system which learns a neural network heuristic
function for forward search-based, satisficing classical planning. Our system
learns distance-to-goal estimators from scratch, given a single PDDL training
instance. Training data is generated by backward regression search or by
backward search from given or guessed goal states. In domains such as the
24-puzzle where all instances share the same search space, such heuristics can
also be reused across all instances in the domain. We show that this relatively
simple system can perform surprisingly well, sometimes competitive with
well-known domain-independent heuristics.
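The training-data generation step described in the abstract (backward search from a goal state, labeling each reached state with its distance-to-goal) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the toy domain, function names, and parameters are all assumptions:

```python
# Minimal sketch: label states with their distance to the goal via
# breadth-first backward search, producing (state, distance) training
# pairs that a neural network regressor could then fit as a heuristic.
from collections import deque

def backward_search_samples(goal, predecessors, max_depth=5, limit=1000):
    """BFS backward from the goal; each visited state is labeled with its
    exact backward distance, which serves as a distance-to-goal target."""
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier and len(dist) < limit:
        s = frontier.popleft()
        if dist[s] >= max_depth:
            continue  # stop expanding beyond the sampling horizon
        for p in predecessors(s):
            if p not in dist:
                dist[p] = dist[s] + 1
                frontier.append(p)
    return list(dist.items())

# Toy search space: states are integers, moves add or subtract 1,
# so the true distance-to-goal from state s is |s|.
samples = backward_search_samples(goal=0, predecessors=lambda s: [s - 1, s + 1])
```

In the paper's setting the states would be PDDL states and the predecessor function would come from backward regression over the domain's actions; here both are stand-ins.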
Related papers
- ONNX-Net: Towards Universal Representations and Instant Performance Prediction for Neural Architectures [60.14199724905456]
ONNX-Bench is a benchmark consisting of a collection of neural networks in a unified format based on ONNX files. ONNX-Net represents any neural architecture using natural language descriptions acting as an input to a performance predictor. Experiments show strong zero-shot performance across disparate search spaces using only a small amount of pretraining samples.
arXiv Detail & Related papers (2025-10-06T15:43:36Z)
- Interpreting learned search: finding a transition model and value function in an RNN that plays Sokoban [3.274397973865673]
We partially reverse-engineer a convolutional recurrent neural network (RNN) trained to play the puzzle game Sokoban. Prior work found that this network solves more levels with more test-time compute.
arXiv Detail & Related papers (2025-06-11T19:36:17Z)
- Inference-time Scaling of Diffusion Models through Classical Search [90.77272206228946]
We propose a general framework that orchestrates local and global search to efficiently navigate the generative space. We evaluate our approach on a range of challenging domains, including planning, offline reinforcement learning, and image generation. These results show that classical search provides a principled and practical foundation for inference-time scaling in diffusion models.
arXiv Detail & Related papers (2025-05-29T16:22:40Z)
- BURNS: Backward Underapproximate Reachability for Neural-Feedback-Loop Systems [8.696305200911455]
We introduce an algorithm for computing underapproximate backward reachable sets of nonlinear discrete-time neural feedback loops. We then use the backward reachable sets to check goal-reaching properties. Our work expands the class of properties that can be verified for learning-enabled systems.
arXiv Detail & Related papers (2025-05-06T15:50:43Z)
- Learning a Neural Association Network for Self-supervised Multi-Object Tracking [34.07776597698471]
This paper introduces a novel framework to learn data association for multi-object tracking in a self-supervised manner.
Motivated by the fact that in real-world scenarios object motion can usually be represented by a Markov process, we present a novel expectation-maximization (EM) algorithm that trains a neural network to associate detections for tracking.
We evaluate our approach on the challenging MOT17 and MOT20 datasets and achieve state-of-the-art results in comparison to self-supervised trackers.
arXiv Detail & Related papers (2024-11-18T12:22:29Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Self-Supervised Learning for Covariance Estimation [3.04585143845864]
We propose to globally learn a neural network that will then be applied locally at inference time.
The architecture is based on the popular attention mechanism.
It can be pre-trained as a foundation model and then be repurposed for various downstream tasks, e.g., adaptive target detection in radar or hyperspectral imagery.
arXiv Detail & Related papers (2024-03-13T16:16:20Z)
- Self-Regulated Neurogenesis for Online Data-Incremental Learning [9.254419196812233]
SERENA encodes each concept in a specialized network path called a 'concept cell'. Once a concept is learned, its corresponding concept cell is frozen, effectively preventing the forgetting of previously acquired information. Experimental results show that our method not only establishes new state-of-the-art results across ten benchmarks but also remarkably surpasses offline supervised batch learning performance.
arXiv Detail & Related papers (2024-03-13T13:51:12Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resource constraints.
We aim to go one step further in the search for efficiency by explicitly conceiving the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- Test-time Training for Data-efficient UCDR [22.400837122986175]
The Universal Cross-domain Retrieval (UCDR) protocol is a pioneer in this field.
In this work, we explore the generalized retrieval problem in a data-efficient manner.
arXiv Detail & Related papers (2022-08-19T07:50:04Z)
- Sampling from Pre-Images to Learn Heuristic Functions for Classical Planning [8.000374471991247]
We introduce a new algorithm, Regression-based Supervised Learning (RSL), for learning per-instance Neural Network (NN) heuristic functions for classical planning problems.
RSL outperforms, in terms of coverage, previous classical planning NN heuristics while requiring two orders of magnitude less training time.
arXiv Detail & Related papers (2022-07-07T14:42:31Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS)
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Deep Randomized Neural Networks [12.333836441649343]
Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed.
This chapter surveys all the major aspects regarding the design and analysis of Randomized Neural Networks.
arXiv Detail & Related papers (2020-02-27T17:57:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.