Instructing the Architecture Search for Spatial-temporal Sequence Forecasting with LLM
- URL: http://arxiv.org/abs/2503.17994v1
- Date: Sun, 23 Mar 2025 08:59:04 GMT
- Title: Instructing the Architecture Search for Spatial-temporal Sequence Forecasting with LLM
- Authors: Xin Xue, Haoyi Zhou, Tianyu Chen, Shuai Zhang, Yizhou Long, Jianxin Li
- Abstract summary: We propose a novel NAS method for STSF based on large language models (LLMs). Our method can achieve competitive effectiveness with superior efficiency against existing NAS methods for STSF.
- Score: 18.649295352998546
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spatial-temporal sequence forecasting (STSF) is a long-standing research problem with widespread real-world applications. Neural architecture search (NAS), which automates neural network design, has been shown effective in tackling the STSF problem. However, existing NAS methods for STSF focus on generating architectures in a time-consuming data-driven fashion, which heavily limits their ability to use background knowledge and explore the complicated search trajectory. Large language models (LLMs) have shown remarkable decision-making ability backed by comprehensive internal world knowledge, but how they could benefit NAS for STSF remains unexplored. In this paper, we propose a novel NAS method for STSF based on LLMs. Instead of directly generating architectures with the LLM, we elicit the LLM's capability with a multi-level enhancement mechanism. Specifically, at the step level, we decompose the generation task into decision steps with careful prompt engineering and guide the LLM to serve as an instructor for architecture search based on its internal knowledge. At the instance level, we utilize a one-step tuning framework to quickly evaluate each architecture instance and a memory bank to accumulate knowledge and improve the LLM's search ability. At the task level, we propose a two-stage architecture search that balances an exploration stage against an optimization stage, reducing the possibility of being trapped in local optima. Extensive experimental results demonstrate that our method achieves competitive effectiveness with superior efficiency against existing NAS methods for STSF.
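As a rough illustration of how the three levels described above could fit together, the sketch below wires a step-level decision prompt, an instance-level proxy evaluation with a memory bank, and a task-level two-stage loop. The search space, the `ask_llm` stub, and the `one_step_tune` scorer are all placeholders for this sketch, not the authors' implementation.

```python
import random

# Hypothetical STSF search space: one choice per architectural slot.
SEARCH_SPACE = {
    "spatial_op": ["gcn", "gat", "cheb"],
    "temporal_op": ["tcn", "gru", "attention"],
    "fusion": ["add", "gate", "concat"],
}

def ask_llm(prompt, options):
    """Placeholder for a chat-model call; picks randomly so the sketch
    runs offline. In practice this would query the instructing LLM."""
    return random.choice(options)

def one_step_tune(arch):
    """Stand-in for one-step tuning: a cheap proxy score instead of
    full training. Random here, for illustration only."""
    return random.random()

def search(n_explore=5, n_optimize=5):
    memory_bank = []  # (architecture, score) pairs accumulated over time

    def propose(stage):
        arch = {}
        for slot, options in SEARCH_SPACE.items():
            # Step-level decomposition: one LLM decision per slot, with
            # recent memory-bank entries summarized into the prompt.
            history = "; ".join(f"{a} -> {s:.2f}" for a, s in memory_bank[-5:])
            prompt = (f"Stage: {stage}. Past trials: {history or 'none'}. "
                      f"Pick a {slot} from {options} for STSF.")
            arch[slot] = ask_llm(prompt, options)
        return arch

    # Task-level two-stage search: broad exploration, then optimization.
    for stage, budget in (("exploration", n_explore), ("optimization", n_optimize)):
        for _ in range(budget):
            arch = propose(stage)
            score = one_step_tune(arch)  # instance-level quick evaluation
            memory_bank.append((str(arch), score))

    return max(memory_bank, key=lambda t: t[1])

print(search())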
Related papers
- Federated Neural Architecture Search with Model-Agnostic Meta Learning [7.542593703407386]
Federated Neural Architecture Search (NAS) enables collaborative search for optimal model architectures tailored to heterogeneous data to achieve higher accuracy.
We introduce FedMetaNAS, a framework that integrates meta-learning with NAS within the Federated Learning context.
We show that FedMetaNAS significantly accelerates the search process by more than 50% with higher accuracy compared to FedNAS.
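As a hedged illustration of how meta-learning and federated aggregation might interact in such a framework, the toy below runs a MAML-style inner/meta step per client on DARTS-style architecture weights and averages the results server-side. The gradients are random stand-ins since no model or data is attached; this is not FedMetaNAS's actual algorithm.

```python
import numpy as np

# Toy setup: each client refines shared architecture weights with a
# MAML-style inner/meta step; the server averages them (FedAvg).
N_OPS, N_CLIENTS, ROUNDS = 4, 3, 10
rng = np.random.default_rng(0)

def local_meta_step(alpha, inner_lr=0.1, meta_lr=0.05):
    g_support = rng.normal(size=alpha.shape)    # "support set" gradient
    adapted = alpha - inner_lr * g_support      # inner adaptation
    g_query = rng.normal(size=adapted.shape)    # "query" gradient at adapted point
    return alpha - meta_lr * g_query            # first-order meta update

alpha_global = np.zeros(N_OPS)
for _ in range(ROUNDS):
    updates = [local_meta_step(alpha_global.copy()) for _ in range(N_CLIENTS)]
    alpha_global = np.mean(updates, axis=0)     # FedAvg aggregation

print("preferred op index:", int(np.argmax(alpha_global)))
```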
arXiv Detail & Related papers (2025-04-08T21:57:40Z) - R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning [87.30285670315334]
R1-Searcher is a novel two-stage outcome-based RL approach designed to enhance the search capabilities of Large Language Models.
Our framework relies exclusively on RL, without requiring process rewards or distillation for a cold start.
Our experiments demonstrate that our method significantly outperforms previous strong RAG methods, even when compared to the closed-source GPT-4o-mini.
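Outcome-based rewards of this kind can be stated compactly. The sketch below scores a rollout only on its final answer and basic format, with no per-step process reward; the `<answer>` tag convention is an assumption for illustration, not R1-Searcher's exact reward function.

```python
import re

def outcome_reward(rollout, gold_answer):
    """Score a full rollout by its final answer only (outcome-based):
    no per-step process reward. Tag format is an assumed convention."""
    m = re.search(r"<answer>(.*?)</answer>", rollout, re.S)
    if m is None:
        return -1.0  # malformed output: small format penalty
    return 1.0 if m.group(1).strip() == gold_answer.strip() else 0.0

print(outcome_reward("search... reason... <answer>Paris</answer>", "Paris"))  # 1.0
print(outcome_reward("no final tag emitted", "Paris"))                        # -1.0
```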
arXiv Detail & Related papers (2025-03-07T17:14:44Z) - SEKI: Self-Evolution and Knowledge Inspiration based Neural Architecture Search via Large Language Models [11.670056503731905]
We introduce SEKI, a novel large language model (LLM)-based neural architecture search (NAS) method. Inspired by the chain-of-thought (CoT) paradigm in modern LLMs, SEKI operates in two key stages: self-evolution and knowledge distillation.
arXiv Detail & Related papers (2025-02-27T09:17:49Z) - NADER: Neural Architecture Design via Multi-Agent Collaboration [37.48197934228379]
We introduce NADER, a novel framework that formulates neural architecture design (NAD) as a multi-agent collaboration problem. We propose the Reflector, which effectively learns from immediate feedback and long-term experiences. Unlike previous LLM-based methods that use code to represent neural architectures, we utilize a graph-based representation.
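A minimal example of what a graph-based (rather than code-based) architecture representation might look like, with a validity check an agent could run before building; the node and operation names are illustrative, not NADER's actual schema.

```python
from collections import defaultdict

# A candidate architecture as a DAG: nodes are operations, edges are
# tensor flow between them.
arch = {
    "nodes": {0: "input", 1: "conv3x3", 2: "conv1x1", 3: "add", 4: "output"},
    "edges": [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)],
}

def is_buildable(arch):
    """Kahn's algorithm: the graph must be acyclic to compile to a network."""
    indeg = {n: 0 for n in arch["nodes"]}
    adj = defaultdict(list)
    for u, v in arch["edges"]:
        adj[u].append(v)
        indeg[v] += 1
    frontier = [n for n, d in indeg.items() if d == 0]
    visited = 0
    while frontier:
        u = frontier.pop()
        visited += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                frontier.append(v)
    return visited == len(arch["nodes"])

print(is_buildable(arch))  # True: a valid DAG a builder could realize
```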
arXiv Detail & Related papers (2024-12-26T13:07:03Z) - Design Principle Transfer in Neural Architecture Search via Large Language Models [37.004026595537006]
Transferable neural architecture search (TNAS) has been introduced to design efficient neural architectures for multiple tasks.
In TNAS, architectural knowledge accumulated in previous search processes is reused to warm up the architecture search for new tasks.
This work proposes a novel transfer paradigm, i.e., design principle transfer.
arXiv Detail & Related papers (2024-08-21T04:27:44Z) - Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance as well as improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z) - Structure-aware Domain Knowledge Injection for Large Language Models [38.08691252042949]
StructTuning is a methodology to transform Large Language Models (LLMs) into domain specialists.
It reduces the required training corpus to a mere 5% while achieving 100% of traditional knowledge injection performance.
arXiv Detail & Related papers (2024-07-23T12:38:48Z) - Efficient Prompting for LLM-based Generative Internet of Things [88.84327500311464]
Large language models (LLMs) have demonstrated remarkable capacities on various tasks, and integrating the capacities of LLMs into the Internet of Things (IoT) applications has drawn much research attention recently.
Due to security concerns, many institutions avoid accessing state-of-the-art commercial LLM services, requiring the deployment and utilization of open-source LLMs in a local network setting.
In this study, we propose an LLM-based Generative IoT (GIoT) system deployed in a local network setting.
arXiv Detail & Related papers (2024-06-14T19:24:00Z) - LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also by introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
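The roofline model itself is compact: attainable throughput is the minimum of the compute roof and memory bandwidth times arithmetic intensity. The sketch below uses made-up hardware numbers (not figures from the survey) to show why LLM decoding, at roughly 1-2 FLOPs per byte, sits deep in the memory-bound region.

```python
# Assumed hardware numbers, for illustration only.
PEAK_TFLOPS = 312.0     # peak dense compute of a hypothetical accelerator
BANDWIDTH_TBPS = 2.0    # memory bandwidth in TB/s

def attainable_tflops(arithmetic_intensity):
    """Roofline: min(compute roof, bandwidth * FLOPs-per-byte)."""
    return min(PEAK_TFLOPS, BANDWIDTH_TBPS * arithmetic_intensity)

# Decode-phase LLM inference reads each weight roughly once per token,
# so its arithmetic intensity is only a few FLOPs/byte: memory-bound.
for ai in (1, 2, 156, 500):  # 156 = PEAK/BW is the ridge point here
    print(f"AI={ai:>3} FLOP/byte -> {attainable_tflops(ai):6.1f} TFLOP/s")
```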
arXiv Detail & Related papers (2024-02-26T07:33:05Z) - LLMatic: Neural Architecture Search via Large Language Models and Quality Diversity Optimization [4.951599300340954]
Large Language Models (LLMs) have emerged as powerful tools capable of accomplishing a broad spectrum of tasks.
We propose using the coding abilities of LLMs to introduce meaningful variations to code defining neural networks.
By merging the code-generating abilities of LLMs with the diversity and robustness of QD solutions, we introduce LLMatic, a Neural Architecture Search (NAS) algorithm.
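A hedged sketch of the quality-diversity side: a MAP-Elites-style archive keeps the best elite per behavioral niche while an LLM-driven mutation proposes variants. The mutation stub edits a config dict instead of real network code, and fitness is random; this shows the loop structure, not LLMatic itself.

```python
import random

def llm_mutate(config):
    """Placeholder for prompting an LLM to vary network-defining code;
    here it just perturbs a config dict."""
    new = dict(config)
    key = random.choice(list(new))
    step = 64 if key == "width" else 1
    new[key] = max(step, new[key] + random.choice([-step, step]))
    return new

def evaluate(config):
    fitness = random.random()                         # stand-in for val accuracy
    niche = (config["depth"], config["width"] // 64)  # behavior descriptor
    return fitness, niche

archive = {}                                          # niche -> (fitness, config)
seed = {"depth": 4, "width": 256}
archive[evaluate(seed)[1]] = (0.0, seed)

for _ in range(200):
    _, parent = random.choice(list(archive.values()))
    child = llm_mutate(parent)
    fitness, niche = evaluate(child)
    if niche not in archive or fitness > archive[niche][0]:
        archive[niche] = (fitness, child)             # keep the best elite per niche

print(len(archive), "niches filled; best fitness",
      round(max(f for f, _ in archive.values()), 2))
```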
arXiv Detail & Related papers (2023-06-01T19:33:21Z) - Elastic Architecture Search for Diverse Tasks with Different Resources [87.23061200971912]
We study a new and challenging problem of efficient deployment for diverse tasks with different resources, where the resource constraint and the task of interest, corresponding to a group of classes, are dynamically specified at test time.
Previous NAS approaches seek to design architectures for all classes simultaneously, which may not be optimal for some individual tasks.
We present a novel and general framework, called Elastic Architecture Search (EAS), permitting instant specializations at runtime for diverse tasks with various resource constraints.
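At its simplest, instant specialization can be read as constrained selection over pre-trained sub-networks, as in the toy below; the cost and accuracy numbers are invented, and EAS's actual mechanism is richer than a table lookup.

```python
# Invented (name, MFLOPs, accuracy) triples for pre-trained sub-networks.
SUBNETS = [("tiny", 50, 0.71), ("small", 120, 0.75),
           ("base", 300, 0.79), ("large", 800, 0.82)]

def specialize(budget_mflops):
    """Return the most accurate sub-network that fits the test-time budget."""
    feasible = [s for s in SUBNETS if s[1] <= budget_mflops]
    return max(feasible, key=lambda s: s[2]) if feasible else None

print(specialize(350))   # ('base', 300, 0.79)
print(specialize(1000))  # ('large', 800, 0.82)
```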
arXiv Detail & Related papers (2021-08-03T00:54:27Z) - CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It is also capable of handling cross-domain architecture search as competitive networks on ImageNet, COCO, and Cityscapes are identified.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.