AutoSTL: Automated Spatio-Temporal Multi-Task Learning
- URL: http://arxiv.org/abs/2304.09174v1
- Date: Sun, 16 Apr 2023 10:03:05 GMT
- Title: AutoSTL: Automated Spatio-Temporal Multi-Task Learning
- Authors: Zijian Zhang, Xiangyu Zhao, Hao Miao, Chunxu Zhang, Hongwei Zhao and
Junbo Zhang
- Abstract summary: We propose a scalable architecture consisting of advanced
spatio-temporal operations to exploit the complicated dependency between tasks.
Our model automatically allocates the operations and fusion weights.
To the best of our knowledge, AutoSTL is the first automated spatio-temporal
multi-task learning method.
- Score: 17.498339023562835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatio-Temporal prediction plays a critical role in smart city construction.
Jointly modeling multiple spatio-temporal tasks can further promote an
intelligent city life by integrating their inseparable relationship. However,
existing studies fail to address this joint learning problem well: they
generally solve tasks individually or for a fixed combination of tasks. The challenges
lie in the tangled relation between different properties, the demand for
supporting flexible combinations of tasks and the complex spatio-temporal
dependency. To cope with the problems above, we propose an Automated
Spatio-Temporal multi-task Learning (AutoSTL) method to handle multiple
spatio-temporal tasks jointly. Firstly, we propose a scalable architecture
consisting of advanced spatio-temporal operations to exploit the complicated
dependency. Shared modules and feature fusion mechanism are incorporated to
further capture the intrinsic relationship between tasks. Furthermore, our
model automatically allocates the operations and fusion weight. Extensive
experiments on benchmark datasets verify that our model achieves
state-of-the-art performance. To the best of our knowledge, AutoSTL is the first
automated spatio-temporal multi-task learning method.
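The abstract describes a shared module plus task-specific paths whose outputs are combined by automatically allocated fusion weights. A minimal numpy sketch of that idea is shown below; all class and variable names here are illustrative assumptions, not the paper's actual architecture or API:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class FusionCell:
    """Mixes the outputs of candidate operations with learned fusion
    weights -- a simplified stand-in for the automated allocation of
    operations and fusion weights described in the abstract."""
    def __init__(self, n_ops):
        # Learnable logits; softmax turns them into fusion weights.
        self.logits = np.zeros(n_ops)

    def forward(self, op_outputs):
        w = softmax(self.logits)
        return sum(wi * out for wi, out in zip(w, op_outputs))

class SharedMultiTaskModel:
    """Shared module feeding task-specific heads, with per-task fusion of
    shared and private features (hypothetical sketch)."""
    def __init__(self, d, n_tasks, seed=0):
        rng = np.random.default_rng(seed)
        self.shared = rng.normal(0, 0.1, (d, d))            # shared module
        self.heads = [rng.normal(0, 0.1, (d, d)) for _ in range(n_tasks)]
        self.fusers = [FusionCell(2) for _ in range(n_tasks)]

    def forward(self, x):
        h_shared = x @ self.shared
        outs = []
        for head, fuser in zip(self.heads, self.fusers):
            h_task = x @ head                                # task-specific path
            outs.append(fuser.forward([h_shared, h_task]))   # fused features
        return outs
```

In practice the fusion logits would be optimized jointly with the task losses, so the model learns how much each task should draw from the shared representation.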
Related papers
- Get Rid of Task Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework [10.33844348594636]
We argue that it is essential to propose a Continuous Multi-task Spatio-temporal learning framework (CMuST) to empower collective urban intelligence.
CMuST reforms urban spatio-temporal learning from single-domain to cooperative multi-task learning.
We establish a benchmark of three cities for multi-task spatio-temporal learning, and empirically demonstrate the superiority of CMuST.
arXiv Detail & Related papers (2024-10-14T14:04:36Z) - Do Large Language Models Have Compositional Ability? An Investigation into Limitations and Scalability [12.349247962800813]
Large language models (LLMs) have emerged as powerful tools for many AI problems.
They exhibit remarkable in-context learning (ICL) capabilities.
How they approach composite tasks remains an open and largely underexplored question.
arXiv Detail & Related papers (2024-07-22T15:22:34Z) - ReST meets ReAct: Self-Improvement for Multi-Step Reasoning LLM Agent [50.508669199496474]
We develop a ReAct-style LLM agent with the ability to reason and act upon external knowledge.
We refine the agent through a ReST-like method that iteratively trains on previous trajectories.
Starting from a prompted large model and after just two iterations of the algorithm, we can produce a fine-tuned small model.
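The ReST-like refinement described above can be sketched as a loop that samples trajectories from the current model, keeps the high-reward ones, and retrains on them. The helper names (`generate`, `reward`, `finetune`) are hypothetical placeholders, not the paper's interfaces:

```python
import random

def rest_like_loop(model, generate, reward, finetune, iterations=2, seed=0):
    """Sketch of a ReST-style self-improvement loop: sample trajectories,
    filter by reward, fine-tune on the survivors, repeat."""
    rng = random.Random(seed)
    for _ in range(iterations):
        trajectories = [generate(model, rng) for _ in range(8)]
        good = [t for t in trajectories if reward(t) > 0.5]  # keep high-reward
        model = finetune(model, good)                        # train on survivors
    return model
```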
arXiv Detail & Related papers (2023-12-15T18:20:15Z) - JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for
Multi-task Mathematical Problem Solving [77.51817534090789]
We propose JiuZhang 2.0, a unified Chinese PLM specialized for multi-task mathematical problem solving.
Our idea is to maintain a moderate-sized model and employ cross-task knowledge sharing to improve the model capacity in a multi-task setting.
arXiv Detail & Related papers (2023-06-19T15:45:36Z) - Musketeer: Joint Training for Multi-task Vision Language Model with Task Explanation Prompts [75.75548749888029]
We present a vision-language model whose parameters are jointly trained on all tasks and fully shared among multiple heterogeneous tasks.
With a single model, Musketeer achieves results comparable to or better than strong baselines trained on single tasks, almost uniformly across multiple tasks.
arXiv Detail & Related papers (2023-05-11T17:57:49Z) - An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale
Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State-of-the-art ML models rely on high customization for each task and leverage size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large scale multitask model and can support the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z) - Controllable Dynamic Multi-Task Architectures [92.74372912009127]
We propose a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints.
We propose a disentangled training of two hypernetworks, by exploiting task affinity and a novel branching regularized loss, to take input preferences and accordingly predict tree-structured models with adapted weights.
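The core mechanism in this summary is a hypernetwork: a network that takes a task-preference vector as input and emits the weights of a target model. A toy numpy version is below; the class and dimensions are illustrative assumptions, not the paper's design:

```python
import numpy as np

class HyperNet:
    """Toy hypernetwork: maps a task-preference vector to the weights of a
    small linear target model (hypothetical sketch)."""
    def __init__(self, pref_dim, target_in, target_out, seed=0):
        rng = np.random.default_rng(seed)
        # Linear map from preferences to a flattened target weight matrix.
        self.W = rng.normal(0, 0.1, (pref_dim, target_in * target_out))
        self.shape = (target_in, target_out)

    def weights_for(self, pref):
        # Predict and reshape the target model's weights.
        return (pref @ self.W).reshape(self.shape)

    def predict(self, pref, x):
        # Run the target model whose weights were generated for `pref`.
        return x @ self.weights_for(pref)
```

Changing the preference vector changes the generated weights, which is what lets a single trained hypernetwork serve a whole family of task trade-offs.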
arXiv Detail & Related papers (2022-03-28T17:56:40Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
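The experience-replay component mentioned in the last summary can be illustrated with a minimal buffer: store past samples up to a capacity and draw random minibatches from them during later training. This is a generic sketch, not the paper's implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience-replay buffer for preserving shared skills
    across tasks (illustrative sketch)."""
    def __init__(self, capacity):
        # deque with maxlen evicts the oldest sample once full.
        self.buf = deque(maxlen=capacity)

    def add(self, sample):
        self.buf.append(sample)

    def sample(self, k, seed=None):
        # Draw up to k samples uniformly without replacement.
        rng = random.Random(seed)
        return rng.sample(list(self.buf), min(k, len(self.buf)))
```

During continual learning, batches from this buffer would be mixed into each new task's training data so earlier skills keep receiving gradient signal.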
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.