Dynamic Perturbed Adaptive Method for Infinite Task-Conflicting Time Series
- URL: http://arxiv.org/abs/2505.11902v1
- Date: Sat, 17 May 2025 08:33:57 GMT
- Title: Dynamic Perturbed Adaptive Method for Infinite Task-Conflicting Time Series
- Authors: Jiang You, Xiaozhen Wang, Arben Cela
- Abstract summary: We formulate time series tasks as input-output mappings under varying objectives, where the same input may yield different outputs. To study this, we construct a synthetic dataset with numerous conflicting subtasks to evaluate adaptation under frequent task shifts. We propose a dynamic perturbed adaptive method based on a trunk-branch architecture, where the trunk evolves slowly to capture long-term structure.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formulate time series tasks as input-output mappings under varying objectives, where the same input may yield different outputs. This challenges a model's generalization and adaptability. To study this, we construct a synthetic dataset with numerous conflicting subtasks to evaluate adaptation under frequent task shifts. Existing static models consistently fail in such settings. We propose a dynamic perturbed adaptive method based on a trunk-branch architecture, where the trunk evolves slowly to capture long-term structure, and branch modules are re-initialized and updated for each task. This enables continual test-time adaptation and cross-task transfer without relying on explicit task labels. Theoretically, we show that this architecture has strictly higher functional expressivity than static models and LoRA. We also establish exponential convergence of branch adaptation under the Polyak-Lojasiewicz condition. Experiments demonstrate that our method significantly outperforms competitive baselines in complex and conflicting task environments, exhibiting fast adaptation and progressive learning capabilities.
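The trunk-branch idea from the abstract can be sketched in a few lines. This is a minimal illustration with invented dimensions, a plain linear trunk (kept fixed here for brevity), and a LoRA-style low-rank branch that is re-initialized for every incoming task and adapted by gradient descent at test time; it is not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, rank = 4, 2

def init_branch():
    # LoRA-style init: the branch starts as a zero perturbation of the trunk
    return np.zeros((dim, rank)), rng.normal(scale=0.5, size=(rank, dim))

def forward(x, trunk, branch):
    a, b = branch
    return x @ trunk + x @ a @ b  # slow trunk output + fast task-specific branch

trunk = np.eye(dim)  # slowly-evolving shared trunk (frozen in this sketch)

def adapt(target, steps=1000, lr=0.1):
    """Re-initialize the branch and adapt it to one task at test time."""
    branch = init_branch()
    x = rng.normal(size=(64, dim))
    for _ in range(steps):
        a, b = branch
        err = forward(x, trunk, branch) - x @ target
        # gradient steps on the branch only; the trunk is left untouched
        branch = (a - lr * x.T @ err @ b.T / len(x),
                  b - lr * (x @ a).T @ err / len(x))
    return float(np.mean((forward(x, trunk, branch) - x @ target) ** 2))

# Two conflicting tasks: the same input distribution, different target mappings.
for _ in range(2):
    delta = rng.normal(scale=0.5, size=(dim, rank)) @ rng.normal(scale=0.5, size=(rank, dim))
    mse = adapt(np.eye(dim) + delta)  # fresh branch per task, trunk shared
```

Because each task's branch starts from scratch, conflicting objectives never fight over the same fast parameters, which mirrors the paper's motivation for per-task branch re-initialization.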
Related papers
- DMSC: Dynamic Multi-Scale Coordination Framework for Time Series Forecasting [14.176801586961286]
Time Series Forecasting (TSF) faces persistent challenges in modeling intricate temporal dependencies across different scales. We propose a novel Dynamic Multi-Scale Coordination Framework (DMSC) with a Multi-Scale Patch Decomposition block (EMPD), a Triad Interaction Block (TIB) and an Adaptive Scale Routing MoE block (ASR-MoE). EMPD is designed as a built-in component to dynamically segment sequences into hierarchical patches with exponentially scaled granularities. TIB then jointly models intra-patch, inter-patch, and cross-variable dependencies within each layer's decomposed representations.
arXiv Detail & Related papers (2025-08-03T13:11:52Z)
- Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction [55.914891182214475]
We introduce neural network reprogrammability as a unifying framework for model adaptation. We present a taxonomy that categorizes such information manipulation approaches across four key dimensions. We also analyze remaining technical challenges and ethical considerations.
arXiv Detail & Related papers (2025-06-05T05:42:27Z)
- MINGLE: Mixtures of Null-Space Gated Low-Rank Experts for Test-Time Continual Model Merging [19.916880222546155]
Continual model merging integrates independently fine-tuned models sequentially without access to original training data. MINGLE is a novel framework for test-time continual model merging using a small set of unlabeled test samples. MINGLE consistently surpasses previous state-of-the-art methods by 7-9% on average across diverse task orders.
arXiv Detail & Related papers (2025-05-17T07:24:22Z)
- UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce UniSTD, a unified Transformer-based framework for spatio-temporal modeling. Our work demonstrates that a task-specific vision-text approach can build a generalizable model for spatio-temporal learning. We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z)
- Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for Federated Continual Learning [49.508844889242425]
We propose a novel server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with adaptive model recalibration (FedDAH). FedDAH is designed to facilitate collaborative learning under the distinct and dynamic task streams across clients. For the biased optimization, we introduce a novel adaptive model recalibration (AMR) to incorporate the candidate changes of historical models into current server updates.
arXiv Detail & Related papers (2025-03-25T00:17:47Z)
- Dynamic Allocation Hypernetwork with Adaptive Model Recalibration for FCL [49.508844889242425]
We propose a novel server-side FCL pattern in the medical domain, Dynamic Allocation Hypernetwork with adaptive model recalibration (FedDAH). For the biased optimization, we introduce a novel adaptive model recalibration (AMR) to incorporate the candidate changes of historical models into current server updates. Experiments on the AMOS dataset demonstrate the superiority of our FedDAH over other FCL methods on sites with different task streams.
arXiv Detail & Related papers (2025-03-23T13:12:56Z)
- Model Evolution Framework with Genetic Algorithm for Multi-Task Reinforcement Learning [85.91908329457081]
Multi-task reinforcement learning employs a single policy to complete various tasks, aiming to develop an agent with generalizability across different scenarios. Existing approaches typically use a routing network to generate specific routes for each task and reconstruct a set of modules into diverse models to complete multiple tasks simultaneously. We propose a Model Evolution framework with Genetic Algorithm (MEGA), which enables the model to evolve during training according to the difficulty of the tasks.
arXiv Detail & Related papers (2025-02-19T09:22:34Z)
- An Effective-Efficient Approach for Dense Multi-Label Action Detection [23.100602876056165]
It is necessary to simultaneously learn (i) temporal dependencies and (ii) co-occurrence action relationships.
Recent approaches model temporal information by extracting multi-scale features through hierarchical transformer-based networks.
We argue that combining this with multiple sub-sampling processes in hierarchical designs can lead to further loss of positional information.
arXiv Detail & Related papers (2024-06-10T11:33:34Z)
- TACTiS-2: Better, Faster, Simpler Attentional Copulas for Multivariate Time Series [57.4208255711412]
Building on copula theory, we propose a simplified objective for the recently-introduced transformer-based attentional copulas (TACTiS).
We show that the resulting model has significantly better training dynamics and achieves state-of-the-art performance across diverse real-world forecasting tasks.
arXiv Detail & Related papers (2023-10-02T16:45:19Z)
- Exposing and Addressing Cross-Task Inconsistency in Unified Vision-Language Models [80.23791222509644]
Inconsistent AI models are considered brittle and untrustworthy by human users.
We find that state-of-the-art vision-language models suffer from a surprisingly high degree of inconsistent behavior across tasks.
We propose a rank correlation-based auxiliary training objective, computed over large automatically created cross-task contrast sets.
arXiv Detail & Related papers (2023-03-28T16:57:12Z)
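The rank-correlation idea in the last entry can be made concrete with a toy Spearman computation over a small contrast set. Note that the scores below are invented for illustration, and plain Spearman via `argsort` is non-differentiable; a training objective like the one the paper describes would need a differentiable surrogate.

```python
import numpy as np

def rank_correlation(scores_a, scores_b):
    """Spearman rank correlation between two score lists (assumes no ties)."""
    ra = np.argsort(np.argsort(scores_a))  # rank of each item under model/task A
    rb = np.argsort(np.argsort(scores_b))  # rank of each item under model/task B
    n = len(ra)
    return 1 - 6 * np.sum((ra - rb) ** 2) / (n * (n ** 2 - 1))

# A contrast set: the same items scored under two tasks of one model.
# A cross-task-consistent model ranks the items identically under both.
vqa_scores     = [0.9, 0.2, 0.7, 0.4]
caption_scores = [0.8, 0.1, 0.6, 0.3]   # same ordering as vqa_scores
aux_loss = 1 - rank_correlation(vqa_scores, caption_scores)  # 0 when consistent
```

A fully reversed ranking gives correlation -1 and hence the maximal auxiliary penalty, so minimizing `aux_loss` pushes the two task heads toward agreeing on relative item ordering.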
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.