SynEVO: A neuro-inspired spatiotemporal evolutional framework for cross-domain adaptation
- URL: http://arxiv.org/abs/2505.16080v1
- Date: Wed, 21 May 2025 23:45:51 GMT
- Title: SynEVO: A neuro-inspired spatiotemporal evolutional framework for cross-domain adaptation
- Authors: Jiayue Liu, Zhongchao Yi, Zhengyang Zhou, Qihe Huang, Kuo Yang, Xu Wang, Yang Wang
- Abstract summary: The key towards increasing cross-domain knowledge is to enable collective intelligence and model evolution. We propose a Synaptic EVOlutional spatiotemporal network, SynEVO, which breaks model independence and enables cross-domain knowledge to be shared and aggregated. Experiments show that SynEVO improves generalization by up to 42% under cross-domain scenarios.
- Score: 12.965961860022427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discovering regularities from spatiotemporal systems can benefit various scientific and social planning tasks. Current spatiotemporal learners usually train an independent model on data from a specific source, which leads to limited transferability among sources: even correlated tasks require a new design and new training. The key towards increasing cross-domain knowledge is to enable collective intelligence and model evolution. In this paper, inspired by neuroscience theories, we theoretically derive the increased information boundary achievable by learning cross-domain collective intelligence and propose a Synaptic EVOlutional spatiotemporal network, SynEVO, which breaks model independence and enables cross-domain knowledge to be shared and aggregated. Specifically, we first re-order the sample groups to imitate human curriculum learning, and devise two complementary learners, an elastic common container and a task-independent extractor, to allow model growth and to disentangle task-wise commonality from personality. Then an adaptive dynamic coupler with a new difference metric determines whether a new sample group should be incorporated into the common container to achieve model evolution under various domains. Experiments show that SynEVO improves generalization capacity by up to 42% under cross-domain scenarios, and SynEVO provides a paradigm of NeuroAI for knowledge transfer and adaptation.
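The abstract's central mechanism, an adaptive dynamic coupler that decides whether each incoming sample group joins the shared common container, can be illustrated with a minimal sketch. The difference metric below (cosine distance between mean group embeddings) and the threshold are hypothetical stand-ins; the paper's actual metric and container structure are not specified here.

```python
import torch
import torch.nn.functional as F

def group_difference(common_feats: torch.Tensor, group_feats: torch.Tensor) -> float:
    """Toy difference metric: cosine distance between mean embeddings."""
    c = common_feats.mean(dim=0)
    g = group_feats.mean(dim=0)
    return float(1.0 - F.cosine_similarity(c, g, dim=0))

def couple(common_feats, group_feats, threshold=0.3):
    """Adaptive dynamic coupler (sketch): absorb the new sample group into
    the common container only when it is sufficiently similar."""
    if group_difference(common_feats, group_feats) < threshold:
        # evolve the common container with the new group
        return torch.cat([common_feats, group_feats], dim=0), True
    # otherwise keep the group with its task-specific extractor
    return common_feats, False
```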
Related papers
- Confounder-Free Continual Learning via Recursive Feature Normalization [8.644711503479988]
Confounders are extraneous variables that affect both the input and the target, resulting in spurious correlations and biased predictions. We introduce the Recursive MDN layer, which can be integrated into any deep learning architecture.
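The summary describes a recursive normalization layer that removes confounder effects from intermediate features. One plausible reading is a recursive least-squares residualization, sketched below; the class name, update rule, and shapes are assumptions for illustration, not the paper's Recursive MDN layer itself.

```python
import numpy as np

class RecursiveResidualizer:
    """Sketch: maintain a recursive least-squares estimate of features from
    confounders and emit the residuals (hypothetical simplification)."""
    def __init__(self, n_confounders, n_features, lam=1e3):
        self.P = np.eye(n_confounders) * lam          # inverse covariance estimate
        self.beta = np.zeros((n_confounders, n_features))

    def step(self, c, f):
        # c: (n_confounders,) confounder vector, f: (n_features,) feature vector
        Pc = self.P @ c
        k = Pc / (1.0 + c @ Pc)                       # Kalman-style gain
        self.beta += np.outer(k, f - c @ self.beta)   # update regression of f on c
        self.P -= np.outer(k, Pc)                     # rank-one downdate
        return f - c @ self.beta                      # confounder-free residual
```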
arXiv Detail & Related papers (2025-07-11T21:25:31Z)
- UniSTD: Towards Unified Spatio-Temporal Learning across Diverse Disciplines [64.84631333071728]
We introduce UniSTD, a unified Transformer-based framework for spatiotemporal modeling. Our work demonstrates that a task-specific vision-text model can build a generalizable model for spatiotemporal learning. We also introduce a temporal module to incorporate temporal dynamics explicitly.
arXiv Detail & Related papers (2025-03-26T17:33:23Z)
- Neuron: Learning Context-Aware Evolving Representations for Zero-Shot Skeleton Action Recognition [64.56321246196859]
We propose a novel dyNamically Evolving dUal skeleton-semantic syneRgistic framework. We first construct spatial-temporal evolving micro-prototypes and integrate dynamic context-aware side information. We introduce spatial compression and temporal memory mechanisms to guide the growth of the spatial-temporal micro-prototypes.
arXiv Detail & Related papers (2024-11-18T05:16:11Z)
- RIGL: A Unified Reciprocal Approach for Tracing the Independent and Group Learning Processes [22.379764500005503]
We propose RIGL, a unified Reciprocal model to trace knowledge states at both the individual and group levels.
In this paper, we introduce a time frame-aware reciprocal embedding module to concurrently model both student and group response interactions.
We design a relation-guided temporal attentive network, comprising dynamic graph modeling coupled with a temporal self-attention mechanism.
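A temporal self-attention block over per-frame knowledge states, one ingredient the summary names, might look like the following sketch. The causal masking and module shape are generic assumptions, and the relation-guided graph component is omitted.

```python
import torch
import torch.nn as nn

class TemporalSelfAttention(nn.Module):
    """Generic causal self-attention over a sequence of per-frame states;
    a hypothetical stand-in, not RIGL's exact module."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, h):                       # h: (batch, time, dim)
        t = h.size(1)
        # boolean mask: True entries are blocked, so each frame sees only the past
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=h.device), 1)
        out, _ = self.attn(h, h, h, attn_mask=mask)
        return self.norm(h + out)               # residual + layer norm
```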
arXiv Detail & Related papers (2024-06-18T10:16:18Z)
- Learning Divergence Fields for Shift-Robust Graph Representations [73.11818515795761]
In this work, we propose a geometric diffusion model with learnable divergence fields for the challenging problem of learning with interdependent data.
We derive a new learning objective through causal inference, which can guide the model to learn generalizable patterns of interdependence that are insensitive to domain shifts.
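One toy reading of a "learnable divergence field" is a per-node coefficient controlling how strongly each node exchanges information with its neighbors during diffusion. The sketch below is a hypothetical single diffusion step, not the paper's model.

```python
import torch
import torch.nn as nn

class DivergenceDiffusionStep(nn.Module):
    """One graph-diffusion step with a learnable per-node divergence field
    (an illustrative reading, not the paper's architecture)."""
    def __init__(self, n_nodes):
        super().__init__()
        self.div = nn.Parameter(torch.zeros(n_nodes))   # learnable divergence field

    def forward(self, x, adj_norm, tau=0.1):
        # x: (n_nodes, dim); adj_norm: row-normalized adjacency (n_nodes, n_nodes)
        laplacian_term = adj_norm @ x - x               # neighbor average minus self
        return x + tau * torch.sigmoid(self.div).unsqueeze(-1) * laplacian_term
```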
arXiv Detail & Related papers (2024-06-07T14:29:21Z)
- Cognitive Evolutionary Learning to Select Feature Interactions for Recommender Systems [59.117526206317116]
We show that CELL can adaptively evolve into different models for different tasks and data.
Experiments on four real-world datasets demonstrate that CELL significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-05-29T02:35:23Z)
- Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks [0.0]
The aim of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models.
The selection of structured data enables an associative memory model to retrieve concepts as attractors of neural dynamics with considerable basins of attraction.
A novel regularization technique for Boltzmann Machines is presented, proving to outperform previously developed methods in learning hidden probability distributions from datasets.
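The unlearning idea this thesis studies has a classic concrete form: Hebbian unlearning in a Hopfield network, where attractors reached from random states (often spurious) are weakened. The sketch below shows that textbook procedure with arbitrary sizes and rates; it is not the thesis's specific models.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 64, 8
patterns = rng.choice([-1, 1], size=(P, N))
J = (patterns.T @ patterns) / N          # Hebbian couplings
np.fill_diagonal(J, 0)

def relax(s, steps=50):
    """Iterate zero-temperature dynamics toward an attractor."""
    for _ in range(steps):
        s = np.sign(J @ s)
        s[s == 0] = 1
    return s

# Hebbian unlearning: weaken attractors reached from random initial states
eps = 0.01
for _ in range(200):
    a = relax(rng.choice([-1, 1], size=N))
    J -= eps * np.outer(a, a) / N
    np.fill_diagonal(J, 0)
```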
arXiv Detail & Related papers (2024-03-04T23:12:42Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
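Diffusion over network weights can be sketched in miniature: flatten weights into vectors, corrupt them under a noise schedule, and train a denoiser to predict the injected noise. Everything below (architecture, cosine schedule, shapes) is a hypothetical simplification of the paradigm, not D2NWG itself.

```python
import torch
import torch.nn as nn

class WeightDenoiser(nn.Module):
    """Toy denoiser over flattened weight vectors (hypothetical shapes)."""
    def __init__(self, n_params, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_params + 1, hidden), nn.SiLU(),
                                 nn.Linear(hidden, n_params))

    def forward(self, w_noisy, t):          # w_noisy: (B, n_params), t: (B, 1)
        return self.net(torch.cat([w_noisy, t], dim=-1))

def diffusion_loss(model, w0):
    """DDPM-style objective: corrupt clean weights w0, predict the noise."""
    t = torch.rand(w0.size(0), 1)
    alpha_bar = torch.cos(t * torch.pi / 2) ** 2        # simple cosine schedule
    noise = torch.randn_like(w0)
    w_t = alpha_bar.sqrt() * w0 + (1 - alpha_bar).sqrt() * noise
    return ((model(w_t, t) - noise) ** 2).mean()
```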
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- When Large Language Models Meet Evolutionary Algorithms: Potential Enhancements and Challenges [50.280704114978384]
Pre-trained large language models (LLMs) exhibit powerful capabilities for generating natural text. Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
arXiv Detail & Related papers (2024-01-19T05:58:30Z)
- On sparse regression, Lp-regularization, and automated model discovery [0.0]
We show that Lp-regularized neural networks can simultaneously discover both interpretable models and physically meaningful parameters.
Our ability to automatically discover material models from data could have tremendous applications in generative material design.
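Lp regularization with p <= 1 is the sparsity mechanism behind this kind of automated model discovery. Below is a minimal sketch on a generic regression network, with a small epsilon keeping the penalty differentiable at zero; the network, data, and coefficients are placeholders, not the paper's material models.

```python
import torch
import torch.nn as nn

def lp_penalty(model, p=0.5, eps=1e-8):
    # (|w| + eps)^p promotes sparsity for p <= 1; eps avoids a blow-up at w = 0
    return sum(((w.abs() + eps) ** p).sum() for w in model.parameters())

model = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(128, 4), torch.randn(128, 1)
for _ in range(100):
    loss = nn.functional.mse_loss(model(x), y) + 1e-3 * lp_penalty(model, p=0.5)
    opt.zero_grad()
    loss.backward()
    opt.step()
```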
arXiv Detail & Related papers (2023-10-09T05:34:21Z)
- Incorporating Neuro-Inspired Adaptability for Continual Learning in Artificial Intelligence [59.11038175596807]
Continual learning aims to empower artificial intelligence with strong adaptability to the real world.
Existing advances mainly focus on preserving memory stability to overcome catastrophic forgetting.
We propose a generic approach that appropriately attenuates old memories in parameter distributions to improve learning plasticity.
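Attenuating old memories in parameter distributions can be illustrated with an EWC-style importance penalty whose accumulated importance is decayed by a factor gamma before each new task. This is a hypothetical simplification, not the paper's exact method.

```python
def attenuate_and_merge(fisher_old, fisher_new, gamma=0.9):
    """Decay (attenuate) accumulated parameter importance before adding the
    new task's, so old memories fade and plasticity is restored (sketch)."""
    return {k: gamma * fisher_old[k] + fisher_new[k] for k in fisher_new}

def stability_penalty(model, fisher, anchor, lam=10.0):
    # quadratic pull toward anchor weights, scaled by (attenuated) importance
    return lam * sum((fisher[n] * (p - anchor[n]) ** 2).sum()
                     for n, p in model.named_parameters())
```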
arXiv Detail & Related papers (2023-08-29T02:43:58Z)
- Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks [62.48782506095565]
We show that due to the greedy nature of learning in deep neural networks, models tend to rely on just one modality while under-fitting the other modalities.
We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning.
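Balancing conditional learning speeds could, in spirit, mean re-weighting each modality's loss inversely to its recent improvement, so the greedy modality is damped while slower ones catch up. The rule below is a hypothetical illustration, not the paper's algorithm.

```python
def modality_weights(loss_prev, loss_now, floor=1e-8):
    """Weight each modality inversely to its recent learning speed (sketch)."""
    speed = {m: max(loss_prev[m] - loss_now[m], floor) for m in loss_now}
    mean = sum(speed.values()) / len(speed)
    return {m: mean / s for m, s in speed.items()}   # slow modality -> larger weight

# usage: total = sum(w[m] * losses[m] for m in losses), then total.backward()
```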
arXiv Detail & Related papers (2022-02-10T20:11:21Z)
- Canoe: A System for Collaborative Learning for Neural Nets [4.547883122787855]
Canoe is a framework that facilitates knowledge transfer for neural networks.
Canoe provides new system support for dynamically extracting significant parameters from a helper node's neural network.
The evaluation of Canoe with different PyTorch and neural network models demonstrates that the knowledge transfer mechanism improves the model's adaptiveness by up to 3.5X compared to learning in isolation.
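Extracting significant parameters from a helper network and blending them into a learner, the mechanism the summary attributes to Canoe, can be sketched as a magnitude-based selection. The fraction, blending factor, and the assumption of identical architectures are all hypothetical simplifications.

```python
import torch

def transfer_significant(helper, learner, frac=0.1, alpha=0.5):
    """Blend in only the highest-magnitude fraction of a helper's parameters
    (assumes identical architectures; selection/fusion rules are hypothetical)."""
    with torch.no_grad():
        for (_, hp), (_, lp) in zip(helper.named_parameters(),
                                    learner.named_parameters()):
            k = max(1, int(frac * hp.numel()))
            thresh = hp.abs().flatten().topk(k).values.min()   # k-th largest magnitude
            mask = hp.abs() >= thresh
            lp[mask] = (1 - alpha) * lp[mask] + alpha * hp[mask]
```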
arXiv Detail & Related papers (2021-08-27T05:30:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.