Importance inversion transfer identifies shared principles for cross-domain learning
- URL: http://arxiv.org/abs/2602.09116v2
- Date: Wed, 11 Feb 2026 08:58:15 GMT
- Title: Importance inversion transfer identifies shared principles for cross-domain learning
- Authors: Daniele Caligiore
- Abstract summary: This study formalizes a framework unifying network science and explainable artificial intelligence. It prioritizes structural invariants that generalize across biological, linguistic, molecular, and social networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The capacity to transfer knowledge across scientific domains relies on shared organizational principles. However, existing transfer-learning methodologies often fail to bridge radically heterogeneous systems, particularly under severe data scarcity or stochastic noise. This study formalizes Explainable Cross-Domain Transfer Learning (X-CDTL), a framework unifying network science and explainable artificial intelligence to identify structural invariants that generalize across biological, linguistic, molecular, and social networks. By introducing the Importance Inversion Transfer (IIT) mechanism, the framework prioritizes domain-invariant structural anchors over idiosyncratic, highly discriminative features. In anomaly detection tasks, models guided by these principles achieve significant performance gains - exhibiting a 56% relative improvement in decision stability under extreme noise - over traditional baselines. These results provide evidence for a shared organizational signature across heterogeneous domains, establishing a principled paradigm for cross-disciplinary knowledge propagation. By shifting from opaque latent representations to explicit structural laws, this work advances machine learning as a robust engine for scientific discovery.
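The abstract does not include an implementation, but the core idea of the Importance Inversion Transfer (IIT) mechanism — prioritizing weakly discriminative, likely domain-invariant features over highly discriminative, idiosyncratic ones — can be sketched as a simple reweighting. All function and variable names below are illustrative, not taken from the paper.

```python
import numpy as np

def invert_importance(importances, eps=1e-8):
    """Invert per-feature importance scores so that features a
    source-domain model found *least* discriminative (and which are
    therefore more likely to be domain-invariant structural anchors)
    receive the highest transfer weight."""
    importances = np.asarray(importances, dtype=float)
    inverted = 1.0 / (importances + eps)
    return inverted / inverted.sum()  # normalize to a weight vector

# Hypothetical importances from a source-domain model: the third
# feature dominates, so inversion down-weights it for transfer.
src_importances = np.array([0.05, 0.10, 0.85])
weights = invert_importance(src_importances)
```

With these example importances, the dominant third feature receives the smallest transfer weight, while the weakly discriminative first feature receives the largest — the inversion the mechanism's name suggests.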
Related papers
- Towards a Science of Collective AI: LLM-based Multi-Agent Systems Need a Transition from Blind Trial-and-Error to Rigorous Science [70.3658845234978]
Large Language Models (LLMs) have greatly extended the capabilities of Multi-Agent Systems (MAS). Despite this rapid progress, the field still relies heavily on empirical trial-and-error. This bottleneck stems from the ambiguity of attribution. We propose a factor attribution paradigm to systematically identify collaboration-driving factors.
arXiv Detail & Related papers (2026-02-05T04:19:52Z) - Adapting, Fast and Slow: Transportable Circuits for Few-Shot Learning [54.930879235929204]
Generalization across domains is not possible without asserting a structure that constrains the unseen target domain. We design an algorithm for zero-shot compositional generalization which relies on access to qualitative domain knowledge. Our theoretical results characterize classes of few-shot learnable tasks in terms of graphical circuit transportability criteria.
arXiv Detail & Related papers (2025-12-28T04:38:43Z) - Domain Translation of a Soft Robotic Arm using Conditional Cycle Generative Adversarial Network [0.8624680612413766]
We introduce a domain translation framework based on a conditional cycle generative adversarial network (CCGAN). Our model learns from input pressure signals conditioned on corresponding end-effector positions and orientations in both domains.
arXiv Detail & Related papers (2025-08-16T15:47:35Z) - CTRLS: Chain-of-Thought Reasoning via Latent State-Transition [57.51370433303236]
Chain-of-thought (CoT) reasoning enables large language models to break down complex problems into interpretable intermediate steps. We introduce CTRLS, a framework that formulates CoT reasoning as a Markov decision process (MDP) with latent state transitions. We show improvements in reasoning accuracy, diversity, and exploration efficiency across benchmark reasoning tasks.
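As a rough illustration of the MDP framing described in this summary (not the paper's actual formulation), a chain-of-thought rollout can be modeled as stochastic transitions over latent reasoning states. All state and action names below are invented for the sketch.

```python
import random

# Toy MDP: states are latent reasoning stages, actions are candidate
# intermediate steps, and transitions are stochastic (self-loops model
# failed or repeated reasoning steps).
TRANSITIONS = {
    ("start",   "decompose"): [("subgoal", 0.9), ("start",   0.1)],
    ("subgoal", "derive"):    [("answer",  0.8), ("subgoal", 0.2)],
}

def rollout(policy, rng, max_steps=10):
    """Sample a reasoning trajectory until the terminal 'answer' state
    or until max_steps transitions have been taken."""
    state, trajectory = "start", ["start"]
    for _ in range(max_steps):
        if state == "answer":
            break
        outcomes = TRANSITIONS[(state, policy[state])]
        states, probs = zip(*outcomes)
        state = rng.choices(states, weights=probs)[0]
        trajectory.append(state)
    return trajectory

policy = {"start": "decompose", "subgoal": "derive"}
traj = rollout(policy, random.Random(0))
```

A learned policy would choose among multiple candidate actions per state; here a fixed policy keeps the sketch minimal.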
arXiv Detail & Related papers (2025-07-10T21:32:18Z) - Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why [50.191655141020505]
This survey provides a comparative analysis of feature-based and GAN-based approaches to learning from demonstrations. We argue that the dichotomy between feature-based and GAN-based methods is increasingly nuanced.
arXiv Detail & Related papers (2025-07-08T11:45:51Z) - Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment [9.215549756572976]
Real-world graphs are often sparse and prone to noisy connections and adversarial attacks. We propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information. By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains.
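A minimal sketch of the kind of topological signature such cross-domain alignment could compare, assuming a simple degree-distribution invariant (the actual MDGFM alignment is more sophisticated; all names here are illustrative):

```python
import numpy as np

def degree_signature(adj, bins=5):
    """Normalized degree histogram: a crude topological signature
    that can be compared across graphs of different domains."""
    degrees = adj.sum(axis=1)
    hist, _ = np.histogram(degrees, bins=bins, range=(0, adj.shape[0]))
    return hist / hist.sum()

def topology_distance(adj_a, adj_b, bins=5):
    """L1 distance between the degree signatures of two graphs."""
    return np.abs(
        degree_signature(adj_a, bins) - degree_signature(adj_b, bins)
    ).sum()

# Two toy undirected graphs as adjacency matrices ("domains").
ring = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])  # 4-cycle
star = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]])  # star
d = topology_distance(ring, star)
```

Graphs with similar signatures would be candidates for sharing transferred knowledge; richer invariants (clustering, spectral features) follow the same pattern.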
arXiv Detail & Related papers (2025-02-04T05:09:23Z) - Decoupling Knowledge and Reasoning in Transformers: A Modular Architecture with Generalized Cross-Attention [9.401360346241296]
This paper introduces a novel modular Transformer architecture that explicitly decouples knowledge and reasoning. We provide a rigorous mathematical derivation demonstrating that the Feed-Forward Network (FFN) in a standard Transformer is a specialized case of the proposed generalized cross-attention.
arXiv Detail & Related papers (2025-01-01T12:55:57Z) - Causal Temporal Representation Learning with Nonstationary Sparse Transition [22.6420431022419]
Causal Temporal Representation Learning (Ctrl) methods aim to identify the temporal causal dynamics of complex nonstationary temporal sequences.
This work adopts a sparse transition assumption, aligned with intuitive human understanding, and presents identifiability results from a theoretical perspective.
We introduce a novel framework, Causal Temporal Representation Learning with Nonstationary Sparse Transition (CtrlNS), designed to leverage the constraints on transition sparsity.
arXiv Detail & Related papers (2024-09-05T00:38:27Z) - Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning [54.69189620971405]
We provide a unified framework, termed Identifiable Exchangeable Mechanisms (IEM), for representation and structure learning. IEM provides new insights that let us relax the necessary conditions for causal structure identification in exchangeable non-i.i.d. data. We also demonstrate the existence of a duality condition in identifiable representation learning, leading to new identifiability results.
arXiv Detail & Related papers (2024-06-20T13:30:25Z) - DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z) - Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities [28.65224261733876]
We look at opportunities and challenges in refining the generalisation and transfer of knowledge.
Within the domain of reinforcement learning (RL), the representation of knowledge manifests through various modalities.
This taxonomy systematically targets these modalities and frames its discussion based on their inherent properties and alignment with different objectives and mechanisms for transfer.
arXiv Detail & Related papers (2023-12-04T14:55:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site. This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.