Adapting, Fast and Slow: Transportable Circuits for Few-Shot Learning
- URL: http://arxiv.org/abs/2512.22777v1
- Date: Sun, 28 Dec 2025 04:38:43 GMT
- Title: Adapting, Fast and Slow: Transportable Circuits for Few-Shot Learning
- Authors: Kasra Jalaldoust, Elias Bareinboim
- Abstract summary: Generalization across domains is not possible without asserting a structure that constrains the unseen target domain w.r.t. the source domain. We design an algorithm for zero-shot compositional generalization which relies on access to qualitative domain knowledge. Our theoretical results characterize classes of few-shot learnable tasks in terms of graphical circuit transportability criteria.
- Score: 54.930879235929204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generalization across domains is not possible without asserting a structure that constrains the unseen target domain w.r.t. the source domain. Building on causal transportability theory, we design an algorithm for zero-shot compositional generalization that relies on access to qualitative domain knowledge in the form of a causal graph for intra-domain structure and a discrepancy oracle for inter-domain mechanism sharing. \textit{Circuit-TR} learns a collection of modules (i.e., local predictors) from the source data and transports/composes them to obtain a circuit for prediction in the target domain when the causal structure licenses it. Furthermore, circuit transportability enables us to design a supervised domain adaptation scheme that operates without access to an explicit causal structure, using limited target data instead. Our theoretical results characterize classes of few-shot learnable tasks in terms of graphical circuit transportability criteria and connect few-shot generalizability with the established notion of circuit size complexity; controlled simulations corroborate our theoretical results.
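The transport-and-compose idea from the abstract can be illustrated with a minimal sketch. Everything below is hypothetical and not the authors' implementation: we assume a causal graph given as a parent map, a discrepancy oracle reduced to a set of nodes whose mechanisms are shared across domains, and one local predictor (module) per node trained on source data. Modules for shared mechanisms are transported as-is; the rest must come from target data, otherwise transport is not licensed.

```python
def transport_circuit(graph, source_modules, invariant_nodes, target_modules):
    """Compose a target-domain circuit. `graph` maps node -> list of parents;
    `invariant_nodes` plays the role of the discrepancy oracle (mechanisms
    shared across domains); `target_modules` holds predictors relearned from
    limited target data. Returns None if the structure does not license
    transport for some node."""
    circuit = {}
    for node in graph:
        if node in invariant_nodes:
            circuit[node] = source_modules[node]   # transport the source module
        elif node in target_modules:
            circuit[node] = target_modules[node]   # use the few-shot target module
        else:
            return None                            # no licensed module for this node
    return circuit


def predict(circuit, graph, inputs, order):
    """Evaluate the circuit in topological `order`; each module maps the
    list of its parents' values to the node's value."""
    values = dict(inputs)
    for node in order:
        if node not in values:
            parent_vals = [values[p] for p in graph[node]]
            values[node] = circuit[node](parent_vals)
    return values
```

For a toy chain X -> Z -> Y with all mechanisms shared, the composed circuit simply chains the source modules; removing Y from the invariant set (with no target module to fall back on) makes `transport_circuit` return None, mirroring the "licensed by the causal structure" condition.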
Related papers
- A Unified Analysis of Generalization and Sample Complexity for Semi-Supervised Domain Adaptation [1.9567015559455132]
Domain adaptation seeks to leverage the abundant label information in a source domain to improve classification performance in a target domain with limited labels. Most existing theoretical analyses focus on simplified settings where the source and target domains share the same input space. We present a comprehensive theoretical study of domain adaptation algorithms based on domain alignment.
arXiv Detail & Related papers (2025-07-30T12:53:08Z)
- A Pre-training Framework for Relational Data with Information-theoretic Principles [57.93973948947743]
We introduce Task Vector Estimation (TVE), a novel pre-training framework that constructs supervisory signals via set-based aggregation over relational graphs. TVE consistently outperforms traditional pre-training baselines. Our findings advocate for pre-training objectives that encode task heterogeneity and temporal structure as design principles for predictive modeling on relational databases.
arXiv Detail & Related papers (2025-07-14T00:17:21Z)
- Partial Transportability for Domain Generalization [56.37032680901525]
Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution. Our contribution is to provide the first general estimation technique for transportability problems. We propose a gradient-based optimization scheme for making scalable inferences in practice.
arXiv Detail & Related papers (2025-03-30T22:06:37Z)
- Logifold: A Geometrical Foundation of Ensemble Machine Learning [0.0]
We present a local-to-global and measure-theoretical approach to understanding datasets.
The core idea is to formulate a logifold structure and to interpret network models with restricted domains as local charts of datasets.
arXiv Detail & Related papers (2024-07-23T04:47:58Z)
- DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z)
- Overcoming Shortcut Learning in a Target Domain by Generalizing Basic Visual Factors from a Source Domain [7.012240324005977]
Shortcut learning occurs when a deep neural network overly relies on spurious correlations in the training dataset to solve downstream tasks.
We propose a novel approach to mitigate shortcut learning in uncontrolled target domains.
arXiv Detail & Related papers (2022-07-20T16:05:32Z)
- Relation Matters: Foreground-aware Graph-based Relational Reasoning for Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection, named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state-of-the-art on four domain adaptive object detection benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z)
- Unsupervised Domain Adaptation for Image Classification via Structure-Conditioned Adversarial Learning [70.79486026698419]
Unsupervised domain adaptation (UDA) typically carries out knowledge transfer from a label-rich source domain to an unlabeled target domain by adversarial learning.
We propose an end-to-end structure-conditioned adversarial learning scheme (SCAL) that is able to preserve the intra-class compactness during domain distribution alignment.
arXiv Detail & Related papers (2021-03-04T03:12:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.