SfMamba: Efficient Source-Free Domain Adaptation via Selective Scan Modeling
- URL: http://arxiv.org/abs/2601.08608v1
- Date: Tue, 13 Jan 2026 14:53:47 GMT
- Title: SfMamba: Efficient Source-Free Domain Adaptation via Selective Scan Modeling
- Authors: Xi Chen, Hongxun Yao, Sicheng Zhao, Jiankun Zhu, Jing Jiang, Kui Jiang
- Abstract summary: Source-free domain adaptation (SFDA) tackles the challenge of adapting source-pretrained models to unlabeled target domains. We propose a framework called SfMamba to fully explore the stable dependency in source-free model transfer.
- Score: 60.860172819390954
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Source-free domain adaptation (SFDA) tackles the critical challenge of adapting source-pretrained models to unlabeled target domains without access to source data, overcoming data privacy and storage limitations in real-world applications. However, existing SFDA approaches struggle with the trade-off between perception field and computational efficiency in domain-invariant feature learning. Recently, Mamba has offered a promising solution through its selective scan mechanism, which enables long-range dependency modeling with linear complexity. However, Visual Mamba (i.e., VMamba) remains limited in capturing the channel-wise frequency characteristics critical for domain alignment and in maintaining spatial robustness under significant domain shifts. To address these limitations, we propose a framework called SfMamba that fully explores stable dependencies in source-free model transfer. SfMamba introduces a Channel-wise Visual State-Space block that enables channel-sequence scanning for domain-invariant feature extraction. In addition, SfMamba involves a Semantic-Consistent Shuffle strategy that disrupts background patch sequences in the 2D selective scan while preserving prediction consistency to mitigate error accumulation. Comprehensive evaluations across multiple benchmarks show that SfMamba achieves consistently stronger performance than existing methods while maintaining favorable parameter efficiency, offering a practical solution for SFDA. Our code is available at https://github.com/chenxi52/SfMamba.
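As background, the selective scan underlying Mamba can be illustrated with a minimal 1-D state-space recurrence. The sketch below is a toy with scalar state and precomputed gates, not VMamba's actual discretized, input-dependent parameterization:

```python
def selective_scan(x, a, b, c):
    """Toy 1-D selective scan: h_t = a_t * h_{t-1} + b_t * x_t, y_t = c_t * h_t.

    In Mamba the gates (a, b, c) are input-dependent and derived from a
    discretized state-space model; here they are plain precomputed scalars.
    A single pass over the sequence gives linear O(T) complexity, in
    contrast to the O(T^2) cost of full self-attention.
    """
    h = 0.0
    ys = []
    for x_t, a_t, b_t, c_t in zip(x, a, b, c):
        h = a_t * h + b_t * x_t  # recurrent state update
        ys.append(c_t * h)       # per-step readout
    return ys
```

A channel-sequence scan in the spirit of SfMamba's Channel-wise Visual State-Space block would feed the channel values at each spatial site, rather than the spatial patch sequence, into the same recurrence.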
Related papers
- GMM-COMET: Continual Source-Free Universal Domain Adaptation via a Mean Teacher and Gaussian Mixture Model-Based Pseudo-Labeling [3.1744658275045103]
Unsupervised domain adaptation tackles the problem that domain shifts between training and test data impair the performance of neural networks in real-world applications. This setting, known as source-free universal domain adaptation (SF-UniDA), has recently gained attention. We present the first study on continual SF-UniDA, where the model must adapt sequentially to a stream of multiple different unlabeled target domains.
arXiv Detail & Related papers (2026-01-16T10:23:19Z)
- Simulating Distribution Dynamics: Liquid Temporal Feature Evolution for Single-Domain Generalized Object Detection [58.25418970608328]
Single-Domain Generalized Object Detection (Single-DGOD) aims to transfer a detector trained on one source domain to multiple unknown domains. Existing methods for Single-DGOD typically rely on discrete data augmentation or static perturbation methods to expand data diversity. We propose a new method, which simulates the progressive evolution of features from the source domain to simulated latent distributions.
arXiv Detail & Related papers (2025-11-13T03:10:39Z) - Diffusion-Driven Progressive Target Manipulation for Source-Free Domain Adaptation [108.0345347464393]
Source-free domain adaptation (SFDA) is a challenging task that tackles domain shifts using only a pre-trained source model and unlabeled target data. Non-generation SFDA methods suffer from unreliable pseudo-labels in challenging scenarios with large domain discrepancies. We propose a novel generation-based framework named Diffusion-Driven Progressive Target Manipulation.
arXiv Detail & Related papers (2025-10-29T08:38:03Z) - Feature-Space Planes Searcher: A Universal Domain Adaptation Framework for Interpretability and Computational Efficiency [7.889121135601528]
Current unsupervised domain adaptation methods rely on fine-tuning feature extractors. We propose Feature-space Planes Searcher (FPS) as a novel domain adaptation framework. We show that FPS achieves competitive or superior performance to state-of-the-art methods.
arXiv Detail & Related papers (2025-08-26T05:39:21Z) - Scaling Vision Mamba Across Resolutions via Fractal Traversal [9.566046692165884]
Vision Mamba has recently emerged as a promising alternative to Transformer-based architectures. We propose FractalMamba++, a vision backbone that leverages fractal-based patch serialization via Hilbert curves. We show that FractalMamba++ consistently outperforms previous Mamba-based backbones.
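Hilbert-curve patch serialization, which FractalMamba++ reportedly builds on, maps a 1-D scan index to 2-D coordinates so that consecutive patches in the scan stay spatially adjacent. Below is a sketch of the standard distance-to-coordinate conversion (the textbook algorithm, not the paper's code):

```python
def hilbert_d2xy(order, d):
    """Map a distance d along an order-n Hilbert curve (2^n x 2^n grid)
    to (x, y) coordinates. Consecutive distances map to neighboring
    cells, which is what makes the curve attractive for serializing
    image patches into a locality-preserving 1-D scan sequence."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                   # reflect across the diagonal
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

For a 4x4 patch grid (order 2), `hilbert_d2xy(2, d)` for d = 0..15 visits every patch exactly once, with each step moving to an adjacent patch.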
arXiv Detail & Related papers (2025-05-20T08:08:28Z)
- Let Synthetic Data Shine: Domain Reassembly and Soft-Fusion for Single Domain Generalization [68.41367635546183]
Single Domain Generalization aims to train models with consistent performance across diverse scenarios using data from a single source. We propose Discriminative Domain Reassembly and Soft-Fusion (DRSF), a training framework leveraging synthetic data to improve model generalization.
arXiv Detail & Related papers (2025-03-17T18:08:03Z)
- DA-Mamba: Domain Adaptive Hybrid Mamba-Transformer Based One-Stage Object Detection [0.3683202928838613]
Inspired by the global modeling and linear complexity of the Mamba architecture, we present the first domain-adaptive Mamba-based one-stage object detection model, DA-Mamba.
arXiv Detail & Related papers (2025-02-16T15:58:54Z)
- Memory-Efficient Pseudo-Labeling for Online Source-Free Universal Domain Adaptation using a Gaussian Mixture Model [3.1265626879839923]
In practice, domain shifts are likely to occur between training and test data, necessitating domain adaptation (DA) to adjust the pre-trained source model to the target domain.
UniDA has gained attention for addressing the possibility of an additional category (label) shift between the source and target domain.
We propose a novel method that continuously captures the distribution of known classes in the feature space using a Gaussian mixture model (GMM).
Our approach achieves state-of-the-art results in all experiments on the DomainNet and Office-Home datasets.
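The GMM-based pseudo-labeling idea can be sketched with a deliberately simplified model: one running Gaussian per known class over scalar features, with maximum-likelihood class assignment. `GaussianClassModel` and its parameters are hypothetical illustrations, not the paper's implementation, which uses a full Gaussian mixture over high-dimensional features:

```python
import math

class GaussianClassModel:
    """Minimal sketch: track a running mean/variance per known class and
    pseudo-label each target feature by maximum log-likelihood."""

    def __init__(self, classes, momentum=0.9):
        self.momentum = momentum
        self.stats = {c: (0.0, 1.0) for c in classes}  # class -> (mean, var)

    def update(self, c, feat):
        # Exponential moving average keeps the estimate adapting online.
        m, v = self.stats[c]
        new_m = self.momentum * m + (1 - self.momentum) * feat
        new_v = self.momentum * v + (1 - self.momentum) * (feat - new_m) ** 2
        self.stats[c] = (new_m, new_v)

    def log_likelihood(self, c, feat):
        m, v = self.stats[c]
        return -0.5 * (math.log(2 * math.pi * v) + (feat - m) ** 2 / v)

    def pseudo_label(self, feat):
        return max(self.stats, key=lambda c: self.log_likelihood(c, feat))
```

The momentum update is one simple way to realize "continuously captures the distribution"; memory stays constant in the number of observed samples, which matches the memory-efficiency claim in spirit.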
arXiv Detail & Related papers (2024-07-19T11:13:31Z)
- StyDeSty: Min-Max Stylization and Destylization for Single Domain Generalization [85.18995948334592]
Single domain generalization (single DG) aims at learning a robust model generalizable to unseen domains from only one training domain.
State-of-the-art approaches have mostly relied on data augmentations, such as adversarial perturbation and style enhancement, to synthesize new data.
We propose StyDeSty, which explicitly accounts for the alignment of the source and pseudo domains in the process of data augmentation.
arXiv Detail & Related papers (2024-06-01T02:41:34Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention; it transfers knowledge from a label-rich source domain to a related but unlabeled target domain. We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.