DistillFSS: Synthesizing Few-Shot Knowledge into a Lightweight Segmentation Model
- URL: http://arxiv.org/abs/2512.05613v1
- Date: Fri, 05 Dec 2025 10:54:23 GMT
- Title: DistillFSS: Synthesizing Few-Shot Knowledge into a Lightweight Segmentation Model
- Authors: Pasquale De Marinis, Pieter M. Blok, Uzay Kaymak, Rogier Brussee, Gennaro Vessio, Giovanna Castellano
- Abstract summary: Cross-Domain Few-Shot Semantic Segmentation (CD-FSS) seeks to segment unknown classes in unseen domains. We propose DistillFSS, a framework that embeds support-set knowledge directly into a model's parameters. By internalizing few-shot reasoning into a dedicated layer within the student network, DistillFSS eliminates the need for support images at test time.
- Score: 8.487765630753048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cross-Domain Few-Shot Semantic Segmentation (CD-FSS) seeks to segment unknown classes in unseen domains using only a few annotated examples. This setting is inherently challenging: source and target domains exhibit substantial distribution shifts, label spaces are disjoint, and support images are scarce--making standard episodic methods unreliable and computationally demanding at test time. To address these constraints, we propose DistillFSS, a framework that embeds support-set knowledge directly into a model's parameters through a teacher--student distillation process. By internalizing few-shot reasoning into a dedicated layer within the student network, DistillFSS eliminates the need for support images at test time, enabling fast, lightweight inference, while allowing efficient extension to novel classes in unseen domains through rapid teacher-driven specialization. Combined with fine-tuning, the approach scales efficiently to large support sets and significantly reduces computational overhead. To evaluate the framework under realistic conditions, we introduce a new CD-FSS benchmark spanning medical imaging, industrial inspection, and remote sensing, with disjoint label spaces and variable support sizes. Experiments show that DistillFSS matches or surpasses state-of-the-art baselines, particularly in multi-class and multi-shot scenarios, while offering substantial efficiency gains. The code is available at https://github.com/pasqualedem/DistillFSS.
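The teacher-student distillation at the heart of DistillFSS can be illustrated with a generic pixel-wise distillation objective. This is a minimal sketch under assumed mechanics: the abstract does not specify the paper's actual loss, so a standard temperature-softened KL divergence between teacher and student class distributions is used here as a stand-in.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Mean pixel-wise KL(teacher || student) with temperature T.

    A generic distillation objective; DistillFSS's exact loss and
    architecture are not given in the abstract.
    """
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return float(kl.mean() * T * T)

# Toy example: 4x4 "segmentation logit maps" with 3 classes.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 4, 3))
student = rng.normal(size=(4, 4, 3))
print(distillation_loss(teacher, teacher))  # 0.0 when distributions match
print(distillation_loss(student, teacher))  # strictly positive otherwise
```

Once the student is trained against a teacher specialized on the support set, inference needs only the student forward pass, which is what removes support images from the test-time loop.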
Related papers
- Following the Teacher's Footsteps: Scheduled Checkpoint Distillation for Domain-Specific LLMs [5.786917616876281]
Large language models (LLMs) are challenging to deploy for domain-specific tasks due to their massive scale. While distilling a fine-tuned LLM into a smaller student model is a promising alternative, the capacity gap between teacher and student often leads to suboptimal performance. We propose a novel theoretical insight: a student can outperform its teacher if its advantage on a Student-Favored Subdomain outweighs its deficit on the Teacher-Favored Subdomain.
arXiv Detail & Related papers (2026-01-15T06:46:01Z) - Take a Peek: Efficient Encoder Adaptation for Few-Shot Semantic Segmentation via LoRA [10.406945969691781]
Few-shot semantic segmentation (FSS) aims to segment novel classes in query images using only a small support set. We introduce Take a Peek (TaP), a method that enhances encoder adaptability for both FSS and cross-domain FSS.
arXiv Detail & Related papers (2025-12-11T10:47:01Z) - Multiple Stochastic Prompt Tuning for Few-shot Adaptation under Extreme Domain Shift [14.85375816073596]
We introduce multiple learnable prompts per class to capture diverse modes in visual representations arising from distribution shifts. These prompts are modeled as learnable Gaussian distributions, enabling efficient exploration of the prompt parameter space. Experiments and comparisons with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
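Modeling prompts as Gaussian distributions typically means sampling them via the reparameterization trick so the distribution parameters stay learnable. The sketch below shows only that assumed sampling mechanics; the shapes, parameter names, and number of samples are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_prompts(mu, log_sigma, n_samples=4):
    """Draw prompt vectors from N(mu, sigma^2) per class.

    Reparameterization: prompt = mu + sigma * eps, with eps ~ N(0, I),
    so gradients can flow into mu and log_sigma during training.
    """
    sigma = np.exp(log_sigma)  # log-parameterization keeps sigma positive
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + sigma * eps  # shape: (n_samples, n_classes, dim)

mu = np.zeros((3, 8))             # 3 classes, 8-dim prompt embeddings (toy sizes)
log_sigma = np.full((3, 8), -1.0)
prompts = sample_prompts(mu, log_sigma)
print(prompts.shape)  # (4, 3, 8)
```

Drawing several samples per class is what lets the method cover multiple modes of the shifted visual representation rather than committing to a single point estimate.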
arXiv Detail & Related papers (2025-06-04T13:18:04Z) - Adapting In-Domain Few-Shot Segmentation to New Domains without Retraining [53.963279865355105]
Cross-domain few-shot segmentation (CD-FSS) aims to segment objects of novel classes in new domains. Most CD-FSS methods redesign and retrain in-domain FSS models using various domain-generalization techniques. We propose adapting informative model structures of the well-trained FSS model for target domains by learning domain characteristics from few-shot labeled support samples.
arXiv Detail & Related papers (2025-04-30T08:16:33Z) - DSV-LFS: Unifying LLM-Driven Semantic Cues with Visual Features for Robust Few-Shot Segmentation [2.7624021966289605]
Few-shot semantic segmentation (FSS) aims to enable models to segment novel/unseen object classes using only a limited number of labeled examples. We propose a novel framework that utilizes large language models (LLMs) to adapt general class semantic information to the query image. Our framework achieves state-of-the-art performance by a significant margin, demonstrating superior generalization to novel classes and robustness across diverse scenarios.
arXiv Detail & Related papers (2025-03-06T01:42:28Z) - TAVP: Task-Adaptive Visual Prompt for Cross-domain Few-shot Segmentation [40.49924427388922]
We propose a task-adaptive auto-visual prompt framework for Cross-domain Few-shot Segmentation (CD-FSS). We incorporate a Class Domain Task-Adaptive Auto-Prompt (CDTAP) module to enable class-domain feature extraction and generate high-quality, learnable visual prompts. Our model outperforms the state-of-the-art CD-FSS approach, achieving an average accuracy improvement of 1.3% in the 1-shot setting and 11.76% in the 5-shot setting.
arXiv Detail & Related papers (2024-09-09T07:43:58Z) - Adapt Before Comparison: A New Perspective on Cross-Domain Few-Shot Segmentation [0.0]
Cross-domain few-shot segmentation (CD-FSS) has emerged.
We show test-time task-adaptation is the key for successful CD-FSS.
Despite our self-restriction not to use any images other than the few labeled samples at test time, we achieve new state-of-the-art performance in CD-FSS.
arXiv Detail & Related papers (2024-02-27T15:43:53Z) - Cap2Aug: Caption guided Image to Image data Augmentation [41.53127698828463]
Cap2Aug is an image-to-image diffusion model-based data augmentation strategy using image captions as text prompts.
We generate captions from the limited training images and use these captions to edit the training images with an image-to-image stable diffusion model.
This strategy generates augmented versions of images similar to the training images yet provides semantic diversity across the samples.
arXiv Detail & Related papers (2022-12-11T04:37:43Z) - Cross-domain Few-shot Segmentation with Transductive Fine-tuning [29.81009103722184]
We propose to transductively fine-tune the base model on a set of query images under the few-shot setting.
Our method could consistently and significantly improve the performance of prototypical FSS models in all cross-domain tasks.
arXiv Detail & Related papers (2022-11-27T06:44:41Z) - Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
arXiv Detail & Related papers (2021-12-03T13:31:59Z) - Disentangled Feature Representation for Few-shot Image Classification [64.40410801469106]
We propose a novel Disentangled Feature Representation framework, dubbed DFR, for few-shot learning applications.
DFR can adaptively decouple the discriminative features that are modeled by the classification branch, from the class-irrelevant component of the variation branch.
In general, most of the popular deep few-shot learning methods can be plugged in as the classification branch, thus DFR can boost their performance on various few-shot tasks.
arXiv Detail & Related papers (2021-09-26T09:53:11Z) - Generalized Few-shot Semantic Segmentation [68.69434831359669]
We introduce a new benchmark called Generalized Few-Shot Semantic Segmentation (GFS-Seg) to analyze the ability to simultaneously segment novel categories.
It is the first study showing that previous representative state-of-the-art few-shot segmentation methods fall short in GFS-Seg.
We propose Context-Aware Prototype Learning (CAPL), which significantly improves performance by 1) leveraging the co-occurrence prior knowledge from support samples, and 2) dynamically enriching contextual information, conditioned on the content of each query image.
arXiv Detail & Related papers (2020-10-11T10:13:21Z) - Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z) - Prior Guided Feature Enrichment Network for Few-Shot Segmentation [64.91560451900125]
State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results.
Few-shot segmentation is proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples.
These frameworks still face the challenge of reduced generalization ability on unseen classes due to inappropriate use of high-level semantic information.
arXiv Detail & Related papers (2020-08-04T10:41:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.