SAM-Aug: Leveraging SAM Priors for Few-Shot Parcel Segmentation in Satellite Time Series
- URL: http://arxiv.org/abs/2601.09110v1
- Date: Wed, 14 Jan 2026 03:18:04 GMT
- Title: SAM-Aug: Leveraging SAM Priors for Few-Shot Parcel Segmentation in Satellite Time Series
- Authors: Kai Hu, Yaozu Feng, Vladimir Lysenko, Ya Guo, Huayi Wu
- Abstract summary: We propose SAM-Aug, a new annotation-efficient framework to improve few-shot land cover mapping. Our approach constructs cloud-free composite images from temporal sequences and applies SAM in a fully unsupervised manner. Experiments on the PASTIS-R benchmark under a 5 percent labeled setting demonstrate the effectiveness and robustness of SAM-Aug.
- Score: 3.4368348203064283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot semantic segmentation of time-series remote sensing images remains a critical challenge, particularly in regions where labeled data is scarce or costly to obtain. While state-of-the-art models perform well under full supervision, their performance degrades significantly under limited labeling, limiting their real-world applicability. In this work, we propose SAM-Aug, a new annotation-efficient framework that leverages the geometry-aware segmentation capability of the Segment Anything Model (SAM) to improve few-shot land cover mapping. Our approach constructs cloud-free composite images from temporal sequences and applies SAM in a fully unsupervised manner to generate geometry-aware mask priors. These priors are then integrated into training through a proposed loss function called RegionSmoothLoss, which enforces prediction consistency within each SAM-derived region across temporal frames, effectively regularizing the model to respect semantically coherent structures. Extensive experiments on the PASTIS-R benchmark under a 5 percent labeled setting demonstrate the effectiveness and robustness of SAM-Aug. Averaged over three random seeds (42, 2025, 4090), our method achieves a mean test mIoU of 36.21 percent, outperforming the state-of-the-art baseline by +2.33 percentage points, a relative improvement of 6.89 percent. Notably, on the most favorable split (seed=42), SAM-Aug reaches a test mIoU of 40.28 percent, representing an 11.2 percent relative gain with no additional labeled data. The consistent improvement across all seeds confirms the generalization power of leveraging foundation model priors under annotation scarcity. Our results highlight that vision models like SAM can serve as useful regularizers in few-shot remote sensing learning, offering a scalable and plug-and-play solution for land cover monitoring without requiring manual annotations or model fine-tuning.
Related papers
- Sparse Layer Sharpness-Aware Minimization for Efficient Fine-Tuning [52.63618112418439]
Sharpness-aware minimization (SAM) seeks minima with a flat loss landscape to improve generalization performance in machine learning tasks, including fine-tuning. We propose SL-SAM, an approach that breaks this bottleneck by introducing sparsity at the layer level.
arXiv Detail & Related papers (2026-02-10T04:05:43Z) - Wetland mapping from sparse annotations with satellite image time series and temporal-aware segment anything model [37.47356246646521]
We propose WetSAM, a framework that integrates satellite image time series for wetland mapping from sparse point supervision through a dual-branch design. We show that WetSAM substantially outperforms state-of-the-art methods, achieving an average F1-score of 85.58% and delivering accurate and structurally consistent wetland segmentation with minimal labeling effort.
arXiv Detail & Related papers (2026-01-16T16:10:32Z) - Boundary-Aware Test-Time Adaptation for Zero-Shot Medical Image Segmentation [12.159529070716824]
BA-TTA-SAM is a test-time adaptation framework that enhances the zero-shot segmentation performance of SAM. Our framework consistently outperforms state-of-the-art models in medical image segmentation.
arXiv Detail & Related papers (2025-12-04T07:08:21Z) - TASAM: Terrain-and-Temporally-Aware Segment Anything Model for Temporal-Scale Remote Sensing Segmentation [20.89385225170904]
Segment Anything Model (SAM) has demonstrated impressive zero-shot segmentation capabilities across natural image domains. We introduce TASAM, a terrain- and temporally-aware extension of SAM designed specifically for high-resolution remote sensing image segmentation.
arXiv Detail & Related papers (2025-09-19T09:24:24Z) - Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning [63.55145330447408]
We propose a novel Self-Perception Tuning (SPT) method for anomaly segmentation. The SPT method incorporates a self-drafting tuning strategy, which generates an initial coarse draft of the anomaly mask, followed by a refinement process.
arXiv Detail & Related papers (2024-11-26T08:33:25Z) - RobustSAM: Segment Anything Robustly on Degraded Images [19.767828436963317]
Segment Anything Model (SAM) has emerged as a transformative approach in image segmentation.
We propose the Robust Segment Anything Model (RobustSAM), which enhances SAM's performance on low-quality images.
Our method has been shown to effectively improve the performance of SAM-based downstream tasks such as single image dehazing and deblurring.
arXiv Detail & Related papers (2024-06-13T23:33:59Z) - Stitching, Fine-tuning, Re-training: A SAM-enabled Framework for Semi-supervised 3D Medical Image Segmentation [40.79197318484472]
Segment Anything Model (SAM) fine-tuning has shown remarkable performance in medical image segmentation in a fully supervised manner. We propose a three-stage framework, i.e., Stitching, Fine-tuning, and Re-training (SFR). Our SFR framework is plug-and-play and easily compatible with various popular semi-supervised methods.
arXiv Detail & Related papers (2024-03-17T14:30:56Z) - Stable Segment Anything Model [79.9005670886038]
The Segment Anything Model (SAM) achieves remarkable promptable segmentation given high-quality prompts.
This paper presents the first comprehensive analysis on SAM's segmentation stability across a diverse spectrum of prompt qualities.
Our solution, termed Stable-SAM, offers several advantages: 1) improved segmentation stability across a wide range of prompt qualities, while 2) retaining SAM's powerful promptable segmentation efficiency and generality.
arXiv Detail & Related papers (2023-11-27T12:51:42Z) - Zero-Shot Refinement of Buildings' Segmentation Models using SAM [6.110856077714895]
We present a novel approach to adapting foundation models to address existing models' drop in generalization.
Among several models, our focus centers on the Segment Anything Model (SAM).
SAM does not offer recognition abilities and thus fails to classify and tag localized objects.
This novel approach augments SAM with recognition abilities, a first of its kind.
arXiv Detail & Related papers (2023-10-03T07:19:59Z) - Segment Any Point Cloud Sequences by Distilling Vision Foundation Models [55.12618600523729]
Seal is a framework that harnesses vision foundation models (VFMs) for segmenting diverse automotive point cloud sequences.
Seal exhibits three appealing properties: Scalability, consistency and generalizability.
arXiv Detail & Related papers (2023-06-15T17:59:54Z) - Improving Visual Grounding by Encouraging Consistent Gradient-based Explanations [58.442103936918805]
We show that Attention Mask Consistency produces superior visual grounding results than previous methods.
AMC is effective, easy to implement, and is general as it can be adopted by any vision-language model.
arXiv Detail & Related papers (2022-06-30T17:55:12Z) - Efficient Sharpness-aware Minimization for Improved Training of Neural Networks [146.2011175973769]
This paper proposes the Efficient Sharpness-Aware Minimizer (ESAM), which boosts SAM's efficiency at no cost to its generalization performance.
ESAM includes two novel and efficient training strategies: Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection.
We show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM reduces SAM's overhead from 100% extra computation to 40% vis-a-vis base optimizers.
arXiv Detail & Related papers (2021-10-07T02:20:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.