AdaSemSeg: An Adaptive Few-shot Semantic Segmentation of Seismic Facies
- URL: http://arxiv.org/abs/2501.16760v1
- Date: Tue, 28 Jan 2025 07:31:09 GMT
- Title: AdaSemSeg: An Adaptive Few-shot Semantic Segmentation of Seismic Facies
- Authors: Surojit Saha, Ross Whitaker
- Abstract summary: Few-shot semantic segmentation (FSSS) methods fix the number of target classes.
We propose a few-shot semantic segmentation method for interpreting seismic facies that can adapt to a varying number of facies.
We have trained the AdaSemSeg on three public seismic facies datasets with different numbers of facies.
- Abstract: Automated interpretation of seismic images using deep learning methods is challenging because of the limited availability of training data. Few-shot learning is a suitable learning paradigm in such scenarios due to its ability to adapt to a new task with limited supervision (small training budget). Existing few-shot semantic segmentation (FSSS) methods fix the number of target classes. Therefore, they do not support joint training on multiple datasets varying in the number of classes. In the context of the interpretation of seismic facies, fixing the number of target classes inhibits the generalization capability of a model trained on one facies dataset to another, which is likely to have a different number of facies. To address this shortcoming, we propose a few-shot semantic segmentation method for interpreting seismic facies that can adapt to the varying number of facies across datasets, dubbed the AdaSemSeg. In general, the backbone network of FSSS methods is initialized with the statistics learned from the ImageNet dataset for better performance. The lack of such a huge annotated dataset for seismic images motivates using a self-supervised algorithm on seismic datasets to initialize the backbone network. We have trained the AdaSemSeg on three public seismic facies datasets with different numbers of facies and evaluated the proposed method on multiple metrics. The performance of the AdaSemSeg on unseen datasets (not used in training) is better than the prototype-based few-shot method and baselines.
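To make the variable-class setting concrete, the sketch below shows a generic prototype-based few-shot segmentation episode (the baseline family the abstract compares against, not AdaSemSeg's own architecture): per-class prototypes are pooled from a support image's feature map, so the number of classes is a runtime argument rather than a fixed head size. Function names and the cosine-similarity assignment are illustrative assumptions.

```python
import numpy as np

def class_prototypes(features, mask, num_classes):
    """Masked average pooling: one prototype per class present in the mask.

    features: (H, W, D) support-image feature map
    mask:     (H, W) integer label map with values in [0, num_classes)
    Returns a (num_classes, D) array of prototypes; num_classes can differ
    per dataset, which is what makes this formulation adaptive.
    """
    h, w, d = features.shape
    feats = features.reshape(-1, d)
    labels = mask.reshape(-1)
    protos = np.zeros((num_classes, d))
    for c in range(num_classes):
        sel = labels == c
        if sel.any():
            protos[c] = feats[sel].mean(axis=0)
    return protos

def segment_query(query_features, protos):
    """Label each query pixel with its nearest prototype (cosine similarity)."""
    h, w, d = query_features.shape
    q = query_features.reshape(-1, d)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    p = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8)
    sims = q @ p.T                      # (H*W, num_classes)
    return sims.argmax(axis=1).reshape(h, w)
```

Because the class count only appears as an argument, the same episode code runs unchanged on facies datasets with, say, 6 or 9 classes.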
Related papers
- Physically Feasible Semantic Segmentation [58.17907376475596]
State-of-the-art semantic segmentation models are typically optimized in a data-driven fashion, minimizing solely per-pixel or per-segment classification objectives on their training data.
This purely data-driven paradigm often leads to absurd segmentations, especially when the domain of input images is shifted from the one encountered during training.
Our method, Physically Feasible Semantic Segmentation (PhyFea), first extracts explicit constraints that govern spatial class relations from the semantic segmentation training set at hand in an offline data-driven fashion, and then enforces a morphological yet differentiable loss that penalizes violations of these constraints during training.
arXiv Detail & Related papers (2024-08-26T22:39:08Z)
- Cross-Level Distillation and Feature Denoising for Cross-Domain Few-Shot Classification [49.36348058247138]
We tackle the problem of cross-domain few-shot classification by making a small proportion of unlabeled images in the target domain accessible in the training stage.
We meticulously design a cross-level knowledge distillation method, which can strengthen the ability of the model to extract more discriminative features in the target dataset.
Our approach surpasses the previous state-of-the-art method, Dynamic-Distillation, by 5.44% on 1-shot and 1.37% on 5-shot classification tasks.
arXiv Detail & Related papers (2023-11-04T12:28:04Z)
- Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation [26.79776306494929]
We propose a self-training strategy that employs pseudo-labels to induce cooperation among multiple domain-specific classifiers.
We employ feature stylization as an efficient way to generate image views, which form an integral part of self-training.
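The pseudo-label step at the core of self-training can be sketched in a few lines; this is the generic confidence-thresholded formulation, with the threshold value and `ignore_index` convention as illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9, ignore_index=255):
    """Turn per-pixel class probabilities into pseudo-labels.

    probs: (..., C) array of class probabilities per pixel.
    Pixels whose maximum probability falls below `threshold` are marked
    `ignore_index` so they are excluded from the self-training loss.
    """
    conf = probs.max(axis=-1)
    labels = probs.argmax(axis=-1)
    labels[conf < threshold] = ignore_index
    return labels
```

In a cooperative setup, each domain-specific classifier would produce such labels for the others to train on, with low-confidence pixels simply dropped.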
arXiv Detail & Related papers (2022-10-04T13:03:17Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed as Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel level embeddings via cross dataset mixing.
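A pixel-to-prototype contrastive objective can be sketched as an InfoNCE loss that pulls each pixel embedding toward its class prototype and away from the other prototypes; the function name, temperature, and exact InfoNCE form here are generic assumptions, not the paper's precise loss.

```python
import numpy as np

def pixel_prototype_contrastive(pixels, labels, prototypes, tau=0.1):
    """InfoNCE-style loss over pixel embeddings and class prototypes.

    pixels:     (N, D) pixel embeddings
    labels:     (N,) class index of each pixel
    prototypes: (C, D) one prototype per class (possibly pooled across datasets)
    """
    px = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    pr = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = px @ pr.T / tau                       # (N, C) similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

Because prototypes live in a shared embedding space, pixels from different datasets can be contrasted against one common set of class prototypes.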
arXiv Detail & Related papers (2021-06-08T06:13:11Z)
- SML: Semantic Meta-learning for Few-shot Semantic Segmentation [27.773396307292497]
We propose a novel meta-learning framework, Semantic Meta-Learning, which incorporates class-level semantic descriptions in the generated prototypes for this problem.
In addition, we propose to use the well established technique, ridge regression, to not only bring in the class-level semantic information, but also to effectively utilise the information available from multiple images present in the training data for prototype computation.
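The ridge-regression idea has a simple closed form: fit a linear map from support-image features to class targets, so every support pixel (across all images) contributes to every prototype. The sketch below is the textbook formulation under that assumption, not SML's exact prototype computation.

```python
import numpy as np

def ridge_prototypes(X, Y, lam=1.0):
    """Closed-form ridge regression W = (X'X + lam*I)^-1 X'Y.

    X: (N, D) features pooled from all support images
    Y: (N, C) one-hot class targets (could be augmented with
       class-level semantic descriptions)
    Returns W of shape (D, C); its columns act as class prototypes
    informed by every training sample, not a single image.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

The regularizer `lam` keeps the solve well-conditioned when the number of support pixels is small relative to the feature dimension, which is exactly the few-shot regime.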
arXiv Detail & Related papers (2020-09-14T18:26:46Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Weakly-supervised Object Localization for Few-shot Learning and Fine-grained Few-shot Learning [0.5156484100374058]
Few-shot learning aims to learn novel visual categories from very few samples.
We propose a Self-Attention Based Complementary Module (SAC Module) to perform weakly-supervised object localization.
We also produce the activated masks for selecting discriminative deep descriptors for few-shot classification.
arXiv Detail & Related papers (2020-03-02T14:07:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.