A self-training framework for glaucoma grading in OCT B-scans
- URL: http://arxiv.org/abs/2111.11771v1
- Date: Tue, 23 Nov 2021 10:33:55 GMT
- Title: A self-training framework for glaucoma grading in OCT B-scans
- Authors: Gabriel García, Adrián Colomer, Rafael Verdú-Monedero, José Dolz,
Valery Naranjo
- Abstract summary: We present a self-training-based framework for glaucoma grading using OCT B-scans in the presence of domain shift.
A two-step learning methodology relies on pseudo-labels generated during the first step to augment the training dataset on the target domain.
We propose a novel glaucoma-specific backbone which introduces residual and attention modules via skip-connections to refine the embedding features of the latent space.
- Score: 6.382852973055393
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a self-training-based framework for glaucoma
grading using OCT B-scans in the presence of domain shift. In particular, the
proposed two-step learning methodology relies on pseudo-labels generated during
the first step to augment the training dataset on the target domain, which is
then used to train the final target model. This allows domain knowledge to be
transferred from the unlabeled data. Additionally, we propose a novel
glaucoma-specific backbone which introduces residual and attention modules via
skip-connections to refine the embedding features of the latent space. By doing
this, our model improves on the state of the art from both a quantitative and an
interpretability perspective. The reported results demonstrate that the proposed
learning strategy can boost the performance of the model on the target dataset
without incurring additional annotation steps, using only labels from the
source examples. Our model consistently outperforms the baseline by 1-3% across
different metrics and bridges the gap with respect to training the model on the
labeled target data.
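In code terms, the two-step methodology reads as: train on labeled source scans, pseudo-label the unlabeled target scans, keep only confident predictions, and retrain on the union. The sketch below illustrates this pattern in PyTorch. It is a minimal illustration under stated assumptions, not the authors' exact pipeline: a toy MLP and synthetic tensors stand in for the OCT backbone and B-scan datasets, and the confidence threshold tau=0.9 is our choice.

```python
# Minimal self-training sketch for classification under domain shift.
# Illustrative only: a toy MLP and synthetic tensors stand in for the
# paper's OCT backbone and B-scan datasets; tau=0.9 is an assumption.
import torch
import torch.nn.functional as F
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_model(dim=64, n_classes=3):
    return torch.nn.Sequential(torch.nn.Linear(dim, 128), torch.nn.ReLU(),
                               torch.nn.Linear(128, n_classes))

def train(model, loader, epochs=5):
    opt = torch.optim.Adam(model.parameters())
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

@torch.no_grad()
def pseudo_label(model, x, tau=0.9):
    # Keep only target samples whose predicted class confidence exceeds tau.
    model.eval()
    conf, pred = F.softmax(model(x), dim=1).max(dim=1)
    return x[conf >= tau], pred[conf >= tau]

# Toy stand-ins: labeled source data and a shifted, unlabeled target domain.
xs, ys = torch.randn(512, 64), torch.randint(0, 3, (512,))
xt = torch.randn(512, 64) + 0.5

# Step 1: train on the source domain, then pseudo-label the target domain.
source_model = make_model()
train(source_model, DataLoader(TensorDataset(xs, ys), batch_size=32, shuffle=True))
px, py = pseudo_label(source_model, xt)

# Step 2: train the final target model on source labels plus pseudo-labels.
target_model = make_model()
train(target_model, DataLoader(ConcatDataset([TensorDataset(xs, ys),
                                              TensorDataset(px, py)]),
                               batch_size=32, shuffle=True))
```

The confidence gate is the usual safeguard against confirmation bias in self-training: low-confidence target predictions are discarded rather than trained on, so only labels the source model is reasonably sure about enter the augmented training set.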
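The abstract describes the backbone only as introducing "residual and attention modules via skip-connections". One plausible reading, shown below purely as a hedged sketch, is a residual block in which a squeeze-and-excitation style channel-attention gate refines the transformed features before the identity skip path is re-added. `ChannelAttention` and `ResidualAttentionBlock` are hypothetical names for illustration; the paper's actual architecture may differ.

```python
# Sketch of a residual block with channel attention (one plausible reading of
# "residual and attention modules via skip-connections"; not the paper's
# confirmed architecture).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: reweight channels by global context."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w

class ResidualAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.attn = ChannelAttention(channels)

    def forward(self, x):
        # Attention refines the convolved features; the residual sum then
        # re-injects the identity path so fine detail is not lost.
        return torch.relu(x + self.attn(self.body(x)))

# Example: refine a batch of feature maps from a B-scan encoder stage.
feats = torch.randn(4, 32, 56, 56)
print(ResidualAttentionBlock(32)(feats).shape)  # torch.Size([4, 32, 56, 56])
```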
Related papers
- ZeroG: Investigating Cross-dataset Zero-shot Transferability in Graphs [36.749959232724514]
ZeroG is a new framework tailored to enable cross-dataset generalization.
We address the inherent challenges such as feature misalignment, mismatched label spaces, and negative transfer.
We propose a prompt-based subgraph sampling module that enriches the semantic information and structure information of extracted subgraphs.
arXiv Detail & Related papers (2024-02-17T09:52:43Z)
- Distill-SODA: Distilling Self-Supervised Vision Transformer for Source-Free Open-Set Domain Adaptation in Computational Pathology [12.828728138651266]
Development of computational pathology models is essential for reducing manual tissue typing from whole slide images.
We propose a practical setting by addressing the above-mentioned challenges in one fell swoop, i.e., source-free open-set domain adaptation.
Our methodology focuses on adapting a pre-trained source model to an unlabeled target dataset.
arXiv Detail & Related papers (2023-07-10T14:36:51Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
A severe domain shift always exists between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z)
- Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles at the feature level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)