Clue Me In: Semi-Supervised FGVC with Out-of-Distribution Data
- URL: http://arxiv.org/abs/2112.02825v1
- Date: Mon, 6 Dec 2021 07:22:10 GMT
- Title: Clue Me In: Semi-Supervised FGVC with Out-of-Distribution Data
- Authors: Ruoyi Du, Dongliang Chang, Zhanyu Ma, Yi-Zhe Song, Jun Guo
- Abstract summary: We propose a novel design specifically aimed at making out-of-distribution data work for semi-supervised visual classification.
Our experimental results reveal that (i) the proposed method yields good robustness against out-of-distribution data, and (ii) it can be equipped with prior arts, boosting their performance.
- Score: 44.90231337626545
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite great strides made on fine-grained visual classification (FGVC),
current methods are still heavily reliant on fully-supervised paradigms where
ample expert labels are called for. Semi-supervised learning (SSL) techniques,
acquiring knowledge from unlabeled data, offer a promising way forward and
have shown great promise for coarse-grained problems. However, existing SSL
paradigms mostly assume in-distribution (i.e., category-aligned) unlabeled
data, which hinders their effectiveness when re-purposed for FGVC. In this
paper, we put forward a novel design specifically aimed at making
out-of-distribution data work for semi-supervised FGVC, i.e., to "clue them
in". We work off an important assumption that all fine-grained categories
naturally follow a hierarchical structure (e.g., the phylogenetic tree of
"Aves" that covers all bird species). It follows that, instead of operating on
individual samples, we can instead predict sample relations within this tree
structure as the optimization goal of SSL. Beyond this, we further introduce
two strategies uniquely enabled by these tree structures to achieve
inter-sample consistency regularization and reliable pseudo-relations. Our
experimental results reveal that (i) the proposed method yields good robustness
against out-of-distribution data, and (ii) it can be equipped with prior arts,
boosting their performance thus yielding state-of-the-art results. Code is
available at https://github.com/PRIS-CV/RelMatch.
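The hierarchy assumption above can be illustrated with a toy sketch: if every fine-grained class has a known ancestor path in a taxonomy (e.g., species under family under order under "Aves"), the relation between two samples can be labeled by the depth of their lowest common ancestor, and SSL can optimize predictions of these pairwise relations instead of individual labels. The class names, tree depths, and `relation_level` helper below are illustrative, not the paper's actual RelMatch implementation:

```python
# Hypothetical taxonomy: each fine-grained class mapped to its ancestor
# path (class -> order -> family -> species); names are illustrative only.
HIERARCHY = {
    "house_sparrow": ["Aves", "Passeriformes", "Passeridae", "house_sparrow"],
    "tree_sparrow":  ["Aves", "Passeriformes", "Passeridae", "tree_sparrow"],
    "barn_swallow":  ["Aves", "Passeriformes", "Hirundinidae", "barn_swallow"],
    "mallard":       ["Aves", "Anseriformes", "Anatidae", "mallard"],
}

def relation_level(a: str, b: str) -> int:
    # Depth of the lowest common ancestor: a coarse-to-fine measure of
    # how related two samples are, usable as a pairwise SSL target.
    depth = 0
    for x, y in zip(HIERARCHY[a], HIERARCHY[b]):
        if x != y:
            break
        depth += 1
    return depth

# Two sparrows share a family (depth 3); a sparrow and a swallow share
# only the order (depth 2); a sparrow and a duck share only the class
# "Aves" (depth 1). An OOD bird image still yields meaningful relation
# labels at the coarser levels even when its species is unknown.
```

A pairwise relation target of this kind is what lets out-of-distribution samples contribute supervision: they may fall outside the labeled species set while still occupying a well-defined position in the coarser levels of the tree.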
Related papers
- Fake It Till Make It: Federated Learning with Consensus-Oriented
Generation [52.82176415223988]
We propose federated learning with consensus-oriented generation (FedCOG).
FedCOG consists of two key components at the client side: complementary data generation and knowledge-distillation-based model training.
Experiments on classical and real-world FL datasets show that FedCOG consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-12-10T18:49:59Z) - Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the strengths of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
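The MMD loss mentioned above measures the distance between two feature distributions via kernel mean embeddings. The sketch below is a minimal NumPy illustration of the (biased) squared-MMD estimator with an RBF kernel, not DaC's memory-bank implementation; the bandwidth choice `gamma = 1/d` is an assumption for the toy example:

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(x, y):
    # Biased estimator of squared Maximum Mean Discrepancy:
    # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    gamma = 1.0 / x.shape[1]  # simple bandwidth heuristic (assumed)
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
source_like = rng.normal(size=(64, 8))            # "source-like" features
target_close = rng.normal(size=(64, 8))           # same distribution
target_far = rng.normal(3.0, 1.0, size=(64, 8))   # shifted distribution
```

Minimizing such a loss pulls the target-specific feature distribution toward the source-like one, which is the distribution-matching step the DaC summary describes.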
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Confident Sinkhorn Allocation for Pseudo-Labeling [40.883130133661304]
Semi-supervised learning is a critical tool in reducing machine learning's dependence on labeled data.
This paper theoretically studies the role of uncertainty in pseudo-labeling and proposes Confident Sinkhorn Allocation (CSA).
CSA identifies the best pseudo-label allocation via optimal transport, assigning labels only to samples with high confidence scores.
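The core mechanism can be sketched with plain Sinkhorn-Knopp iterations: alternately normalize an exponentiated score matrix over classes and over samples so that pseudo-label mass is balanced across classes, then keep only high-confidence rows. This is a simplified stand-in for CSA's confidence criterion, not the paper's algorithm; the `eps` and `threshold` values are illustrative:

```python
import numpy as np

def sinkhorn(scores, n_iters=50, eps=0.05):
    # Entropy-regularized assignment: exponentiate scores, then
    # alternately balance mass per class (columns) and renormalize
    # each sample (rows) into a probability vector.
    q = np.exp((scores - scores.max()) / eps)
    for _ in range(n_iters):
        q /= q.sum(axis=0, keepdims=True)  # balance mass across classes
        q /= q.sum(axis=1, keepdims=True)  # each row sums to 1
    return q

def confident_pseudo_labels(scores, threshold=0.9):
    # Keep only allocations whose top probability clears the threshold
    # (a simplified stand-in for CSA's confidence-based selection).
    probs = sinkhorn(scores)
    keep = probs.max(axis=1) >= threshold
    return probs.argmax(axis=1)[keep], np.flatnonzero(keep)
```

The thresholding step reflects the summary's point: transport gives every unlabeled sample an allocation, but only the confident ones become pseudo-labels for training.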
arXiv Detail & Related papers (2022-06-13T02:16:26Z) - CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense the dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z) - Fine-Grained Adversarial Semi-supervised Learning [25.36956660025102]
We exploit Semi-Supervised Learning (SSL) to increase the amount of training data and improve the performance of Fine-Grained Visual Categorization (FGVC).
We demonstrate the effectiveness of the combined use by conducting experiments on six state-of-the-art fine-grained datasets.
arXiv Detail & Related papers (2021-10-12T09:24:22Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - Generalizing Variational Autoencoders with Hierarchical Empirical Bayes [6.273154057349038]
We present Hierarchical Empirical Bayes Autoencoder (HEBAE), a computationally stable framework for probabilistic generative models.
Our key contributions are two-fold. First, we make gains by placing a hierarchical prior over the encoding distribution, enabling us to adaptively balance the trade-off between minimizing the reconstruction loss function and avoiding over-regularization.
arXiv Detail & Related papers (2020-07-20T18:18:39Z) - Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles the naturally non-uniformly distributed data found in real-world scenarios.
While modern models perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.