Fine-Grained Adversarial Semi-supervised Learning
- URL: http://arxiv.org/abs/2110.05848v1
- Date: Tue, 12 Oct 2021 09:24:22 GMT
- Title: Fine-Grained Adversarial Semi-supervised Learning
- Authors: Daniele Mugnai, Federico Pernici, Francesco Turchini, Alberto Del
Bimbo
- Abstract summary: We exploit Semi-Supervised Learning (SSL) to increase the amount of training data to improve the performance of Fine-Grained Visual Categorization (FGVC).
We demonstrate the effectiveness of this combination through experiments on six state-of-the-art fine-grained datasets.
- Score: 25.36956660025102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we exploit Semi-Supervised Learning (SSL) to increase the
amount of training data to improve the performance of Fine-Grained Visual
Categorization (FGVC). This problem has received little attention in the past,
despite the prohibitive annotation costs that FGVC entails. Our approach
leverages unlabeled data with an adversarial optimization strategy in which the
internal features representation is obtained with a second-order pooling model.
This combination allows us to back-propagate the information of the parts,
represented by second-order pooling, onto unlabeled data in an adversarial
training setting. We demonstrate the effectiveness of this combination by
conducting experiments on six state-of-the-art fine-grained datasets: FGVC
Aircraft, Stanford Cars, CUB-200-2011, Oxford Flowers, Stanford Dogs, and the
recent Semi-Supervised iNaturalist-Aves. Experimental results clearly show
that our proposed method outperforms the only previous approach that examined
this problem, and it also achieves higher classification accuracy than the
supervised learning methods against which we compared.
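The second-order pooling mentioned above can be illustrated with a minimal sketch: each spatial location of a CNN feature map yields a C-dimensional descriptor, and the pooled representation is the averaged outer product of these descriptors, typically followed by signed square-root and L2 normalization. This is a generic NumPy illustration of second-order (bilinear) pooling, not the authors' exact implementation:

```python
import numpy as np

def second_order_pooling(features):
    """Covariance-style pooling: average the outer products of per-location
    descriptors. `features` has shape (H*W, C) -- one C-dimensional
    descriptor per spatial location of a CNN feature map."""
    n, c = features.shape
    pooled = features.T @ features / n          # (C, C) second-order statistic
    # Signed square-root and L2 normalisation, commonly paired with
    # bilinear/second-order pooling.
    pooled = np.sign(pooled) * np.sqrt(np.abs(pooled))
    vec = pooled.reshape(-1)
    return vec / (np.linalg.norm(vec) + 1e-12)

# Toy feature map: 4x4 spatial grid, 8 channels.
rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))
rep = second_order_pooling(feat)
print(rep.shape)  # (64,)
```

The resulting C x C representation captures pairwise channel interactions, which is why it is often used as a part-aware descriptor in fine-grained recognition.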
Related papers
- A Unified Contrastive Loss for Self-Training [3.3454373538792552]
Self-training methods have proven to be effective in exploiting abundant unlabeled data in semi-supervised learning.
We propose a general framework to enhance self-training methods, which replaces all instances of CE losses with a unique contrastive loss.
Our framework results in significant performance improvements across three different datasets with a limited number of labeled data.
arXiv Detail & Related papers (2024-09-11T14:22:41Z)
- Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
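Self-training methods of this kind typically alternate between training on labeled data and pseudo-labeling confident predictions on unlabeled data. The following is a hypothetical NumPy sketch of one generic confidence-thresholded pseudo-labeling step, not the specific IST procedure; the `threshold` value is an illustrative choice:

```python
import numpy as np

def pseudo_label_round(probs, threshold=0.9):
    """One generic self-training step: keep unlabeled examples whose top
    predicted class probability exceeds `threshold`, and return their
    indices and hard pseudo-labels for the next training round."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs[keep].argmax(axis=1)

# Toy model outputs for 4 unlabeled examples over 3 classes.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.92, 0.03],
                  [0.30, 0.30, 0.40]])
idx, labels = pseudo_label_round(probs)
print(idx, labels)  # [0 2] [0 1]
```

Only the two confident examples survive the threshold; the low-confidence ones are held back for a later round once the model improves.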
arXiv Detail & Related papers (2024-04-14T05:02:00Z)
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- OTMatch: Improving Semi-Supervised Learning with Optimal Transport [2.4355694259330467]
We present a new approach called OTMatch, which leverages semantic relationships among classes by employing an optimal transport loss function to match distributions.
Empirical results show that our method improves over the baseline, demonstrating the effectiveness and superiority of our approach in harnessing semantic relationships to enhance learning performance in a semi-supervised setting.
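Optimal transport losses of the kind OTMatch employs are usually computed with entropy-regularized (Sinkhorn) iterations. Below is an illustrative NumPy sketch of a generic Sinkhorn solver, not the paper's actual loss; `eps` and `n_iters` are assumed hyperparameters:

```python
import numpy as np

def sinkhorn_loss(cost, a, b, eps=0.1, n_iters=200):
    """Entropy-regularised optimal transport between marginals a and b.
    Returns the transport cost and the transport plan."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # scale columns to match marginal b
        u = a / (K @ v)              # scale rows to match marginal a
    plan = u[:, None] * K * v[None, :]
    return float((plan * cost).sum()), plan

# Toy example: three classes, zero cost for matching a class to itself.
n = 3
a = b = np.full(n, 1.0 / n)
cost = 1.0 - np.eye(n)
loss, plan = sinkhorn_loss(cost, a, b)
print(round(loss, 4))
```

With zero cost on the diagonal, nearly all mass is transported class-to-class and the loss approaches zero; a loss of this form can be used to softly match predicted and target class distributions.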
arXiv Detail & Related papers (2023-10-26T15:01:54Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Deep Active Ensemble Sampling For Image Classification [8.31483061185317]
Active learning frameworks aim to reduce the cost of data annotation by actively requesting the labeling for the most informative data points.
Proposed approaches include uncertainty-based techniques, geometric methods, and implicit combinations of the two.
We present an innovative integration of recent progress in both uncertainty-based and geometric frameworks to enable an efficient exploration/exploitation trade-off in sample selection strategy.
Our framework provides two advantages: (1) accurate posterior estimation, and (2) a tunable trade-off between computational overhead and accuracy.
arXiv Detail & Related papers (2022-10-11T20:20:20Z)
- Explored An Effective Methodology for Fine-Grained Snake Recognition [8.908667065576632]
We design a strong multimodal backbone to utilize various meta-information to assist in fine-grained identification.
In order to take full advantage of unlabeled datasets, we use self-supervised learning and supervised learning joint training.
Our method achieves macro F1 scores of 92.7% and 89.4% on the private and public datasets, respectively, ranking first among participants on the private leaderboard.
arXiv Detail & Related papers (2022-07-24T02:19:15Z)
- Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on widely-used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with accuracy of 89.57% on RAF-DB.
arXiv Detail & Related papers (2022-05-28T07:47:53Z)
- Adversarial Dual-Student with Differentiable Spatial Warping for Semi-Supervised Semantic Segmentation [70.2166826794421]
We propose a differentiable geometric warping to conduct unsupervised data augmentation.
We also propose a novel adversarial dual-student framework to improve the Mean-Teacher.
Our solution significantly improves the performance and state-of-the-art results are achieved on both datasets.
arXiv Detail & Related papers (2022-03-05T17:36:17Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.