How Robust are Discriminatively Trained Zero-Shot Learning Models?
- URL: http://arxiv.org/abs/2201.10972v2
- Date: Thu, 27 Jan 2022 08:55:00 GMT
- Title: How Robust are Discriminatively Trained Zero-Shot Learning Models?
- Authors: Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, Pinar Duygulu
- Abstract summary: We present novel analyses on the robustness of discriminative ZSL to image corruptions.
We release the first ZSL corruption robustness datasets SUN-C, CUB-C and AWA2-C.
- Score: 9.62543698736491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data shift robustness has been primarily investigated from a fully supervised
perspective, and the robustness of zero-shot learning (ZSL) models has been
largely neglected. In this paper, we present novel analyses on the robustness
of discriminative ZSL to image corruptions. We subject several ZSL models to a
large set of common corruptions and defenses. In order to realize the
corruption analysis, we curate and release the first ZSL corruption robustness
datasets SUN-C, CUB-C and AWA2-C. We analyse our results by taking into account
the dataset characteristics, class imbalance, class transitions between seen
and unseen classes and the discrepancies between ZSL and GZSL performances. Our
results show that discriminative ZSL suffers from corruptions and this trend is
further exacerbated by the severe class imbalance and model weakness inherent
in ZSL methods. We then combine our findings with those based on adversarial
attacks in ZSL, and highlight the different effects of corruptions and
adversarial examples, such as the pseudo-robustness effect present under
adversarial attacks. We also obtain new strong baselines for both models with
the defense methods. Finally, our experiments show that although existing
methods to improve robustness yield some gains for ZSL models, those gains
fall short of a tangible effect.
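The corruption analysis above subjects images to common corruptions at graded severity levels. A minimal sketch of one such corruption (Gaussian noise) is shown below; the severity-to-noise mapping and function shape are illustrative assumptions, not the exact parameters used to build SUN-C, CUB-C and AWA2-C.

```python
import numpy as np

def gaussian_noise(image, severity=1):
    """Apply Gaussian noise at one of five severity levels.

    The sigma values per severity are illustrative assumptions; the
    released corruption datasets use their own fixed parameters.
    """
    sigmas = [0.04, 0.06, 0.08, 0.09, 0.10]
    # Work in [0, 1] float space, add zero-mean noise, then clip back.
    x = image.astype(np.float64) / 255.0
    noisy = x + np.random.normal(scale=sigmas[severity - 1], size=x.shape)
    return (np.clip(noisy, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: corrupt a dummy 32x32 RGB image at severity 3.
img = np.zeros((32, 32, 3), dtype=np.uint8)
corrupted = gaussian_noise(img, severity=3)
```

In an evaluation loop, each test image would be corrupted at every severity level before being fed to the frozen ZSL model, and accuracy is reported per corruption and severity.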
Related papers
- Self-Supervised Anomaly Detection in the Wild: Favor Joint Embeddings Methods [12.277762115388187]
Self-Supervised Learning (SSL) offers a promising approach by learning robust representations from unlabeled data.
This paper provides a comprehensive evaluation of SSL methods for real-world anomaly detection, focusing on sewer infrastructure.
arXiv Detail & Related papers (2024-10-05T21:27:47Z)
- On the Discriminability of Self-Supervised Representation Learning [38.598160031349686]
Self-supervised learning (SSL) has recently achieved significant success in downstream visual tasks.
A notable gap still exists between SSL and supervised learning (SL), especially in complex downstream tasks.
arXiv Detail & Related papers (2024-07-18T14:18:03Z)
- Zero-Shot Learning by Harnessing Adversarial Samples [52.09717785644816]
We propose a novel Zero-Shot Learning (ZSL) approach by Harnessing Adversarial Samples (HAS).
HAS advances ZSL through adversarial training which takes into account three crucial aspects.
We demonstrate the effectiveness of our adversarial samples approach in both ZSL and Generalized Zero-Shot Learning (GZSL) scenarios.
arXiv Detail & Related papers (2023-08-01T06:19:13Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It remains largely unknown whether the nature of the representations induced by the two learning paradigms is similar.
We identify the uniform distribution of data representation over a unit hypersphere in the CSL representation space as the key contributor to this phenomenon.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
- Robust Deep Semi-Supervised Learning: A Brief Introduction [63.09703308309176]
Semi-supervised learning (SSL) aims to improve learning performance by leveraging unlabeled data when labels are insufficient.
SSL with deep models has proven to be successful on standard benchmark tasks.
However, deep SSL models are still vulnerable to various robustness threats in real-world applications.
arXiv Detail & Related papers (2022-02-12T04:16:41Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- End-to-end Generative Zero-shot Learning via Few-shot Learning [76.9964261884635]
State-of-the-art approaches to Zero-Shot Learning (ZSL) train generative nets to synthesize examples conditioned on the provided metadata.
We introduce an end-to-end generative ZSL framework that uses such an approach as a backbone and feeds its synthesized output to a Few-Shot Learning algorithm.
arXiv Detail & Related papers (2021-02-08T17:35:37Z)
- A Deep Dive into Adversarial Robustness in Zero-Shot Learning [9.62543698736491]
We present a study aimed at evaluating the adversarial robustness of Zero-shot Learning (ZSL) and Generalized Zero-shot Learning (GZSL) models.
In addition to creating possibly the first benchmark on adversarial robustness of ZSL models, we also present analyses on important points that require attention for better interpretation of ZSL robustness results.
arXiv Detail & Related papers (2020-08-17T22:26:06Z)
- Semi-supervised learning objectives as log-likelihoods in a generative model of data curation [32.45282187405337]
We formulate SSL objectives as a log-likelihood in a generative model of data curation.
We give a proof-of-principle for Bayesian SSL on toy data.
arXiv Detail & Related papers (2020-08-13T13:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.