A Deep Dive into Adversarial Robustness in Zero-Shot Learning
- URL: http://arxiv.org/abs/2008.07651v1
- Date: Mon, 17 Aug 2020 22:26:06 GMT
- Title: A Deep Dive into Adversarial Robustness in Zero-Shot Learning
- Authors: Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, Pinar Duygulu
- Abstract summary: We present a study aimed at evaluating the adversarial robustness of Zero-shot Learning (ZSL) and Generalized Zero-shot Learning (GZSL) models.
In addition to creating possibly the first benchmark on adversarial robustness of ZSL models, we also present analyses on important points that require attention for better interpretation of ZSL robustness results.
- Score: 9.62543698736491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) systems have introduced significant advances in various
fields, due to the introduction of highly complex models. Despite their
success, it has been shown multiple times that machine learning models are
prone to imperceptible perturbations that can severely degrade their accuracy.
So far, existing studies have primarily focused on models where supervision
across all classes was available. In contrast, Zero-shot Learning (ZSL) and
Generalized Zero-shot Learning (GZSL) tasks inherently lack supervision across
all classes. In this paper, we present a study aimed at evaluating the
adversarial robustness of ZSL and GZSL models. We leverage the well-established
label embedding model and subject it to a set of established adversarial
attacks and defenses across multiple datasets. In addition to creating possibly
the first benchmark on adversarial robustness of ZSL models, we also present
analyses on important points that require attention for better interpretation
of ZSL robustness results. We hope these points, along with the benchmark, will
help researchers establish a better understanding of what challenges lie ahead
and help guide their work.
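The abstract mentions subjecting a label-embedding model to established adversarial attacks. A minimal sketch of one such attack, the Fast Gradient Sign Method (FGSM), is shown below against a simple linear compatibility scorer (score = image features dotted with a class embedding). The scorer, class embeddings, and analytic gradient here are illustrative assumptions for a toy linear model, not the paper's actual architecture or attack configuration.

```python
import math

def scores(x, class_embs):
    # Compatibility score per class: dot product of the image feature
    # vector x with each class embedding v_y.
    return [sum(xi * vi for xi, vi in zip(x, v)) for v in class_embs]

def softmax(s):
    m = max(s)
    e = [math.exp(v - m) for v in s]
    z = sum(e)
    return [v / z for v in e]

def fgsm(x, class_embs, target, eps):
    """One-step FGSM: x' = x + eps * sign(d loss / d x), where loss is
    cross-entropy against the true class `target`.  For this linear
    scorer the input gradient has the closed form
        grad_i = sum_y (p_y - [y == target]) * v_y[i]."""
    p = softmax(scores(x, class_embs))
    n_classes, dim = len(class_embs), len(x)
    grad = [
        sum((p[y] - (1.0 if y == target else 0.0)) * class_embs[y][i]
            for y in range(n_classes))
        for i in range(dim)
    ]
    sign = [1.0 if g > 0 else (-1.0 if g < 0 else 0.0) for g in grad]
    return [xi + eps * si for xi, si in zip(x, sign)]
```

With two orthogonal class embeddings and a clean input slightly favoring class 0, an epsilon of 0.3 is enough to flip the prediction to class 1, which is the kind of accuracy degradation the benchmark measures.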
Related papers
- A Survey of the Self Supervised Learning Mechanisms for Vision Transformers [5.152455218955949]
The application of self supervised learning (SSL) in vision tasks has gained significant attention.
We develop a comprehensive taxonomy that systematically classifies the SSL techniques.
We discuss the motivations behind SSL, review popular pre-training tasks, and highlight the challenges and advancements in this field.
arXiv Detail & Related papers (2024-08-30T07:38:28Z)
- Fine-Grained Zero-Shot Learning: Advances, Challenges, and Prospects [84.36935309169567]
We present a broad review of recent advances for fine-grained analysis in zero-shot learning (ZSL)
We first provide a taxonomy of existing methods and techniques with a thorough analysis of each category.
Then, we summarize the benchmark, covering publicly available datasets, models, implementations, and some more details as a library.
arXiv Detail & Related papers (2024-01-31T11:51:24Z)
- Benchmark for Uncertainty & Robustness in Self-Supervised Learning [0.0]
Self-Supervised Learning is crucial for real-world applications, especially in data-hungry domains such as healthcare and self-driving cars.
In this paper, we explore variants of SSL methods, including Jigsaw Puzzles, Context, Rotation, Geometric Transformations Prediction for vision, as well as BERT and GPT for language tasks.
Our goal is to create a benchmark with outputs from experiments, providing a starting point for new SSL methods in Reliable Machine Learning.
arXiv Detail & Related papers (2022-12-23T15:46:23Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness [69.39073806630583]
Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields.
We propose a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL).
arXiv Detail & Related papers (2022-07-22T06:30:44Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown if the nature of the representation induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to CSL's higher adversarial susceptibility.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
- How Robust are Discriminatively Trained Zero-Shot Learning Models? [9.62543698736491]
We present novel analyses on the robustness of discriminative ZSL to image corruptions.
We release the first ZSL corruption robustness datasets: SUN-C, CUB-C and AWA2-C.
arXiv Detail & Related papers (2022-01-26T14:41:10Z)
- Self-supervised Learning is More Robust to Dataset Imbalance [65.84339596595383]
We investigate self-supervised learning under dataset imbalance.
Off-the-shelf self-supervised representations are already more robust to class imbalance than supervised representations.
We devise a re-weighted regularization technique that consistently improves the SSL representation quality on imbalanced datasets.
arXiv Detail & Related papers (2021-10-11T06:29:56Z)
- Dynamic VAEs with Generative Replay for Continual Zero-shot Learning [1.90365714903665]
This paper proposes a novel continual zero-shot learning (DVGR-CZSL) model that grows in size with each task and uses generative replay to update itself with previously learned classes to avoid forgetting.
We show our method is superior in sequential task learning with zero-shot learning (ZSL).
arXiv Detail & Related papers (2021-04-26T10:56:43Z)
- Meta-Learned Attribute Self-Gating for Continual Generalized Zero-Shot Learning [82.07273754143547]
We propose a meta-continual zero-shot learning (MCZSL) approach to generalizing a model to categories unseen during training.
By pairing self-gating of attributes and scaled class normalization with meta-learning based training, we are able to outperform state-of-the-art results.
arXiv Detail & Related papers (2021-02-23T18:36:14Z)
- A Review of Generalized Zero-Shot Learning Methods [31.539434340951786]
Generalized zero-shot learning (GZSL) aims to train a model for classifying data samples under the condition that some output classes are unknown during supervised learning.
GZSL leverages semantic information of the seen (source) and unseen (target) classes to bridge the gap between both seen and unseen classes.
arXiv Detail & Related papers (2020-11-17T14:00:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.