Federated Learning Over Images: Vertical Decompositions and Pre-Trained
Backbones Are Difficult to Beat
- URL: http://arxiv.org/abs/2309.03237v1
- Date: Wed, 6 Sep 2023 02:09:14 GMT
- Title: Federated Learning Over Images: Vertical Decompositions and Pre-Trained
Backbones Are Difficult to Beat
- Authors: Erdong Hu, Yuxin Tang, Anastasios Kyrillidis, Chris Jermaine
- Abstract summary: We evaluate a number of algorithms for learning in a federated environment.
We consider whether learning over data sets that do not have diverse sets of images affects the results.
We find that vertically decomposing a neural network seems to give the best results.
- Score: 17.30751773894676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We carefully evaluate a number of algorithms for learning in a federated
environment, and test their utility for a variety of image classification
tasks. We consider many issues that have not been adequately considered before:
whether learning over data sets that do not have diverse sets of images affects
the results; whether to use a pre-trained feature extraction "backbone"; how to
evaluate learner performance (we argue that classification accuracy is not
enough), among others. Overall, across a wide variety of settings, we find that
vertically decomposing a neural network seems to give the best results, and
outperforms more standard reconciliation-based methods.
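As a hypothetical illustration of the contrast the abstract draws, the sketch below compares FedAvg-style weight reconciliation (average client weights into one global model) with a vertical decomposition (keep each client's sub-model separate and ensemble their predictions). The toy linear "models" and all function names are assumptions for illustration, not the authors' implementation.

```python
# Toy contrast between weight reconciliation (FedAvg-style averaging)
# and a vertical decomposition (independent per-client sub-models whose
# predictions are ensembled). Linear models stand in for networks;
# everything here is an illustrative assumption, not the paper's code.

def fedavg(weight_sets):
    """Reconcile by element-wise averaging of client weight vectors."""
    n = len(weight_sets)
    return [sum(ws[i] for ws in weight_sets) / n
            for i in range(len(weight_sets[0]))]

def predict(weights, x):
    """A toy linear model: dot(weights, x)."""
    return sum(w * xi for w, xi in zip(weights, x))

def vertical_ensemble(weight_sets, x):
    """Vertical decomposition: keep each sub-model separate and
    average their predictions instead of their weights."""
    preds = [predict(ws, x) for ws in weight_sets]
    return sum(preds) / len(preds)

clients = [[1.0, 0.0], [0.0, 1.0]]   # two clients' learned weights
x = [2.0, 4.0]

global_w = fedavg(clients)            # -> [0.5, 0.5]
print(predict(global_w, x))           # reconciled model's prediction: 3.0
print(vertical_ensemble(clients, x))  # ensembled prediction: 3.0
```

For linear models the two coincide exactly, which makes the toy easy to check; the interesting regime is nonlinear networks, where averaging weights and averaging predictions diverge, and the abstract's finding is that the decomposed ensemble tends to win.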
Related papers
- Synergy and Diversity in CLIP: Enhancing Performance Through Adaptive Backbone Ensembling [58.50618448027103]
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
This paper explores the differences across various CLIP-trained vision backbones.
The method achieves a remarkable accuracy increase of up to 39.1% over the best single backbone.
arXiv Detail & Related papers (2024-05-27T12:59:35Z)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images based on cross-domain mix-up and build the pretext task based on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Constrained Deep One-Class Feature Learning For Classifying Imbalanced Medical Images [4.211466076086617]
One-class classification has attracted increasing attention to address the data imbalance problem.
We propose a novel deep learning-based method to learn compact features.
Our method can learn more relevant features associated with the given class, making the majority and minority samples more distinguishable.
arXiv Detail & Related papers (2021-11-20T15:25:24Z)
- LibFewShot: A Comprehensive Library for Few-shot Learning [78.58842209282724]
Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years.
Some recent studies implicitly show that many generic techniques or tricks, such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method.
We propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing seventeen state-of-the-art few-shot learning methods in a unified framework with the same single codebase in PyTorch.
arXiv Detail & Related papers (2021-09-10T14:12:37Z)
- Multi-Label Image Classification with Contrastive Learning [57.47567461616912]
We show that a direct application of contrastive learning can hardly improve performance in multi-label cases.
We propose a novel framework for multi-label classification with contrastive learning in a fully supervised setting.
arXiv Detail & Related papers (2021-07-24T15:00:47Z)
- Boosting few-shot classification with view-learnable contrastive learning [19.801016732390064]
We introduce contrastive loss into few-shot classification for learning latent fine-grained structure in the embedding space.
We develop a learning-to-learn algorithm to automatically generate different views of the same image.
arXiv Detail & Related papers (2021-07-20T03:13:33Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Unifying Remote Sensing Image Retrieval and Classification with Robust Fine-tuning [3.6526118822907594]
We aim at unifying remote sensing image retrieval and classification with a new large-scale training and testing dataset, SF300.
We show that our framework systematically achieves a boost of retrieval and classification performance on nine different datasets compared to an ImageNet pretrained baseline.
arXiv Detail & Related papers (2021-02-26T11:01:30Z)
- Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
- Impact of base dataset design on few-shot image classification [33.31817928613412]
We systematically study the effect of variations in the training data by evaluating deep features trained on different image sets in a few-shot classification setting.
We show how the base dataset design can improve performance in few-shot classification more drastically than replacing a simple baseline by an advanced state of the art algorithm.
arXiv Detail & Related papers (2020-07-17T09:58:50Z)
- Distilling Localization for Self-Supervised Representation Learning [82.79808902674282]
Contrastive learning has revolutionized unsupervised representation learning.
Current contrastive models are ineffective at localizing the foreground object.
We propose a data-driven approach for learning invariance to backgrounds.
arXiv Detail & Related papers (2020-04-14T16:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.