ScaleNet: An Unsupervised Representation Learning Method for Limited
Information
- URL: http://arxiv.org/abs/2310.02386v1
- Date: Tue, 3 Oct 2023 19:13:43 GMT
- Title: ScaleNet: An Unsupervised Representation Learning Method for Limited
Information
- Authors: Huili Huang, M. Mahdi Roozbahani
- Abstract summary: A simple and efficient unsupervised representation learning method named ScaleNet is proposed.
Specific image features, such as Harris corner information, play a critical role in the efficiency of the rotation-prediction task.
The transferred parameters from a ScaleNet model with limited data improve ImageNet classification by about 6% compared to the RotNet model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although large-scale labeled data are essential for deep convolutional neural
networks (ConvNets) to learn high-level semantic visual representations, it is
time-consuming and impractical to collect and annotate large-scale datasets. A
simple and efficient unsupervised representation learning method named ScaleNet
based on multi-scale images is proposed in this study to enhance the
performance of ConvNets when limited information is available. The input images
are first resized to a smaller size and fed to the ConvNet to recognize the
rotation degree. Next, the ConvNet learns the rotation-prediction task for the
original size images based on the parameters transferred from the previous
model. The CIFAR-10 and ImageNet datasets are examined on different
architectures such as AlexNet and ResNet50 in this study. The current study
demonstrates that specific image features, such as Harris corner information,
play a critical role in the efficiency of the rotation-prediction task. The
ScaleNet outperforms the RotNet by ~7% on the limited CIFAR-10 dataset. The
transferred parameters from a ScaleNet model with limited data improve
ImageNet classification by about 6% compared to the RotNet model. This
study shows the capability of the ScaleNet method to improve other cutting-edge
models such as SimCLR by learning effective features for classification tasks.
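The two-stage procedure described in the abstract can be sketched in a few lines. The following is a minimal illustration of the ScaleNet idea, not the authors' released code: the backbone, image sizes, optimizer settings, and the `cifar_loader` name are assumptions made for the example.

```python
# A minimal sketch of the two-stage ScaleNet idea (an assumption-laden
# illustration, not the authors' code). Stage 1 trains a ConvNet to predict
# image rotation (0/90/180/270 degrees) on downscaled inputs; Stage 2
# continues the same rotation-prediction task on original-size images,
# starting from the Stage 1 parameters.
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF
from torchvision.models import resnet50

def rotation_batch(images):
    """Build the self-supervised task: 4 rotated copies per image,
    labeled by rotation index (0, 1, 2, 3)."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def train_rotation(model, loader, size, epochs=1, lr=0.1):
    """Train the 4-way rotation-prediction task at a given input size."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, _ in loader:  # dataset labels are ignored (unsupervised)
            images = TF.resize(images, [size, size], antialias=True)
            x, y = rotation_batch(images)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

model = resnet50(num_classes=4)  # 4 outputs: one per rotation class

# Stage 1: rotation prediction on downscaled images (e.g., 16x16 for CIFAR-10).
# model = train_rotation(model, cifar_loader, size=16)

# Stage 2: keep the Stage 1 parameters and repeat at the original size (32x32).
# model = train_rotation(model, cifar_loader, size=32)
```

The abstract's observation about Harris corner information could be probed with a small helper; the block size, aperture, and k below are standard OpenCV choices, used here as assumptions:

```python
import cv2
import numpy as np

def corner_score(gray_uint8):
    # Mean Harris corner response as a crude proxy for how much
    # "corner information" an image contains.
    response = cv2.cornerHarris(np.float32(gray_uint8), blockSize=2, ksize=3, k=0.04)
    return float(response.mean())
```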
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Learning to Generate Parameters of ConvNets for Unseen Image Data [36.68392191824203]
ConvNets depend heavily on large amounts of image data and resort to an iterative optimization algorithm to learn network parameters.
We propose a new training paradigm and formulate the parameter learning of ConvNets into a prediction task.
We show that our proposed method is effective for unseen image datasets in two kinds of settings.
arXiv Detail & Related papers (2023-10-18T10:26:18Z)
- Impact of Scaled Image on Robustness of Deep Neural Networks [0.0]
Scaling raw images creates out-of-distribution data, which makes scaling a possible adversarial attack that can fool the networks.
In this work, we propose a scaling-distortion dataset, ImageNet-CS, built by scaling a subset of the ImageNet Challenge dataset by different multiples; a minimal sketch of this procedure appears after this list.
arXiv Detail & Related papers (2022-09-02T08:06:58Z)
- Efficient deep learning models for land cover image classification [0.29748898344267777]
This work experiments with the BigEarthNet dataset for land use land cover (LULC) image classification.
We benchmark different state-of-the-art models, including Convolutional Neural Networks, Multi-Layer Perceptrons, Visual Transformers, EfficientNets, and Wide Residual Networks (WRN).
Our proposed lightweight model has an order of magnitude fewer trainable parameters, achieves a 4.5% higher average F-score across all 19 LULC classes, and trains twice as fast as the state-of-the-art ResNet50 model we use as a baseline.
arXiv Detail & Related papers (2021-11-18T00:03:14Z)
- Network Augmentation for Tiny Deep Learning [73.57192520534585]
We introduce Network Augmentation (NetAug), a new training method for improving the performance of tiny neural networks.
We demonstrate the effectiveness of NetAug on image classification and object detection.
arXiv Detail & Related papers (2021-10-17T18:48:41Z)
- Learnable Expansion-and-Compression Network for Few-shot Class-Incremental Learning [87.94561000910707]
We propose a learnable expansion-and-compression network (LEC-Net) to solve catastrophic forgetting and model over-fitting problems.
LEC-Net enlarges the representation capacity of features, alleviating feature drift of the old network from the perspective of model regularization.
Experiments on the CUB/CIFAR-100 datasets show that LEC-Net improves the baseline by 57% while outperforming the state-of-the-art by 56%.
arXiv Detail & Related papers (2021-04-06T04:34:21Z)
- Revisiting ResNets: Improved Training and Scaling Strategies [54.0162571976267]
Training and scaling strategies may matter more than architectural changes, and the resulting ResNets match recent state-of-the-art models.
We show that the best performing scaling strategy depends on the training regime.
We design a family of ResNet architectures, ResNet-RS, which are 1.7x - 2.7x faster than EfficientNets on TPUs, while achieving similar accuracies on ImageNet.
arXiv Detail & Related papers (2021-03-13T00:18:19Z)
- Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing all inputs through the same path, DG-Net aggregates features dynamically at each node, which gives the network greater representation ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z)
- Multi-task pre-training of deep neural networks for digital pathology [8.74883469030132]
We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images.
We show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance.
arXiv Detail & Related papers (2020-05-05T08:50:17Z)
- DRU-net: An Efficient Deep Convolutional Neural Network for Medical Image Segmentation [2.3574651879602215]
Residual network (ResNet) and densely connected network (DenseNet) have significantly improved the training efficiency and performance of deep convolutional neural networks (DCNNs).
We propose an efficient network architecture by considering advantages of both networks.
arXiv Detail & Related papers (2020-04-28T12:16:24Z)
- Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z)
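For the ImageNet-CS entry above, a minimal sketch of the scaling-distortion procedure might look as follows; the multiples, output size, and interpolation are assumptions rather than the paper's exact protocol.

```python
# Hedged sketch of building scaled (out-of-distribution) image variants in the
# spirit of ImageNet-CS: rescale by several multiples, then resize back to a
# fixed input size. Multiples and sizes below are illustrative assumptions.
from PIL import Image

def scaled_variants(path, multiples=(0.5, 2, 4), out_size=(224, 224)):
    img = Image.open(path).convert("RGB")
    variants = []
    for m in multiples:
        w, h = img.size
        scaled = img.resize((max(1, int(w * m)), max(1, int(h * m))))
        variants.append(scaled.resize(out_size))  # back to the network input size
    return variants
```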
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.