Impact of ImageNet Model Selection on Domain Adaptation
- URL: http://arxiv.org/abs/2002.02559v1
- Date: Thu, 6 Feb 2020 23:58:23 GMT
- Title: Impact of ImageNet Model Selection on Domain Adaptation
- Authors: Youshan Zhang and Brian D. Davison
- Abstract summary: We investigate how different ImageNet models affect transfer accuracy on domain adaptation problems.
A higher-accuracy ImageNet model produces better features and leads to higher accuracy on domain adaptation problems.
We also examine the architecture of each neural network to find the best layer for feature extraction.
- Score: 26.016647703500883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are widely used in image classification problems.
However, little work addresses how features from different deep neural networks
affect the domain adaptation problem. Existing methods often extract deep
features from one ImageNet model, without exploring other neural networks. In
this paper, we investigate how different ImageNet models affect transfer
accuracy on domain adaptation problems. We extract features from sixteen
distinct pre-trained ImageNet models and examine the performance of twelve
benchmarking methods when using the features. Extensive experimental results
show that a higher-accuracy ImageNet model produces better features and leads
to higher accuracy on domain adaptation problems (with a correlation
coefficient of up to 0.95). We also examine the architecture of each neural
network to find the best layer for feature extraction. Together, performance
from our features exceeds that of the state-of-the-art on three benchmark
datasets.
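The reported relationship between backbone quality and transfer quality can be illustrated with a quick Pearson-correlation computation. The accuracy numbers below are made-up illustrative values, not results from the paper; only the analysis step (correlating ImageNet top-1 accuracy against domain adaptation accuracy) mirrors what the abstract describes.

```python
import numpy as np

# Hypothetical ImageNet top-1 accuracies (%) for four pre-trained
# backbones, paired with illustrative transfer accuracies on a domain
# adaptation benchmark. These numbers are NOT from the paper.
imagenet_top1 = np.array([69.8, 76.1, 77.4, 80.1])
transfer_acc = np.array([82.0, 88.5, 89.3, 92.7])

# Pearson correlation coefficient between the two accuracy series;
# the paper reports values of up to 0.95 on its real measurements.
r = np.corrcoef(imagenet_top1, transfer_acc)[0, 1]
print(f"correlation: {r:.3f}")
```

With a strongly monotone pairing like this toy one, the coefficient lands close to 1, which is the shape of the trend the abstract claims.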
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Adaptive Convolutional Neural Network for Image Super-resolution [43.06377001247278]
We propose an adaptive convolutional neural network for image super-resolution (ADSRNet).
The upper network enhances relations among context information, salient information from a kernel mapping, and shallow and deep layers.
The lower network utilizes a symmetric architecture to enhance relations of different layers to mine more structural information.
arXiv Detail & Related papers (2024-02-24T03:44:06Z) - ImageNet-E: Benchmarking Neural Network Robustness via Attribute Editing [45.14977000707886]
Higher accuracy on ImageNet usually leads to better robustness against different corruptions.
We create a toolkit for object editing with controls of backgrounds, sizes, positions, and directions.
We evaluate the performance of current deep learning models, including both convolutional neural networks and vision transformers.
arXiv Detail & Related papers (2023-03-30T02:02:32Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
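The distance-correlation statistic that DeepDC builds on can be sketched in NumPy. This is a generic implementation of sample distance correlation on 1-D vectors under the standard definition, not the paper's FR-IQA model, which applies the statistic to deep feature maps:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation between two 1-D vectors.

    Generic textbook definition: pairwise distance matrices are
    double-centered, then combined into distance covariance and
    distance variances.
    """
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    # Pairwise absolute-distance matrices.
    a = np.abs(x - x.T)
    b = np.abs(y - y.T)
    # Double-center each matrix (subtract row/column means, add grand mean).
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    # Sample distance covariance can be slightly negative numerically;
    # clamp before taking the square root.
    dcov2 = max((A * B).mean(), 0.0)
    dvar_x = (A * A).mean()
    dvar_y = (B * B).mean()
    denom = np.sqrt(dvar_x * dvar_y)
    return 0.0 if denom == 0 else np.sqrt(dcov2 / denom)

# Identical signals give a distance correlation of (numerically) 1.
print(distance_correlation([1, 2, 3, 4], [1, 2, 3, 4]))
```

Unlike Pearson correlation, this statistic is zero only for independent samples, which is what makes it attractive as a similarity measure between reference and distorted feature representations.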
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - Multilayer deep feature extraction for visual texture recognition [0.0]
This paper is focused on improving the accuracy of convolutional neural networks in texture classification.
This is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating those features using Fisher vectors.
We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification.
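The multilayer-aggregation idea can be sketched as follows; note that channel-wise mean/std pooling is swapped in here for the paper's Fisher-vector encoding (which would require fitting a GMM over local descriptors), and all shapes and names are illustrative:

```python
import numpy as np

def aggregate_layers(feature_maps):
    """Concatenate per-layer pooled descriptors into one feature vector.

    Simplified stand-in for the paper's pipeline: each layer's
    (channels, H, W) activation map is summarized by channel-wise
    mean and standard deviation, and the per-layer summaries are
    concatenated into a single descriptor.
    """
    parts = []
    for fmap in feature_maps:  # each fmap: (channels, H, W)
        flat = fmap.reshape(fmap.shape[0], -1)
        parts.append(flat.mean(axis=1))  # channel-wise mean
        parts.append(flat.std(axis=1))   # channel-wise std
    return np.concatenate(parts)

# Two hypothetical layer outputs with 8 and 16 channels.
rng = np.random.default_rng(0)
desc = aggregate_layers([rng.normal(size=(8, 14, 14)),
                         rng.normal(size=(16, 7, 7))])
print(desc.shape)  # (48,) = (8 mean + 8 std) + (16 mean + 16 std)
```

The resulting fixed-length descriptor can then be fed to any standard classifier, which is the role the Fisher-vector encoding plays in the paper.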
arXiv Detail & Related papers (2022-08-22T03:53:43Z) - FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z) - Core Risk Minimization using Salient ImageNet [53.616101711801484]
We introduce the Salient ImageNet dataset with more than 1 million soft masks localizing core and spurious features for all 1000 ImageNet classes.
Using this dataset, we first evaluate the reliance of several ImageNet pretrained models (42 total) on spurious features.
Next, we introduce a new learning paradigm called Core Risk Minimization (CoRM) whose objective ensures that the model predicts a class using its core features.
arXiv Detail & Related papers (2022-03-28T01:53:34Z) - Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification [46.885260723836865]
Deep convolutional neural networks (CNNs) generally improve when fueled with high-resolution images.
Inspired by the fact that not all regions in an image are task-relevant, we propose a novel framework that performs efficient image classification.
Our framework is general and flexible as it is compatible with most of the state-of-the-art light-weighted CNNs.
arXiv Detail & Related papers (2020-10-11T17:55:06Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z) - Improving the Resolution of CNN Feature Maps Efficiently with Multisampling [8.655380837944188]
One version of our method, which we call subsampling, significantly improves the accuracy of state-of-the-art architectures such as DenseNet and ResNet without any additional parameters.
We glean possible insight into the nature of data augmentations and demonstrate experimentally that coarse feature maps are bottlenecking the performance of neural networks in image classification.
arXiv Detail & Related papers (2018-05-28T04:29:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.