An analysis of the transfer learning of convolutional neural networks for artistic images
- URL: http://arxiv.org/abs/2011.02727v2
- Date: Tue, 24 Nov 2020 13:18:23 GMT
- Title: An analysis of the transfer learning of convolutional neural networks for artistic images
- Authors: Nicolas Gonthier and Yann Gousseau and Saïd Ladjal
- Abstract summary: Transfer learning from huge natural image datasets has de facto become the core of art analysis applications.
In this paper, we first use techniques for visualizing the network's internal representations in order to provide clues to what the network has learned on artistic images.
We provide a quantitative analysis of the changes introduced by the learning process, using metrics in both the feature and parameter spaces, as well as metrics computed on the set of maximal activation images.
- Score: 1.9336815376402716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning from huge natural image datasets, fine-tuning of deep
neural networks and the use of the corresponding pre-trained networks have
de facto become the core of art analysis applications. Nevertheless, the
effects of transfer learning are still poorly understood. In this paper, we
first use techniques for visualizing the network's internal representations in
order to provide clues to what the network has learned on artistic images.
Then, we provide a quantitative analysis of the changes introduced by the
learning process, using metrics in both the feature and parameter spaces, as
well as metrics computed on the set of maximal activation images. These
analyses are performed on several variations of the transfer learning
procedure. In particular, we observe that the network can specialize some
pre-trained filters to the new image modality and that higher layers tend to
concentrate classes. Finally, we show that a double fine-tuning involving a
medium-size artistic dataset can improve classification on smaller datasets,
even when the task changes.
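The double fine-tuning finding lends itself to a short illustration. Below is a minimal PyTorch sketch of the two-stage procedure, assuming a ResNet-50 backbone; the loaders, class counts, and hyperparameters are hypothetical placeholders, not the authors' exact configuration.

```python
# Minimal sketch of double fine-tuning: ImageNet-pretrained CNN ->
# fine-tune on a medium-size artistic dataset -> fine-tune again on a
# smaller target dataset (possibly with a different task). Loaders and
# hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, num_classes, epochs=10, lr=1e-4):
    # New classification head for the current task.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
    return model

model = models.resnet50(pretrained=True)   # stage 0: ImageNet weights

# Stage 1: medium-size artistic dataset (hypothetical loader and classes).
# model = fine_tune(model, medium_art_loader, num_classes=25)

# Stage 2: small target dataset; the head is replaced again, so the task
# may change between the two stages.
# model = fine_tune(model, small_target_loader, num_classes=10)
```

The intermediate stage moves the pre-trained filters toward the painting modality before the small target set is seen, which is consistent with the observation above that the network can specialize pre-trained filters to the new modality.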
Related papers
- Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images [12.412060445862842]
We conduct a detailed analysis of the properties of endoscopic images and improve the compatibility of images and neural networks.
First, we introduce the Mask Image Modelling (MIM) module, which inputs partial image information instead of complete image information.
Second, we propose a lightweight neural network to enhance the endoscopic images, to explicitly improve the compatibility between images and neural networks.
arXiv Detail & Related papers (2023-09-14T02:19:38Z)
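A minimal sketch of the partial-input idea from the entry above: random patches are zeroed so the network receives partial rather than complete image information. The patch size and mask ratio are illustrative assumptions, not the paper's values.

```python
# Feed partial image information by zeroing random patches (masked-image
# style). Patch size and mask ratio are illustrative assumptions.
import torch

def mask_patches(images, patch=16, ratio=0.4):
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    # One keep/drop decision per patch, shared across channels.
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return images * mask

x = torch.randn(8, 3, 224, 224)
x_partial = mask_patches(x)   # the network trains on x_partial
```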
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
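For context on the entry above, a minimal sketch of the deep image prior itself: an untrained CNN is fitted to a single corrupted image, with the architecture and early stopping acting as the regularizer. The tiny network and iteration budget are illustrative; the per-image optimization loop is also why the approach is slow, which is the computational challenge the two-stage paradigm targets.

```python
# Deep image prior in miniature: fit an untrained CNN to one corrupted
# image; no training data is used. Architecture and step count are
# illustrative assumptions.
import torch
import torch.nn as nn

noisy = torch.rand(1, 3, 64, 64)    # stand-in for a corrupted image
z = torch.randn(1, 32, 64, 64)      # fixed random input code

net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):            # stopping early acts as the prior
    opt.zero_grad()
    ((net(z) - noisy) ** 2).mean().backward()
    opt.step()

restored = net(z).detach()
```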
- Transferring Knowledge with Attention Distillation for Multi-Domain Image-to-Image Translation [28.272982411879845]
We show how gradient-based attentions can be used as knowledge to be conveyed in a teacher-student paradigm for image-to-image translation tasks.
It is also demonstrated how "pseudo"-attentions can be employed during training when teacher and student networks are trained on different domains.
arXiv Detail & Related papers (2021-08-17T06:47:04Z)
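A minimal sketch of a gradient-based attention map of the kind the entry above conveys from teacher to student (Grad-CAM style); the scalar output summary, normalization, and dummy tensors are illustrative assumptions.

```python
# Gradient-based attention: weight feature channels by the gradient of a
# scalar output summary, then collapse to a spatial map. The dummy
# feature tensor and output head are illustrative assumptions.
import torch
import torch.nn.functional as F

def grad_attention(features, output):
    grads = torch.autograd.grad(output.sum(), features, create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel importance
    attn = F.relu((weights * features).sum(dim=1))   # (B, H, W) map
    return attn / (attn.flatten(1).norm(dim=1)[:, None, None] + 1e-8)

feat = torch.randn(4, 64, 14, 14, requires_grad=True)
out = feat.mean(dim=(1, 2, 3))      # stand-in for a network head

# Distillation matches the student's map to the teacher's detached map:
# loss = F.mse_loss(grad_attention(s_feat, s_out),
#                   grad_attention(t_feat, t_out).detach())
attn = grad_attention(feat, out)
```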
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting network architectures tailored for each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
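A minimal sketch of differentiable architecture adaptation in the spirit of the entry above: each block mixes candidate operations through trainable architecture weights, which are tuned together with the network weights for each domain task. The candidate set and softmax relaxation are illustrative assumptions; the paper's supernet procedure is more involved.

```python
# Differentiable architecture adaptation: a block mixes candidate ops
# via trainable weights, so the architecture adapts by gradient descent
# alongside weight finetuning. The candidate set is illustrative.
import torch
import torch.nn as nn

class AdaptiveBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.cands = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.Identity(),                 # lets the block be skipped
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.cands)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.cands))

y = AdaptiveBlock()(torch.randn(1, 32, 16, 16))
```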
- Learning Visual Representations for Transfer Learning by Suppressing Texture [38.901410057407766]
In self-supervised learning, texture as a low-level cue may provide shortcuts that prevent the network from learning higher level representations.
We propose to use classic methods based on anisotropic diffusion to augment training using images with suppressed texture.
We empirically show that our method achieves state-of-the-art results on object detection and image classification.
arXiv Detail & Related papers (2020-11-03T18:27:03Z)
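A minimal sketch of the texture-suppressing augmentation above, assuming an explicit Perona-Malik anisotropic-diffusion scheme; the step count, conduction coefficient, and wrap-around boundaries are illustrative simplifications.

```python
# Anisotropic (Perona-Malik) diffusion: smooths fine texture while the
# edge-dependent conduction term preserves strong edges. Parameters and
# the periodic boundaries (torch.roll) are illustrative.
import torch

def anisotropic_diffusion(img, steps=10, kappa=0.1, gamma=0.2):
    # img: (B, C, H, W) with values in [0, 1]
    for _ in range(steps):
        diffs = (torch.roll(img, s, dims=d) - img
                 for s, d in ((1, 2), (-1, 2), (1, 3), (-1, 3)))
        # Conduction decays at strong edges, so they survive smoothing.
        img = img + gamma * sum(d * torch.exp(-(d / kappa) ** 2)
                                for d in diffs)
    return img

augmented = anisotropic_diffusion(torch.rand(2, 3, 64, 64))
```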
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
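A minimal sketch of the topological view above: computational stages are nodes of a complete directed acyclic graph, and each edge carries a learnable magnitude, so the wiring is optimized by gradient descent together with the weights. The per-node convolution and sigmoid gating are illustrative assumptions.

```python
# Learnable connectivity: node i aggregates every earlier output through
# a trainable edge weight, making the wiring differentiable. The conv op
# and sigmoid gating are illustrative assumptions.
import torch
import torch.nn as nn

class LearnableWiring(nn.Module):
    def __init__(self, num_nodes=4, channels=32):
        super().__init__()
        self.ops = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1)
             for _ in range(num_nodes)])
        # edge_logits[i, j] gates the connection from node j into node i.
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x):
        outs = [x]
        for i, op in enumerate(self.ops):
            gates = torch.sigmoid(self.edge_logits[i, :len(outs)])
            agg = sum(g * o for g, o in zip(gates, outs))
            outs.append(torch.relu(op(agg)))
        return outs[-1]

y = LearnableWiring()(torch.randn(1, 32, 16, 16))
```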
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves distributional-shift robustness.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
- Visualizing Transfer Learning [0.0]
We provide visualizations of individual neurons of a deep image recognition network during the temporal process of transfer learning.
These visualizations qualitatively demonstrate various novel properties of the transfer learning process.
arXiv Detail & Related papers (2020-07-15T11:34:46Z)
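A minimal sketch of visualizing an individual neuron by activation maximization, a standard technique consistent with the entry above (the paper's exact procedure may differ); the backbone, target layer and channel, and step count are illustrative assumptions.

```python
# Activation maximization: gradient ascent on the input image to excite
# one channel of one layer. Backbone, layer/channel indices, and step
# count are illustrative assumptions.
import torch
from torchvision import models

cnn = models.vgg16(pretrained=True).features.eval()
layer_idx, channel = 10, 42          # hypothetical target unit

x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    act = x
    for i, module in enumerate(cnn):
        act = module(act)
        if i == layer_idx:
            break
    (-act[0, channel].mean()).backward()   # ascend the mean activation
    opt.step()
# x now approximates a maximal activation image for that neuron.
```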
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification [53.735029033681435]
Transfer learning is a powerful methodology for adapting pre-trained deep neural networks on image recognition tasks to new domains.
In this work, we demonstrate that adversarially-trained models transfer better than non-adversarially-trained models.
arXiv Detail & Related papers (2020-07-11T22:48:42Z)
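A minimal sketch of an adversarial training step of the kind whose representations are reported to transfer better, assuming a single-step FGSM attack for brevity; the paper's attack and budget may differ.

```python
# Adversarial training step: perturb the batch with FGSM, then take the
# usual gradient step on the perturbed batch. Epsilon is illustrative.
import torch
import torch.nn.functional as F

def adversarial_step(model, optimizer, images, labels, eps=4 / 255):
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad, = torch.autograd.grad(loss, images)
    adv = (images + eps * grad.sign()).clamp(0, 1).detach()
    optimizer.zero_grad()
    F.cross_entropy(model(adv), labels).backward()
    optimizer.step()
# The robust backbone is then transferred with ordinary fine-tuning.
```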
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
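A minimal sketch of the smoothing curriculum above: intermediate feature maps pass through a Gaussian low-pass filter whose strength is annealed toward zero during training. The kernel size and schedule are illustrative assumptions.

```python
# Curriculum by smoothing: depthwise Gaussian blur on feature maps, with
# sigma annealed so later epochs see more high-frequency detail. Kernel
# size and schedule are illustrative assumptions.
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    r = torch.arange(size).float() - size // 2
    k = torch.exp(-(r ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(k, k)
    return k2d / k2d.sum()

def smooth_features(feat, sigma, size=5):
    if sigma <= 0:
        return feat                   # curriculum finished: no smoothing
    c = feat.shape[1]
    k = gaussian_kernel(sigma, size).to(feat).view(1, 1, size, size)
    return F.conv2d(feat, k.repeat(c, 1, 1, 1), padding=size // 2, groups=c)

# e.g. sigma_t = 1.0 * 0.9 ** epoch inside the training loop
blurred = smooth_features(torch.randn(2, 16, 32, 32), sigma=1.0)
```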
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.