Collaborative Learning for Faster StyleGAN Embedding
- URL: http://arxiv.org/abs/2007.01758v1
- Date: Fri, 3 Jul 2020 15:27:37 GMT
- Title: Collaborative Learning for Faster StyleGAN Embedding
- Authors: Shanyan Guan, Ying Tai, Bingbing Ni, Feida Zhu, Feiyue Huang, Xiaokang
Yang
- Abstract summary: We propose a novel collaborative learning framework that consists of an efficient embedding network and an optimization-based iterator.
High-quality latent code can be obtained efficiently with a single forward pass through our embedding network.
- Score: 127.84690280196597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latent code of the recent popular model StyleGAN has learned disentangled
representations thanks to the multi-layer style-based generator. Embedding a
given image back to the latent space of StyleGAN enables a wide range of
interesting semantic image editing applications. Although previous works yield
impressive inversion results with an optimization-based framework, such a
framework suffers from poor efficiency. In this work, we propose a novel
collaborative learning framework that consists of an efficient embedding
network and an optimization-based iterator. On one hand, with the progress of
training, the embedding network gives a reasonable latent code initialization
for the iterator. On the other hand, the updated latent code from the iterator
in turn supervises the embedding network. In the end, high-quality latent code
can be obtained efficiently with a single forward pass through our embedding
network. Extensive experiments demonstrate the effectiveness and efficiency of
our work.
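The collaborative loop described in the abstract can be sketched in miniature. The following is an illustrative toy, not the paper's implementation: the "generator" is a fixed random linear map standing in for StyleGAN, the embedding network is a single linear map, and the iterator is plain gradient descent on the reconstruction loss. The names (`generate`, `iterate`, `E`) and all hyperparameters are assumptions chosen for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_LATENT = 8, 4

# Toy fixed "generator": a linear map standing in for StyleGAN's generator.
G = rng.normal(size=(D_IMG, D_LATENT))

def generate(w):
    return G @ w

# Safe gradient step size for the iterator (loss is ||G w - image||^2).
LR_ITER = 0.9 / (2.0 * np.linalg.norm(G, 2) ** 2)

def iterate(w, image, steps=500):
    """Optimization-based iterator: gradient descent on reconstruction loss."""
    for _ in range(steps):
        grad = 2.0 * G.T @ (G @ w - image)  # d/dw ||G w - image||^2
        w = w - LR_ITER * grad
    return w

# Embedding network: a single linear map, initialized to zero.
E = np.zeros((D_LATENT, D_IMG))
LR_EMBED = 0.5

for _ in range(500):
    w_true = rng.normal(size=D_LATENT)
    image = generate(w_true)
    w_init = E @ image               # embedding net proposes an initialization
    w_star = iterate(w_init, image)  # iterator refines the latent code
    # The refined code supervises the embedding network (normalized LMS step).
    err = (E @ image) - w_star
    E -= LR_EMBED * np.outer(err, image) / (image @ image + 1e-12)

# After training, a single forward pass already yields a good latent code.
test_w = rng.normal(size=D_LATENT)
test_img = generate(test_w)
w_pred = E @ test_img
```

The two-way interaction is the point: early in training the iterator does most of the work from a poor initialization, but as the embedding map absorbs the iterator's refined codes, its single forward pass alone recovers a latent code whose reconstruction is nearly exact.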
Related papers
- Feather: An Elegant Solution to Effective DNN Sparsification [26.621226121259372]
Feather is an efficient sparse training module utilizing the powerful Straight-Through Estimator as its core.
It achieves state-of-the-art Top-1 validation accuracy using the ResNet-50 architecture.
arXiv Detail & Related papers (2023-10-03T21:37:13Z)
- Expediting Building Footprint Extraction from High-resolution Remote Sensing Images via progressive lenient supervision [38.46970858582502]
Building footprint segmentation from remotely sensed images has been hindered by limited model transfer effectiveness.
We propose an efficient framework denoted as BFSeg to enhance learning efficiency and effectiveness.
Specifically, it includes a densely-connected coarse-to-fine feature fusion decoder network that facilitates easy and fast feature fusion across scales.
arXiv Detail & Related papers (2023-07-23T03:55:13Z)
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z) - Layer Collaboration in the Forward-Forward Algorithm [28.856139738073626]
We study layer collaboration in the forward-forward algorithm.
We show that the current version of the forward-forward algorithm is suboptimal when considering information flow in the network.
We propose an improved version that supports layer collaboration to better utilize the network structure.
arXiv Detail & Related papers (2023-05-21T08:12:54Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - CondenseNet V2: Sparse Feature Reactivation for Deep Networks [87.38447745642479]
Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency.
We propose an alternative approach named sparse feature reactivation (SFR), aiming to actively increase the utility of features for reuse.
Our experiments show that the proposed models achieve promising performance on image classification (ImageNet and CIFAR) and object detection (MS COCO) in terms of both theoretical efficiency and practical speed.
arXiv Detail & Related papers (2021-04-09T14:12:43Z) - Keep the Gradients Flowing: Using Gradient Flow to Study Sparse Network
Optimization [16.85167651136133]
We take a broader view of training sparse networks and consider the role of regularization, optimization and architecture choices on sparse models.
We show that gradient flow in sparse networks can be improved by reconsidering aspects of the architecture design and the training regime.
arXiv Detail & Related papers (2021-02-02T18:40:26Z) - EfficientFCN: Holistically-guided Decoding for Semantic Segmentation [49.27021844132522]
State-of-the-art semantic segmentation algorithms are mostly based on dilated Fully Convolutional Networks (dilatedFCN).
We propose the EfficientFCN, whose backbone is a common ImageNet pre-trained network without any dilated convolution.
Such a framework achieves comparable or even better performance than state-of-the-art methods with only 1/3 of the computational cost.
arXiv Detail & Related papers (2020-08-24T14:48:23Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.