A Novel Generator with Auxiliary Branch for Improving GAN Performance
- URL: http://arxiv.org/abs/2112.14968v2
- Date: Sat, 3 Feb 2024 00:27:44 GMT
- Title: A Novel Generator with Auxiliary Branch for Improving GAN Performance
- Authors: Seung Park and Yong-Goo Shin
- Abstract summary: This brief introduces a novel generator architecture that produces the image by combining features obtained through two different branches.
The main branch produces the image by passing features through multiple residual blocks, whereas the auxiliary branch conveys the coarse information from earlier layers to later ones.
To prove the superiority of the proposed method, this brief provides extensive experiments using various standard datasets.
- Score: 7.005458308454871
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generator in a generative adversarial network (GAN) learns image
generation in a coarse-to-fine manner, in which earlier layers learn the overall
structure of the image and later ones refine the details. To propagate the
coarse information well, recent works usually build their generators by
stacking up multiple residual blocks. Although the residual block can produce a
high-quality image as well as be trained stably, it often impedes the
information flow in the network. To alleviate this problem, this brief
introduces a novel generator architecture that produces the image by combining
features obtained through two different branches: the main and auxiliary
branches. The main branch produces the image by passing features through
multiple residual blocks, whereas the auxiliary branch conveys the coarse
information from earlier layers to later ones. To combine the
features in the main and auxiliary branches successfully, we also propose a
gated feature fusion module that controls the information flow in those
branches. To prove the superiority of the proposed method, this brief provides
extensive experiments using various standard datasets including CIFAR-10,
CIFAR-100, LSUN, CelebA-HQ, AFHQ, and tiny-ImageNet. Furthermore, we conduct
various ablation studies to demonstrate the generalization ability of the
proposed method. Quantitative evaluations show that the proposed method
achieves strong GAN performance in terms of Inception Score (IS) and Fréchet
Inception Distance (FID). For instance, on the tiny-ImageNet dataset, the
proposed method improves FID from 35.13 to 25.00 and IS from 20.23 to 25.57
(lower FID and higher IS are better).
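As a concrete illustration of the architecture described in the abstract, the following is a minimal PyTorch sketch of the two-branch idea: a main branch of residual blocks, an auxiliary branch that carries coarse early-layer features forward, and a gated fusion module that blends the two. The channel widths, block counts, upsampling scheme, and exact gating formula are illustrative assumptions, not the authors' published architecture.
```python
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Upsampling residual block (assumed structure, as in common GAN generators)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )
        self.skip = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, 1),
        )

    def forward(self, x):
        return self.body(x) + self.skip(x)


class GatedFusion(nn.Module):
    """Gated fusion of main-branch and auxiliary-branch features (illustrative)."""

    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.Sigmoid())

    def forward(self, main_feat, aux_feat):
        g = self.gate(torch.cat([main_feat, aux_feat], dim=1))
        return g * main_feat + (1.0 - g) * aux_feat  # gate decides which branch dominates


class TwoBranchGenerator(nn.Module):
    def __init__(self, z_dim=128, ch=256, n_blocks=3):
        super().__init__()
        self.ch = ch
        self.fc = nn.Linear(z_dim, ch * 4 * 4)
        # Main branch: stacked residual blocks that refine the details.
        self.main = nn.ModuleList([ResBlock(ch, ch) for _ in range(n_blocks)])
        # Auxiliary branch: a cheap path that upsamples and forwards coarse features.
        self.aux = nn.ModuleList([
            nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                          nn.Conv2d(ch, ch, 1))
            for _ in range(n_blocks)
        ])
        self.fuse = nn.ModuleList([GatedFusion(ch) for _ in range(n_blocks)])
        self.to_rgb = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, self.ch, 4, 4)
        main_feat = aux_feat = h
        for res, aux, fuse in zip(self.main, self.aux, self.fuse):
            main_feat = res(main_feat)             # fine-detail path
            aux_feat = aux(aux_feat)               # coarse information carried forward
            main_feat = fuse(main_feat, aux_feat)  # gated feature fusion
        return self.to_rgb(main_feat)


if __name__ == "__main__":
    g = TwoBranchGenerator()
    print(g(torch.randn(4, 128)).shape)  # torch.Size([4, 3, 32, 32])
```
In this sketch, the sigmoid gate interpolates per pixel between the refined main-branch features and the forwarded coarse features, which is one plausible way a gated fusion module could control the information flow between the two branches.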
Related papers
- D$^3$: Scaling Up Deepfake Detection by Learning from Discrepancy [11.239248133240126]
We seek a step toward a universal deepfake detection system with better generalization and robustness.
We propose our Discrepancy Deepfake Detector framework, whose core idea is to learn the universal artifacts from multiple generators.
Our framework achieves a 5.3% accuracy improvement in OOD testing compared to current SOTA methods while maintaining ID performance.
arXiv Detail & Related papers (2024-04-06T10:45:02Z) - Leveraging Representations from Intermediate Encoder-blocks for Synthetic Image Detection [13.840950434728533]
State-of-the-art Synthetic Image Detection (SID) research has led to strong evidence on the advantages of feature extraction from foundation models.
We leverage the image representations extracted by intermediate Transformer blocks of CLIP's image-encoder via a lightweight network.
Our method is compared against the state-of-the-art by evaluating it on 20 test datasets and exhibits an average +10.6% absolute performance improvement.
arXiv Detail & Related papers (2024-02-29T12:18:43Z) - Guided Image Restoration via Simultaneous Feature and Image Guided Fusion [67.30078778732998]
We propose a Simultaneous Feature and Image Guided Fusion (SFIGF) network.
It performs guided fusion at both the feature and image levels, following the guided filter (GF) mechanism.
Since guided fusion is implemented in both the feature and image domains, the proposed SFIGF is expected to faithfully reconstruct both contextual and textural information.
arXiv Detail & Related papers (2023-12-14T12:15:45Z) - Exploring Incompatible Knowledge Transfer in Few-shot Image Generation [107.81232567861117]
Few-shot image generation learns to generate diverse and high-fidelity images from a target domain using a few reference samples.
Existing FSIG methods select, preserve, and transfer prior knowledge from a source generator to learn the target generator.
We propose knowledge truncation, which is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.
arXiv Detail & Related papers (2023-04-15T14:57:15Z) - Latent Multi-Relation Reasoning for GAN-Prior based Image Super-Resolution [61.65012981435095]
LAREN is a graph-based disentanglement framework that constructs a superior disentangled latent space via hierarchical multi-relation reasoning.
We show that LAREN achieves superior large-factor image SR and outperforms the state-of-the-art consistently across multiple benchmarks.
arXiv Detail & Related papers (2022-08-04T19:45:21Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - Effective Shortcut Technique for GAN [6.007303976935779]
Generative adversarial network (GAN)-based image generation techniques design their generators by stacking up multiple residual blocks.
The residual block generally contains a shortcut, i.e., a skip connection, which effectively supports information propagation in the network.
We propose a novel shortcut method, called the gated shortcut, which not only retains the strengths of the residual block but also further boosts GAN performance.
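The gated shortcut just summarized lends itself to a compact sketch: below is a minimal PyTorch residual block whose skip connection is modulated by a learned gate rather than added unconditionally. The specific gating formula is an illustrative assumption, not the exact block published in that paper.
```python
import torch
import torch.nn as nn


class GatedShortcutBlock(nn.Module):
    """Residual block with a learned gate on the skip connection (illustrative)."""

    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
        # Per-pixel gate deciding how much of the shortcut to pass through.
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        g = self.gate(x)
        return self.body(x) + g * x  # gated skip connection


if __name__ == "__main__":
    blk = GatedShortcutBlock(64)
    print(blk(torch.randn(2, 64, 16, 16)).shape)  # torch.Size([2, 64, 16, 16])
```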
arXiv Detail & Related papers (2022-01-27T07:14:45Z) - Generative Convolution Layer for Image Generation [8.680676599607125]
This paper introduces a novel convolution method, called generative convolution (GConv).
GConv first selects useful kernels compatible with the given latent vector, and then linearly combines the selected kernels to make latent-specific kernels.
Using these latent-specific kernels, the proposed method produces latent-specific features that encourage the generator to produce high-quality images.
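A minimal PyTorch sketch of this kernel-mixing idea follows: a bank of candidate kernels is combined with latent-dependent coefficients to form a latent-specific kernel for each sample. The bank size, the softmax mixing rule, and the grouped-convolution trick used to apply a different kernel per sample are all illustrative assumptions, not the published GConv formulation.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GConv2d(nn.Module):
    """Latent-conditioned convolution via a mixed kernel bank (illustrative)."""

    def __init__(self, in_ch, out_ch, z_dim, num_kernels=4, k=3):
        super().__init__()
        # Bank of candidate kernels: (num_kernels, out_ch, in_ch, k, k).
        self.bank = nn.Parameter(torch.randn(num_kernels, out_ch, in_ch, k, k) * 0.02)
        self.coeff = nn.Linear(z_dim, num_kernels)  # latent -> mixing coefficients
        self.pad = k // 2

    def forward(self, x, z):
        b = x.size(0)
        w = torch.softmax(self.coeff(z), dim=1)                  # (B, num_kernels)
        # Latent-specific kernel per sample: (B, out_ch, in_ch, k, k).
        kernel = torch.einsum("bn,noijk->boijk", w, self.bank)
        out_ch = kernel.size(1)
        # Apply a different kernel to each sample via grouped convolution.
        x = x.reshape(1, -1, x.size(2), x.size(3))               # (1, B*in_ch, H, W)
        kernel = kernel.reshape(-1, kernel.size(2), kernel.size(3), kernel.size(4))
        out = F.conv2d(x, kernel, padding=self.pad, groups=b)
        return out.reshape(b, out_ch, out.size(2), out.size(3))


if __name__ == "__main__":
    conv = GConv2d(in_ch=64, out_ch=128, z_dim=128)
    y = conv(torch.randn(4, 64, 16, 16), torch.randn(4, 128))
    print(y.shape)  # torch.Size([4, 128, 16, 16])
```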
arXiv Detail & Related papers (2021-11-30T07:14:12Z) - Single Image Dehazing with An Independent Detail-Recovery Network [117.86146907611054]
We propose a single image dehazing method with an independent Detail Recovery Network (DRN).
The DRN aims to recover the dehazed image details through its local and global branches.
Our method outperforms the state-of-the-art dehazing methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-09-22T02:49:43Z) - Multi-Stage Progressive Image Restoration [167.6852235432918]
We propose a novel synergistic design that can optimally balance the competing goals of preserving spatial details and capturing contextual information.
Our main proposal is a multi-stage architecture, that progressively learns restoration functions for the degraded inputs.
The resulting tightly interlinked multi-stage architecture, named MPRNet, delivers strong performance gains on ten datasets.
arXiv Detail & Related papers (2021-02-04T18:57:07Z) - Remote sensing image fusion based on Bayesian GAN [9.852262451235472]
We build a two-stream generator network with panchromatic (PAN) and multispectral (MS) images as input, which consists of three parts: feature extraction, feature fusion, and image reconstruction.
We leverage a Markov discriminator to enhance the generator's ability to reconstruct the fused image, so that the result retains more details.
Experiments on QuickBird and WorldView datasets show that the model proposed in this paper can effectively fuse PAN and MS images.
arXiv Detail & Related papers (2020-09-20T16:15:51Z)
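The three-part structure of the two-stream generator in the last entry above (feature extraction, feature fusion, and image reconstruction) can be sketched as follows in PyTorch. The layer sizes, band count, and the assumption that the MS input is pre-upsampled to the PAN resolution are illustrative; the Markov (PatchGAN-style) discriminator and the Bayesian components are omitted.
```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))


class TwoStreamFusionGenerator(nn.Module):
    """Two-stream PAN/MS fusion generator sketch (illustrative assumptions)."""

    def __init__(self, ms_bands=4, ch=32):
        super().__init__()
        # 1) Feature extraction: one stream per modality.
        self.pan_enc = nn.Sequential(conv_block(1, ch), conv_block(ch, ch))
        self.ms_enc = nn.Sequential(conv_block(ms_bands, ch), conv_block(ch, ch))
        # 2) Feature fusion: concatenate and mix the two streams.
        self.fusion = conv_block(2 * ch, ch)
        # 3) Image reconstruction: produce the fused multispectral image.
        self.recon = nn.Sequential(conv_block(ch, ch), nn.Conv2d(ch, ms_bands, 3, padding=1))

    def forward(self, pan, ms):
        feats = torch.cat([self.pan_enc(pan), self.ms_enc(ms)], dim=1)
        return self.recon(self.fusion(feats))


if __name__ == "__main__":
    gen = TwoStreamFusionGenerator()
    out = gen(torch.randn(1, 1, 64, 64), torch.randn(1, 4, 64, 64))
    print(out.shape)  # torch.Size([1, 4, 64, 64])
```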