A self-adapting super-resolution structures framework for automatic
design of GAN
- URL: http://arxiv.org/abs/2106.06011v1
- Date: Thu, 10 Jun 2021 19:11:29 GMT
- Title: A self-adapting super-resolution structures framework for automatic
design of GAN
- Authors: Yibo Guo, Haidi Wang, Yiming Fan, Shunyao Li, Mingliang Xu
- Abstract summary: We introduce a new super-resolution image reconstruction generative adversarial network framework.
We use a Bayesian optimization method to optimize the hyperparameters of the generator and discriminator.
Our method adopts Bayesian optimization as the optimization policy for the GAN in our model.
- Score: 15.351639834230383
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of deep learning, single-image super-resolution
reconstruction network models are becoming more and more complex, and small changes
in their hyperparameters have an increasingly large impact on model performance. In
existing work, experts have gradually explored sets of optimal model parameters
based on empirical values or brute-force search. In this paper, we introduce a new
super-resolution image reconstruction generative adversarial network framework,
together with a Bayesian optimization method used to optimize the hyperparameters
of the generator and discriminator. The generator is built from self-calibrated
convolutions, and the discriminator is built from convolutional layers. We define
hyperparameters such as the number of network layers and the number of neurons. Our
method adopts Bayesian optimization as the optimization policy for the GAN in our
model. It can not only find the optimal hyperparameter configuration automatically,
but also construct the super-resolution image reconstruction network, reducing the
manual workload. Experiments show that Bayesian optimization finds the optimal
solution earlier than the other two optimization algorithms.
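The search described in the abstract treats architectural choices (numbers of layers and neurons, learning rate, etc.) as hyperparameters and lets Bayesian optimization explore them. The sketch below is a minimal, hypothetical illustration of such a loop using scikit-optimize's gp_minimize; the search space and the placeholder objective (which in practice would train the SR GAN and return, say, negative validation PSNR) are assumptions rather than the authors' code.

```python
# Hypothetical sketch: Bayesian optimization over SR-GAN architecture
# hyperparameters. The objective is a stand-in; a real run would train the
# generator/discriminator with these settings and return a validation score.
from skopt import gp_minimize
from skopt.space import Integer, Real

search_space = [
    Integer(2, 8, name="gen_layers"),      # generator depth (assumed range)
    Integer(1, 5, name="disc_layers"),     # discriminator depth (assumed range)
    Integer(16, 128, name="channels"),     # feature channels per layer
    Real(1e-5, 1e-2, prior="log-uniform", name="lr"),
]

def objective(params):
    gen_layers, disc_layers, channels, lr = params
    # Placeholder: build and briefly train the SR GAN, then return a score
    # to minimize (e.g. -PSNR on a validation set). Faked here as a toy value.
    return -(0.5 * gen_layers + 0.3 * disc_layers + 0.01 * channels) + 100 * lr

result = gp_minimize(objective, search_space, n_calls=30, random_state=0)
print("best hyperparameters:", result.x, "best score:", result.fun)
```

Each call to the objective corresponds to one (possibly truncated) training run, which is why a sample-efficient search policy such as Bayesian optimization matters here.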
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
arXiv Detail & Related papers (2024-03-20T00:41:12Z)
- Explicit Foundation Model Optimization with Self-Attentive Feed-Forward Neural Units [4.807347156077897]
Iterative approximation methods using backpropagation enable the optimization of neural networks, but they remain computationally expensive when used at scale.
This paper presents an efficient alternative for optimizing neural networks that reduces the costs of scaling neural networks and provides high-efficiency optimizations for low-resource applications.
arXiv Detail & Related papers (2023-11-13T17:55:07Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
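To make the unrolling strategy concrete, here is a generic, minimal sketch (not the paper's folded-optimization system): a fixed number of inner gradient-descent steps are executed inside the autograd graph, so the outer loss can be differentiated through every unrolled step with respect to the parameters defining the inner problem.

```python
# Illustrative sketch of algorithm unrolling with automatic differentiation.
# The inner problem and all names are hypothetical.
import torch

def unrolled_inner_solver(theta, steps=10, lr=0.1):
    """Minimize f(x; theta) = 0.5 * ||x - theta||^2 by unrolled gradient descent."""
    x = torch.zeros_like(theta)
    for _ in range(steps):
        grad = x - theta        # analytic gradient of the inner objective
        x = x - lr * grad       # each update stays in the autograd graph
    return x

theta = torch.tensor([1.0, -2.0], requires_grad=True)
x_star = unrolled_inner_solver(theta)          # approximate inner solution
outer_loss = (x_star ** 2).sum()               # outer objective on that solution
outer_loss.backward()                          # differentiates through all steps
print("d(outer loss)/d(theta):", theta.grad)
```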
- DC and SA: Robust and Efficient Hyperparameter Optimization of Multi-subnetwork Deep Learning Models [0.974672460306765]
We present two novel strategies for the optimization of deep learning models with a modular architecture constructed of multiple subnetworks.
Our approaches show an increased optimization efficiency of up to 23.62x, and a final performance boost of up to 3.5% accuracy for classification and 4.4 MSE for regression.
arXiv Detail & Related papers (2022-02-24T00:29:05Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
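The core idea is to estimate hyper-gradients from black-box evaluations alone, without differentiating through training. The toy sketch below illustrates the general zeroth-order estimation scheme (an averaged finite difference along random directions), not HOZOG's exact algorithm; validation_loss is a hypothetical stand-in for a full train-and-evaluate run.

```python
# Hedged sketch of an averaged zeroth-order (finite-difference) hyper-gradient.
import numpy as np

def zeroth_order_grad(loss_fn, hparams, mu=1e-2, n_samples=8, rng=None):
    """Estimate d loss / d hparams by averaging random-direction finite differences."""
    if rng is None:
        rng = np.random.default_rng(0)
    base = loss_fn(hparams)
    grad = np.zeros_like(hparams)
    for _ in range(n_samples):
        u = rng.standard_normal(hparams.shape)           # random direction
        grad += (loss_fn(hparams + mu * u) - base) / mu * u
    return grad / n_samples

def validation_loss(h):        # hypothetical black box: train model, return loss
    return float(np.sum((h - 0.3) ** 2))                 # toy quadratic stand-in

h = np.array([1.0, -0.5])
for _ in range(50):            # plain gradient descent on the hyperparameters
    h -= 0.1 * zeroth_order_grad(validation_loss, h)
print("tuned hyperparameters:", h)
```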
- Better call Surrogates: A hybrid Evolutionary Algorithm for Hyperparameter optimization [18.359749929678635]
We propose a surrogate-assisted evolutionary algorithm (EA) for hyperparameter optimization of machine learning (ML) models.
The proposed STEADE model initially estimates the objective function landscape using Radial Basis Functions, and then transfers the knowledge to an EA technique called Differential Evolution.
We empirically evaluate our model on the hyperparameter optimization problems that form part of the black-box optimization challenge at NeurIPS 2020 and demonstrate the improvement STEADE brings over the vanilla EA.
arXiv Detail & Related papers (2020-12-11T16:19:59Z)
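As a simplified sketch of this surrogate-then-evolve idea (not the STEADE implementation itself), the snippet below fits a radial-basis-function surrogate to a handful of expensive evaluations and then runs SciPy's Differential Evolution on the cheap surrogate; expensive_eval is a hypothetical stand-in for training and scoring an ML model.

```python
# Simplified sketch: RBF surrogate fitted on expensive evaluations, then an
# evolutionary search (Differential Evolution) on the cheap surrogate.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive_eval(x):                      # stand-in for a real training run
    return float((x[0] - 0.2) ** 2 + (x[1] + 0.4) ** 2)

bounds = [(-1.0, 1.0), (-1.0, 1.0)]
rng = np.random.default_rng(0)

# 1) Sample an initial design and evaluate the true (expensive) objective.
X = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(20, 2))
y = np.array([expensive_eval(x) for x in X])

# 2) Fit a radial-basis-function surrogate of the objective landscape.
surrogate = RBFInterpolator(X, y)

# 3) Hand the cheap surrogate to an evolutionary optimizer.
result = differential_evolution(lambda x: float(surrogate(x[None, :])[0]),
                                bounds, seed=0)
print("surrogate optimum:", result.x, "predicted value:", result.fun)
```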
- Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks [14.380314061763508]
We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO).
This algorithm is based on deep neural networks, and its key feature is the iterative selection of training data through a feedback loop between the deep neural networks and any underlying standard optimization algorithm.
arXiv Detail & Related papers (2020-08-13T07:31:07Z)
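A rough sketch of that feedback loop, with hypothetical names and a toy objective in place of the PDE-constrained cost: fit a neural-network surrogate on the points evaluated so far, let a standard optimizer propose minimizers of the surrogate, evaluate the true objective there, and fold the new data back into the training set.

```python
# Rough sketch of an ISMO-style active-learning loop; all names are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

def true_objective(x):                       # expensive PDE-based cost, faked here
    return float(np.sin(3 * x[0]) + x[0] ** 2)

X = np.linspace(-2, 2, 8).reshape(-1, 1)     # initial training set
y = np.array([true_objective(x) for x in X])

for it in range(5):                          # outer active-learning iterations
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X, y)
    # Optimize the surrogate from a few random starts and keep the best point.
    starts = np.random.default_rng(it).uniform(-2, 2, size=(4, 1))
    candidates = [minimize(lambda z: surrogate.predict(z.reshape(1, -1))[0],
                           s, bounds=[(-2, 2)]).x for s in starts]
    best = min(candidates, key=true_objective)
    # Feed the new expensive evaluation back into the training set.
    X = np.vstack([X, best.reshape(1, -1)])
    y = np.append(y, true_objective(best))

print("best point found:", X[np.argmin(y)], "value:", y.min())
```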
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Automatic Setting of DNN Hyper-Parameters by Mixing Bayesian Optimization and Tuning Rules [0.6875312133832078]
We build a new algorithm for evaluating and analyzing the results of the network on the training and validation sets.
We use a set of tuning rules to add new hyper-parameters and/or to reduce the hyper-parameter search space to select a better combination.
arXiv Detail & Related papers (2020-06-03T08:53:48Z)