Multi-objective and categorical global optimization of photonic
structures based on ResNet generative neural networks
- URL: http://arxiv.org/abs/2007.12551v1
- Date: Mon, 20 Jul 2020 06:50:53 GMT
- Title: Multi-objective and categorical global optimization of photonic
structures based on ResNet generative neural networks
- Authors: Jiaqi Jiang and Jonathan A. Fan
- Abstract summary: A residual network scheme enables GLOnets to evolve from a deep architecture to a shallow network that generates a narrow distribution of globally optimal devices.
We show that GLOnets can find the global optimum with orders of magnitude faster speeds compared to conventional algorithms.
Results indicate that advanced concepts in deep learning can push the capabilities of inverse design algorithms for photonics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that deep generative neural networks, based on global topology
optimization networks (GLOnets), can be configured to perform the
multi-objective and categorical global optimization of photonic devices. A
residual network scheme enables GLOnets to evolve from a deep architecture,
which is required to properly search the full design space early in the
optimization process, to a shallow network that generates a narrow distribution
of globally optimal devices. As a proof-of-concept demonstration, we adapt our
method to design thin film stacks consisting of multiple material types.
Benchmarks with known globally-optimized anti-reflection structures indicate
that GLOnets can find the global optimum with orders of magnitude faster speeds
compared to conventional algorithms. We also demonstrate the utility of our
method in complex design tasks with its application to incandescent light
filters. These results indicate that advanced concepts in deep learning can
push the capabilities of inverse design algorithms for photonics.
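As a rough illustration of the approach described in the abstract, here is a minimal GLOnet-style sketch in Python/PyTorch. It is not the authors' implementation: the figure of merit, network sizes, batch size, and the temperature sigma are placeholder assumptions, the real method backpropagates through an electromagnetic solver, and the categorical material-type selection is omitted.

```python
import torch

# Hypothetical stand-in for a differentiable figure of merit; the paper
# instead differentiates through an electromagnetic (transfer-matrix) solver.
def figure_of_merit(x):
    return torch.cos(3.0 * x).sum(dim=1) - 0.1 * x.pow(2).sum(dim=1)

class ResBlock(torch.nn.Module):
    def __init__(self, width):
        super().__init__()
        self.lin = torch.nn.Linear(width, width)

    def forward(self, h):
        # Identity shortcut: if the learned branch decays toward zero,
        # the effective network becomes shallower, narrowing the
        # distribution of generated designs.
        return h + torch.relu(self.lin(h))

class Generator(torch.nn.Module):
    def __init__(self, latent_dim=16, width=64, depth=8, n_layers=6):
        super().__init__()
        self.inp = torch.nn.Linear(latent_dim, width)
        self.blocks = torch.nn.Sequential(*(ResBlock(width) for _ in range(depth)))
        self.out = torch.nn.Linear(width, n_layers)

    def forward(self, z):
        return self.out(self.blocks(self.inp(z)))

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
sigma = 0.5  # temperature; smaller values weight the best samples more heavily

for step in range(1000):
    z = torch.randn(128, 16)
    designs = gen(z)  # batch of candidate layer-thickness vectors
    # Exponentially reweighted objective: gradients are dominated by the
    # best-performing samples, pushing the generator toward the optimum.
    loss = -torch.exp(figure_of_merit(designs) / sigma).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The exponential reweighting is the key design choice: as training proceeds, gradients come almost entirely from the best candidates, which concentrates the generated distribution around globally optimal devices.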
Related papers
- Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can easily change when each network is trained with better, architecture-aware hyperparameters.
arXiv Detail & Related papers (2024-02-27T11:52:49Z)
- Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
arXiv Detail & Related papers (2023-10-10T00:21:10Z)
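As a toy illustration of the neighbor-only communication pattern in the entry above (not the paper's multi-level, single-timescale algorithm), the sketch below performs one gossip-plus-local-gradient step on a ring of agents; the mixing weights, topology, and quadratic objective are placeholder assumptions.

```python
import numpy as np

def gossip_step(x, step_size, grads):
    """One gossip round on a ring: each agent averages only with its
    immediate neighbors, then takes a local gradient step."""
    n = len(x)
    mixed = np.empty_like(x)
    for i in range(n):
        left, right = x[(i - 1) % n], x[(i + 1) % n]
        # Doubly stochastic mixing weights for a ring topology.
        mixed[i] = 0.5 * x[i] + 0.25 * left + 0.25 * right
    return mixed - step_size * grads

agents = np.random.randn(8, 3)   # 8 agents, each holding a 3-dim iterate
grads = 2 * agents               # per-agent gradient of ||x||^2
agents = gossip_step(agents, 0.1, grads)
```

Repeated mixing drives the agents toward consensus while the local gradient steps drive the shared iterate toward a minimizer, without any agent communicating beyond its immediate neighbors.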
- Sensitivity-Aware Mixed-Precision Quantization and Width Optimization of Deep Neural Networks Through Cluster-Based Tree-Structured Parzen Estimation [4.748931281307333]
We introduce an innovative search mechanism for automatically selecting the best bit-width and layer-width for individual neural network layers.
This leads to a marked enhancement in deep neural network efficiency.
arXiv Detail & Related papers (2023-08-12T00:16:51Z)
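The kind of search described in the entry above can be approximated with off-the-shelf TPE, for example Optuna's default sampler. The sketch below searches per-layer bit-widths and widths against a hypothetical proxy score; note this is plain TPE, not the paper's cluster-based variant, and the proxy and layer count are placeholders.

```python
import optuna

# Hypothetical proxy for quantized-model quality; a real search would
# train and evaluate the network at the sampled bit-widths and widths.
def proxy_score(bits, widths):
    cost = sum(b * w for b, w in zip(bits, widths))
    return -abs(cost - 2000) / 2000.0  # prefer configurations near a budget

def objective(trial):
    n_layers = 4
    bits = [trial.suggest_int(f"bits_{i}", 2, 8) for i in range(n_layers)]
    widths = [trial.suggest_int(f"width_{i}", 32, 256) for i in range(n_layers)]
    return proxy_score(bits, widths)

# Optuna uses a TPE sampler by default; the paper's cluster-based
# variant would take the place of this sampler.
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```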
- Large-scale global optimization of ultra-high dimensional non-convex landscapes based on generative neural networks [0.0]
We present an algorithm, based on a deep generative network, that manages ultra-high dimensional optimization problems.
We show that our method performs better with fewer function evaluations than state-of-the-art algorithms.
arXiv Detail & Related papers (2023-07-09T00:05:59Z)
- Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z)
- On skip connections and normalisation layers in deep optimisation [32.51139594406463]
We introduce a general theoretical framework for the study of optimisation of deep neural networks.
Our framework determines the curvature and regularity properties of multilayer loss landscapes.
We identify a novel causal mechanism by which skip connections accelerate training.
arXiv Detail & Related papers (2022-10-10T06:22:46Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Reframing Neural Networks: Deep Structure in Overcomplete Representations [41.84502123663809]
We introduce deep frame approximation, a unifying framework for representation learning with structured overcomplete frames.
We quantify structural differences with the deep frame potential, a data-independent measure of coherence linked to representation uniqueness and stability.
This connection to the established theory of overcomplete representations suggests promising new directions for principled deep network architecture design.
arXiv Detail & Related papers (2021-03-10T01:15:14Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives [73.15276998621582]
We propose a generic feature learning mechanism to advance CNN training with enhanced generalization ability.
Partially inspired by DSN, we fork delicately designed side branches from the intermediate layers of a given neural network.
Experiments on both category and instance recognition tasks demonstrate the substantial improvements of our proposed method.
arXiv Detail & Related papers (2020-03-24T09:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.