GANTL: Towards Practical and Real-Time Topology Optimization with
Conditional GANs and Transfer Learning
- URL: http://arxiv.org/abs/2105.03045v1
- Date: Fri, 7 May 2021 03:13:32 GMT
- Authors: Mohammad Mahdi Behzadi, Horea T. Ilies
- Abstract summary: We present a deep learning method based on generative adversarial networks for generative design exploration.
The proposed method combines the generative power of conditional GANs with the knowledge transfer capabilities of transfer learning methods to predict optimal topologies for unseen boundary conditions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning methods have recently been developed to
circumvent the high computational cost of gradient-based topology
optimization. These methods typically require extensive and costly training
datasets, have difficulty generalizing to unseen boundary and loading
conditions and to new domains, and do not take into account topological
constraints on the predictions, which produces structures with inconsistent
topologies. We present a deep learning method based on generative adversarial
networks for generative design exploration. The proposed method combines the
generative power of conditional GANs with the knowledge transfer capabilities
of transfer learning to predict optimal topologies for unseen boundary
conditions. We also show that the knowledge transfer capabilities embedded in
the design of the proposed algorithm significantly reduce the size of the
training dataset compared to traditional deep neural or adversarial networks.
Moreover, we formulate a topological loss function based on the bottleneck
distance obtained from the persistence diagrams of the structures and
demonstrate a significant improvement in the topological connectivity of the
predicted structures. We use numerous examples to explore the efficiency and
accuracy of the proposed approach for both seen and unseen boundary conditions
in 2D.
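The topological loss described in the abstract is built on the bottleneck distance between persistence diagrams: each diagram is a set of (birth, death) pairs, points may be matched to the diagonal birth = death, and the distance is the minimum over matchings of the maximum L-infinity matching cost. The sketch below is a minimal brute-force illustration of that definition, not the paper's implementation; the function names are ours, and a real pipeline would use an optimized library such as GUDHI's `gudhi.bottleneck_distance`, since enumerating matchings is factorial in diagram size and is only practical for tiny diagrams.

```python
from itertools import permutations

def _linf(p, q):
    """L-infinity distance between two diagram points (birth, death)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def _diag(p):
    """Closest point to p on the diagonal birth == death."""
    m = (p[0] + p[1]) / 2.0
    return (m, m)

def bottleneck_distance(d1, d2):
    """Brute-force bottleneck distance between two small persistence
    diagrams, each a list of (birth, death) pairs with birth <= death.
    Points may be matched to the diagonal, so the diagrams can differ
    in size."""
    # Augment each diagram with the diagonal projections of the other's
    # points, producing two equal-sized point sets to match perfectly.
    a = list(d1) + [_diag(p) for p in d2]
    b = list(d2) + [_diag(p) for p in d1]
    on_diag_a = [False] * len(d1) + [True] * len(d2)
    on_diag_b = [False] * len(d2) + [True] * len(d1)
    best = float("inf")
    for perm in permutations(range(len(b))):
        cost = 0.0
        for i, j in enumerate(perm):
            if on_diag_a[i] and on_diag_b[j]:
                continue  # matching two diagonal points is free
            cost = max(cost, _linf(a[i], b[j]))
        best = min(best, cost)
    return best

# A one-hole structure vs. a slightly perturbed one: the optimal matching
# pairs the two off-diagonal points (cost 0.1) rather than pushing both
# to the diagonal (cost 0.55).
print(bottleneck_distance([(0.0, 1.0)], [(0.0, 1.1)]))  # approx. 0.1
```

Because the distance varies smoothly with small perturbations of the diagram points, a loss of this form can penalize predicted structures whose connectivity (e.g. number of holes) disagrees with the ground-truth topology.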
Related papers
- Topological Generalization Bounds for Discrete-Time Stochastic Optimization Algorithms
Deep neural networks (DNNs) show remarkable generalization properties.
The source of these capabilities remains elusive and defies established statistical learning theory.
Recent studies have revealed that properties of training trajectories can be indicative of generalization.
arXiv Detail & Related papers (2024-07-11T17:56:03Z)
- Lattice real-time simulations with learned optimal kernels
We present a simulation strategy for the real-time dynamics of quantum fields inspired by reinforcement learning.
It builds on the complex Langevin approach, which it amends with system-specific prior information.
arXiv Detail & Related papers (2023-10-12T06:01:01Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- A stable deep adversarial learning approach for geological facies generation
Deep generative learning is a promising approach to overcome the limitations of traditional geostatistical simulation models.
This research investigates the application of generative adversarial networks and deep variational inference to the conditional simulation of meandering channels in underground volumes.
arXiv Detail & Related papers (2023-05-12T14:21:14Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed as a structural prior to reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- Analytically Tractable Inference in Deep Neural Networks
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully connected neural networks.
We demonstrate how TAGI matches or exceeds the performance of backpropagation when training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z)
- Real-Time Topology Optimization in 3D via Deep Transfer Learning
We introduce a transfer learning method based on a convolutional neural network.
We show it can handle high-resolution 3D design domains of various shapes and topologies.
Our experiments achieved an average binary accuracy of around 95% at real-time prediction rates.
arXiv Detail & Related papers (2021-02-11T21:09:58Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data
An AI-assisted design method based on topology optimization is presented that obtains optimized designs directly.
Designs are produced by an artificial neural network, the predictor, from boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- Learning Connectivity of Neural Networks from a Topological Perspective
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.