Active Learning over DNN: Automated Engineering Design Optimization for
Fluid Dynamics Based on Self-Simulated Dataset
- URL: http://arxiv.org/abs/2001.08075v2
- Date: Thu, 23 Jan 2020 03:16:45 GMT
- Title: Active Learning over DNN: Automated Engineering Design Optimization for
Fluid Dynamics Based on Self-Simulated Dataset
- Authors: Yang Chen
- Abstract summary: This research applies a test-proven deep learning architecture to predict fluid-dynamic performance under various restrictions.
The major challenge is the vast number of data points a Deep Neural Network (DNN) demands, which are prohibitively expensive to simulate.
The final stage, a user interface, makes the model capable of optimizing under user-supplied minimum area and viscosity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimizing fluid-dynamic performance is an important engineering task.
Traditionally, experts design shapes based on empirical estimations and verify
them through expensive experiments. This costly process, both in terms of time
and space, may only explore a limited number of shapes and lead to sub-optimal
designs. In this research, a test-proven deep learning architecture is applied
to predict the performance under various restrictions and search for better
shapes by optimizing the learned prediction function. The major challenge is
the vast number of data points a Deep Neural Network (DNN) demands, which are
prohibitively expensive to simulate. To remedy this drawback, a Frequentist
active learning scheme is used to explore regions of the output space that the
DNN predicts to be promising. This step reduces the number of simulated data
samples from roughly 8000 to 625.
The final stage, a user interface, makes the model capable of optimizing under
user-supplied minimum area and viscosity. Flood fill is used to define a
boundary-area function so that the optimal shape does not bypass the minimum
area. Stochastic Gradient Langevin Dynamics (SGLD) is employed to ensure the
ultimate shape is optimized while respecting the required area. Jointly, shapes
with extremely low drag are found and explored through a practical user
interface, with no human domain knowledge and modest computational overhead.
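The sample-efficiency claim above (roughly 8000 simulations cut down to 625) rests on querying only the shapes the surrogate DNN predicts to be promising. A minimal sketch of such a loop follows; `simulate` and `DNNRegressor` are hypothetical stand-ins for the paper's CFD solver and network, not its actual implementation, and the pool sizes are illustrative.

```python
import numpy as np

def simulate(shapes):
    """Hypothetical stand-in for the expensive CFD simulation (drag-like score)."""
    return np.sum(shapes ** 2, axis=1)

class DNNRegressor:
    """Toy surrogate standing in for the paper's DNN (nearest-neighbor lookup)."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self

    def predict(self, X):
        dists = np.linalg.norm(X[:, None, :] - self.X[None, :, :], axis=2)
        return self.y[np.argmin(dists, axis=1)]

def active_learning_loop(pool, n_init=25, batch=25, rounds=4, seed=0):
    """Label a few shapes, then repeatedly simulate only the batch the
    surrogate predicts to have the lowest drag."""
    rng = np.random.default_rng(seed)
    labeled = np.zeros(len(pool), dtype=bool)
    labeled[rng.choice(len(pool), n_init, replace=False)] = True
    y = np.full(len(pool), np.nan)
    y[labeled] = simulate(pool[labeled])
    for _ in range(rounds):
        model = DNNRegressor().fit(pool[labeled], y[labeled])
        preds = model.predict(pool)
        preds[labeled] = np.inf  # never re-simulate an already-labeled shape
        query = np.argsort(preds)[:batch]
        labeled[query] = True
        y[query] = simulate(pool[query])
    return pool[labeled], y[labeled]

pool = np.random.default_rng(1).uniform(-1.0, 1.0, size=(2000, 4))
X, y = active_learning_loop(pool)
print(len(X))  # 125 simulations instead of labeling all 2000 candidates
```

Only the handful of points the surrogate ranks best ever reach the expensive simulator, which is the mechanism behind the reduction the abstract reports.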
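The constrained search in the final stage can likewise be sketched: flood fill measures the area a candidate shape encloses, and an SGLD update minimizes a drag surrogate penalized when that area falls below the user's minimum. The ellipse parameterization, toy drag term, penalty weight, and step sizes below are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def flood_fill_area(mask):
    """Area function via flood fill: fill the empty cells reachable from the
    grid border; every cell the fill cannot reach counts as enclosed area."""
    h, w = mask.shape
    outside = np.zeros((h, w), dtype=bool)
    stack = [(i, j) for i in range(h) for j in (0, w - 1)]
    stack += [(i, j) for i in (0, h - 1) for j in range(w)]
    while stack:
        i, j = stack.pop()
        if 0 <= i < h and 0 <= j < w and not outside[i, j] and not mask[i, j]:
            outside[i, j] = True
            stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return int((~outside).sum())

def rasterize_ellipse(theta, size=32):
    """Toy shape parameterization: theta = (semi-axis a, semi-axis b) in cells."""
    a, b = np.abs(theta)
    yy, xx = np.mgrid[:size, :size] - size / 2
    return (xx / max(a, 1e-3)) ** 2 + (yy / max(b, 1e-3)) ** 2 <= 1.0

def loss(theta, min_area):
    drag = theta[0] ** 2 + 0.5 * theta[1] ** 2           # toy drag surrogate
    area = flood_fill_area(rasterize_ellipse(theta))
    return drag + 10.0 * max(0.0, min_area - area)       # minimum-area penalty

def sgld(theta, min_area, step=1e-3, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        # Finite-difference gradient of the penalized loss; the area term is
        # piecewise constant on the grid, so in this toy the gradient comes
        # mostly from the drag surrogate.
        grad = np.array([(loss(theta + e, min_area) - loss(theta - e, min_area))
                         / (2 * 1e-2) for e in np.eye(2) * 1e-2])
        # SGLD update: half-step down the gradient plus Gaussian noise whose
        # variance matches the step size, letting the search escape local minima.
        theta = theta - 0.5 * step * grad + rng.normal(0.0, np.sqrt(step), 2)
    return theta

theta = sgld(np.array([8.0, 8.0]), min_area=50)
print(flood_fill_area(rasterize_ellipse(theta)))
```

The injected noise is what distinguishes SGLD from plain gradient descent; it is what lets the optimizer keep exploring while the penalty keeps the shape from shrinking below the required area.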
Related papers
- Event-based Shape from Polarization with Spiking Neural Networks [5.200503222390179]
We introduce the Single-Timestep and Multi-Timestep Spiking UNets for effective and efficient surface normal estimation.
Our work contributes to the advancement of SNNs in event-based sensing.
arXiv Detail & Related papers (2023-12-26T14:43:26Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- INFINITY: Neural Field Modeling for Reynolds-Averaged Navier-Stokes Equations [13.242926257057084]
INFINITY is a deep learning model that encodes geometric information and physical fields into compact representations.
Our framework achieves state-of-the-art performance by accurately inferring physical fields throughout the volume and surface.
Our model can correctly predict drag and lift coefficients while adhering to the equations.
arXiv Detail & Related papers (2023-07-25T14:35:55Z)
- Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware reaches the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
arXiv Detail & Related papers (2022-11-22T15:08:05Z)
- HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces [54.77683371400133]
We propose a novel normal estimation method called HSurf-Net, which can accurately predict normals from point clouds with noise and density variations.
Experimental results show that our HSurf-Net achieves the state-of-the-art performance on the synthetic shape dataset.
arXiv Detail & Related papers (2022-10-13T16:39:53Z)
- Using Gradient to Boost the Generalization Performance of Deep Learning Models for Fluid Dynamics [0.0]
We present a novel work to increase the generalization capabilities of Deep Learning.
Our strategy has shown good results towards a better generalization of DL networks.
arXiv Detail & Related papers (2022-10-09T10:20:09Z)
- Data-informed Deep Optimization [3.331457049134526]
We propose a data-informed deep optimization (DiDo) approach to solve high-dimensional design problems.
We use a deep neural network (DNN) to learn the feasible region and to sample feasible points for fitting the objective function.
Our results indicate that the DiDo approach empowered by DNN is flexible and promising for solving general high-dimensional design problems in practice.
arXiv Detail & Related papers (2021-07-17T02:53:54Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Learning to Continuously Optimize Wireless Resource In Episodically Dynamic Environment [55.91291559442884]
This work develops a methodology that enables data-driven methods to continuously learn and optimize in a dynamic environment.
We propose to build the notion of continual learning into the modeling process of learning wireless systems.
Our design is based on a novel min-max formulation which ensures certain "fairness" across different data samples.
arXiv Detail & Related papers (2020-11-16T08:24:34Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using these methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.