Using Deep Learning to Design High Aspect Ratio Fusion Devices
- URL: http://arxiv.org/abs/2409.00564v2
- Date: Wed, 6 Nov 2024 13:09:44 GMT
- Title: Using Deep Learning to Design High Aspect Ratio Fusion Devices
- Authors: P. Curvo, D. R. Ferreira, R. Jorge
- Abstract summary: We train a machine learning model to construct configurations with favorable confinement properties.
It is shown that optimized configurations can be generated reliably using this method.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The design of fusion devices is typically based on computationally expensive simulations. This can be alleviated using high aspect ratio models that employ a reduced number of free parameters, especially in the case of stellarator optimization where non-axisymmetric magnetic fields with a large parameter space are optimized to satisfy certain performance criteria. However, optimization is still required to find configurations with properties such as low elongation, high rotational transform, finite plasma beta, and good fast particle confinement. In this work, we train a machine learning model to construct configurations with favorable confinement properties by finding a solution to the inverse design problem, that is, obtaining a set of model input parameters for given desired properties. Since the solution of the inverse problem is non-unique, a probabilistic approach, based on mixture density networks, is used. It is shown that optimized configurations can be generated reliably using this method.
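The abstract describes an inverse-design setup in which a mixture density network maps desired confinement properties to a distribution over model input parameters, since the inverse problem is non-unique. The snippet below is a minimal sketch of such a mixture density network, assuming a PyTorch implementation; it is not the authors' code, and names such as MixtureDensityNetwork, n_properties, and n_params are illustrative.

```python
import torch
import torch.nn as nn

class MixtureDensityNetwork(nn.Module):
    """Maps desired device properties to a Gaussian mixture over model parameters."""
    def __init__(self, n_properties, n_params, n_components=8, hidden=128):
        super().__init__()
        self.n_params = n_params
        self.n_components = n_components
        self.backbone = nn.Sequential(
            nn.Linear(n_properties, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.logits = nn.Linear(hidden, n_components)               # mixture weights
        self.means = nn.Linear(hidden, n_components * n_params)     # component means
        self.log_stds = nn.Linear(hidden, n_components * n_params)  # component spreads

    def forward(self, properties):
        h = self.backbone(properties)
        logits = self.logits(h)
        means = self.means(h).view(-1, self.n_components, self.n_params)
        stds = self.log_stds(h).view(-1, self.n_components, self.n_params).exp()
        return logits, means, stds

def mdn_nll(logits, means, stds, targets):
    """Negative log-likelihood of target parameters under the predicted mixture."""
    components = torch.distributions.Independent(
        torch.distributions.Normal(means, stds), 1)
    mixture = torch.distributions.MixtureSameFamily(
        torch.distributions.Categorical(logits=logits), components)
    return -mixture.log_prob(targets).mean()

# Usage idea: train by minimising mdn_nll on pairs of (desired properties,
# model parameters); at inference time, sample candidate parameter sets from
# the mixture conditioned on the target properties.
```

Because the inverse problem is non-unique, sampling from the predicted mixture rather than regressing a single point estimate allows several candidate parameter sets to be proposed for the same target properties.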
Related papers
- AUGUR, A flexible and efficient optimization algorithm for identification of optimal adsorption sites [0.4188114563181615]
Our model combines graph neural networks and Gaussian processes to create a flexible, efficient, symmetry-aware, translation- and rotation-invariant predictor.
It determines the optimal position of large and complicated clusters with far fewer iterations than current state-of-the-art approaches.
It does not rely on hand-crafted features and can be seamlessly employed on any molecule without any alterations.
arXiv Detail & Related papers (2024-09-24T16:03:01Z) - Parameter Generation of Quantum Approximate Optimization Algorithm with Diffusion Model [3.6959187484738902]
Quantum computing presents a prospect for revolutionizing the field of probabilistic optimization.
The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm.
We show that the diffusion model is capable of learning the distribution of high-performing parameters and then synthesizing new parameters closer to optimal ones.
arXiv Detail & Related papers (2024-07-17T01:18:27Z) - Photonic Structures Optimization Using Highly Data-Efficient Deep Learning: Application To Nanofin And Annular Groove Phase Masks [40.11095094521714]
Metasurfaces offer a flexible framework for the manipulation of light properties in the realm of thin film optics.
This study aims to introduce a surrogate optimization framework for these devices.
The framework is applied to develop two kinds of vortex phase masks (VPMs) tailored for application in astronomical high-contrast imaging.
arXiv Detail & Related papers (2023-09-05T07:19:14Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss between the real data and the artificial noise (a sketch of this logistic objective appears after the list below).
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Optimization of chemical mixers design via tensor trains and quantum computing [0.0]
We demonstrate a novel optimization method, tensor train optimization (TetraOpt), for the shape optimization of components, focusing on a Y-shaped fluid mixer.
Due to its high parallelization and more extensive global search, TetraOpt outperforms commonly used Bayesian optimization techniques in accuracy and runtime.
We discuss the extension of this approach to quantum computing, which potentially yields a more efficient approach.
arXiv Detail & Related papers (2023-04-24T17:56:56Z) - Symmetric Tensor Networks for Generative Modeling and Constrained Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z) - Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Optimizing High-Dimensional Physics Simulations via Composite Bayesian Optimization [1.433758865948252]
Physical simulation-based optimization is a common task in science and engineering.
We develop a Bayesian optimization method leveraging tensor-based Gaussian process surrogates and trust region Bayesian optimization to effectively model the image outputs.
arXiv Detail & Related papers (2021-11-29T19:29:35Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires first learning a predictive model for those parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that lead to better decisions on unobserved inputs.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
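As referenced in the entry on Learning Unnormalized Statistical Models via Compositional Optimization above, the sketch below spells out the standard noise-contrastive estimation objective as a logistic loss between data and noise samples, assuming one noise sample per data sample. It illustrates the baseline NCE objective only, not that paper's compositional-optimization approach; nce_loss and its arguments are placeholder names.

```python
import torch
import torch.nn.functional as F

def nce_loss(log_f_data, log_f_noise, log_pn_data, log_pn_noise):
    """Noise-contrastive estimation as a binary classification loss.

    log_f_*  : log of the unnormalized model density f_theta at data / noise samples
    log_pn_* : log of the known noise density p_n at the same samples
    """
    # Classifier logit: log f_theta(u) - log p_n(u); data samples are labelled
    # "real" (1) and artificial noise samples "fake" (0).
    logit_data = log_f_data - log_pn_data
    logit_noise = log_f_noise - log_pn_noise
    loss_data = F.binary_cross_entropy_with_logits(
        logit_data, torch.ones_like(logit_data))
    loss_noise = F.binary_cross_entropy_with_logits(
        logit_noise, torch.zeros_like(logit_noise))
    return loss_data + loss_noise
```

The logit log f_theta(u) - log p_n(u) is what ties the logistic loss to the unnormalized model: the classifier can only be optimal when the model density matches the data density, and no normalizing constant is ever computed.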
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.