Title: {\Pi}-ML: A dimensional analysis-based machine learning parameterization
of optical turbulence in the atmospheric surface layer
Authors: Maximilian Pierzyna and Rudolf Saathof and Sukanta Basu
Abstract summary: Turbulent fluctuations of the atmospheric refraction index, so-called optical turbulence, can significantly distort propagating laser beams.
We propose a physics-informed machine learning (ML) methodology, $\Pi$-ML, based on dimensional analysis and gradient boosting to estimate $C_n^2$.
Abstract: Turbulent fluctuations of the atmospheric refraction index, so-called optical
turbulence, can significantly distort propagating laser beams. Therefore,
modeling the strength of these fluctuations ($C_n^2$) is highly relevant for
the successful development and deployment of future free-space optical
communication links. In this letter, we propose a physics-informed machine
learning (ML) methodology, $\Pi$-ML, based on dimensional analysis and gradient
boosting to estimate $C_n^2$. Through a systematic feature importance analysis,
we identify the normalized variance of potential temperature as the dominating
feature for predicting $C_n^2$. For statistical robustness, we train an
ensemble of models, which yields high performance on out-of-sample data with
$R^2 = 0.958 \pm 0.001$.
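As a rough illustration of the approach described in the abstract, the sketch below trains an ensemble of gradient-boosted regressors on dimensionless input features and averages their out-of-sample predictions. The synthetic features and targets are placeholders standing in for the paper's $\Pi$ groups (e.g., the normalized variance of potential temperature) and $\log_{10} C_n^2$; none of the variable names or values come from the paper.

```python
# Minimal sketch of the Pi-ML idea: regress a log-Cn^2-like target on
# dimensionless (Pi) features with an ensemble of gradient-boosted trees.
# The features and data below are synthetic placeholders, not the paper's setup.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 5000
# Placeholder dimensionless features, e.g. a normalized temperature-variance
# term and a stability-like parameter.
X = rng.normal(size=(n, 2))
y = 0.8 * X[:, 0] + 0.2 * np.tanh(X[:, 1]) + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Ensemble over random seeds for statistical robustness; average predictions.
models = [HistGradientBoostingRegressor(random_state=s).fit(X_tr, y_tr)
          for s in range(10)]
y_hat = np.mean([m.predict(X_te) for m in models], axis=0)
print("out-of-sample R^2:", r2_score(y_te, y_hat))
```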
Related papers
Turbulence Strength $C_n^2$ Estimation from Video using Physics-based Deep Learning: Images captured from a long distance suffer from dynamic image distortion due to turbulent flow of air cells with random temperatures.
This phenomenon, known as image dancing, is commonly characterized by its refractive-index structure constant $C_n^2$ as a measure of the turbulence strength.
We present a comparative analysis of classical image gradient methods for $C_n^2$ estimation and modern deep learning-based methods leveraging convolutional neural networks. (arXiv, 2024-08-29)
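For context only, the toy sketch below computes a simple image-gradient statistic over a frame stack, in the spirit of the classical gradient-based methods mentioned above; it is an uncalibrated relative proxy for turbulence strength, not the paper's $C_n^2$ estimator, and the random frames are placeholders.

```python
# Toy sketch of a gradient-based sharpness metric over a video sequence.
# Higher turbulence tends to blur/distort frames; this is a relative proxy,
# not a calibrated Cn^2 estimate.
import numpy as np

def gradient_metric(frames: np.ndarray) -> float:
    """frames: (T, H, W) grayscale stack; returns mean gradient magnitude."""
    gy, gx = np.gradient(frames.astype(float), axis=(1, 2))
    return float(np.sqrt(gx**2 + gy**2).mean())

# Placeholder input: random frames standing in for a long-range video.
frames = np.random.rand(16, 128, 128)
print("mean gradient magnitude:", gradient_metric(frames))
```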
Adaptive Federated Learning Over the Air: We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T)/T^{1-\frac{1}{\alpha}})$. (arXiv, 2024-03-11)
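A minimal sketch of the federated AdaGrad idea, with the over-the-air aggregation idealized as noise-free averaging of client gradients at the server; the least-squares problem and all constants are illustrative placeholders, not the paper's setup or analysis.

```python
# Minimal sketch of federated AdaGrad: clients compute gradients, the server
# averages them (an idealized, noise-free stand-in for over-the-air
# aggregation) and applies an AdaGrad update. Problem setup is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients = 5, 8
A = [rng.normal(size=(20, d)) for _ in range(n_clients)]
b = [a @ np.ones(d) + 0.1 * rng.normal(size=20) for a in A]

w = np.zeros(d)
G = np.zeros(d)                              # accumulated squared gradients
eta, eps = 0.5, 1e-8
for t in range(200):
    grads = [2 * a.T @ (a @ w - y) / len(y) for a, y in zip(A, b)]
    g = np.mean(grads, axis=0)               # server-side aggregation
    G += g**2
    w -= eta * g / (np.sqrt(G) + eps)        # AdaGrad step
print("recovered weights:", np.round(w, 2))
```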
Machine learning enabled experimental design and parameter estimation
for ultrafast spin dynamics: We introduce a methodology that combines machine learning with Bayesian optimal experimental design (BOED).
Our method employs a neural network model for large-scale spin dynamics simulations for precise distribution and utility calculations in BOED.
Our numerical benchmarks demonstrate the superior performance of our method in guiding XPFS experiments, predicting model parameters, and yielding more informative measurements within limited experimental time. (arXiv, 2023-06-03)
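The sketch below illustrates the generic BOED ingredient of such a pipeline: ranking candidate designs by a nested Monte Carlo estimate of expected information gain, with a cheap analytic function standing in for the neural-network surrogate of the spin-dynamics simulator. The Gaussian noise model, prior, and design grid are assumptions for illustration only.

```python
# Hedged sketch of Bayesian optimal experimental design with a cheap
# surrogate forward model: score candidate designs by a nested Monte Carlo
# estimate of expected information gain (EIG).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma = 0.1                                   # assumed observation noise

def surrogate(theta, d):
    # placeholder surrogate; in practice a trained neural network
    return np.sin(theta * d)

def expected_information_gain(d, n_outer=200, n_inner=200):
    theta_out = rng.normal(size=n_outer)                     # prior draws
    y = surrogate(theta_out, d) + sigma * rng.normal(size=n_outer)
    log_lik = norm.logpdf(y, surrogate(theta_out, d), sigma)
    theta_in = rng.normal(size=n_inner)
    # log marginal p(y|d) approximated by Monte Carlo over the prior
    log_marg = np.log(np.mean(
        norm.pdf(y[:, None], surrogate(theta_in[None, :], d), sigma), axis=1))
    return float(np.mean(log_lik - log_marg))

designs = np.linspace(0.1, 3.0, 10)
eig = [expected_information_gain(d) for d in designs]
print("best design:", designs[int(np.argmax(eig))])
```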
Capturing dynamical correlations using implicit neural representations: We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data. (arXiv, 2023-04-08)
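As a generic illustration of differentiating through a forward model, the sketch below recovers two parameters of a toy signal by gradient descent with JAX automatic differentiation; the paper instead differentiates through a neural network trained to mimic a model Hamiltonian, which is not reproduced here.

```python
# Minimal sketch: recover model parameters by differentiating a forward
# model with autodiff and descending the mismatch to "experimental" data.
import jax
import jax.numpy as jnp

def forward(params, x):
    a, b = params
    return a * jnp.sin(x) + b * jnp.cos(x)    # placeholder differentiable model

x = jnp.linspace(0.0, 2 * jnp.pi, 200)
data = 1.5 * jnp.sin(x) + 0.5 * jnp.cos(x)    # toy noise-free "measurement"

def loss(params):
    return jnp.mean((forward(params, x) - data) ** 2)

grad = jax.jit(jax.grad(loss))
params = jnp.array([0.0, 0.0])
for _ in range(200):
    params = params - 0.5 * grad(params)      # plain gradient descent
print("recovered parameters:", params)
```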
Neural Inference of Gaussian Processes for Time Series Data of Quasars: We introduce a new model, CDRW, that enables a complete description of quasar spectra.
We also introduce a new method of inference of Gaussian process parameters, which we call $\textit{Neural Inference}$.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE. (arXiv, 2022-11-17)
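For orientation, the damped-random-walk (DRW) baseline mentioned above corresponds to a Gaussian process with an exponential (Ornstein-Uhlenbeck, i.e. Matérn-1/2) kernel. The sketch below fits such a GP to a toy irregular light curve by maximum likelihood with scikit-learn; it illustrates the classical MLE baseline, not the paper's CDRW model or its amortized Neural Inference.

```python
# Sketch of the classical DRW-style baseline: an Ornstein-Uhlenbeck Gaussian
# process fitted to an irregularly sampled light curve by maximum likelihood.
# Matern(nu=0.5) is the exponential/OU kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 80))[:, None]             # irregular epochs
flux = np.sin(t[:, 0] / 10) + 0.1 * rng.normal(size=80)   # toy light curve

kernel = ConstantKernel(1.0) * Matern(length_scale=10.0, nu=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.01, normalize_y=True)
gp.fit(t, flux)                               # MLE of the kernel hyperparameters
print("fitted kernel:", gp.kernel_)
```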
Supernova Light Curves Approximation based on Neural Network Models: Photometric data-driven classification of supernovae is becoming a challenge due to the need for real-time processing of big data in astronomy.
Recent studies have demonstrated the superior quality of solutions based on various machine learning models.
We study the application of multilayer perceptron (MLP), Bayesian neural network (BNN), and normalizing flows (NF) to approximate observations for a single light curve. (arXiv, 2022-06-27)
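A minimal sketch of the MLP variant of this idea: fit a small multilayer perceptron to the (time, flux) pairs of a single toy light curve and evaluate it on a dense time grid. The Gaussian-shaped curve is a placeholder, and the BNN and normalizing-flow models compared in the paper are not shown.

```python
# Sketch: approximate a single (toy) light curve with an MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 60))                       # observation epochs
flux = np.exp(-((t - 40) / 15) ** 2) + 0.05 * rng.normal(size=60)  # toy curve

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
mlp.fit(t[:, None], flux)
t_dense = np.linspace(0, 100, 500)[:, None]
approx = mlp.predict(t_dense)                 # dense approximation of the curve
print("approximation range:", approx.min(), approx.max())
```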
Single Trajectory Nonparametric Learning of Nonlinear Dynamics: Given a single trajectory of a dynamical system, we analyze the performance of the nonparametric least squares estimator (LSE).
We leverage recently developed information-theoretic methods to establish the optimality of the LSE for nonparametric hypotheses classes.
We specialize our results to a number of scenarios of practical interest, such as Lipschitz dynamics, generalized linear models, and dynamics described by functions in certain classes of Reproducing Kernel Hilbert Spaces (RKHS). (arXiv, 2022-02-16)
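The sketch below shows a simple nonparametric least-squares fit of a one-dimensional dynamics map from a single simulated trajectory, using kernel ridge regression as a convenient RKHS hypothesis class; it is a stand-in for the estimator analyzed in the paper, not its theory, and the chaotic map and noise level are toy assumptions.

```python
# Sketch: estimate the dynamics map x_{t+1} = f(x_t) from one trajectory
# with a nonparametric least-squares fit in an RKHS (kernel ridge regression).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
T = 500
x = np.empty(T)
x[0] = 0.3
for t in range(T - 1):
    x[t + 1] = 3.8 * x[t] * (1 - x[t])        # chaotic logistic-map dynamics

X = x[:-1, None]                              # current state
Y = x[1:] + 0.01 * rng.normal(size=T - 1)     # noisy observation of next state
lse = KernelRidge(kernel="rbf", alpha=1e-3, gamma=10.0).fit(X, Y)
print("f(0.5) estimate:", lse.predict([[0.5]])[0], "truth:", 3.8 * 0.5 * 0.5)
```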
Inverting brain grey matter models with likelihood-free inference: a
tool for trustable cytoarchitecture measurements: Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model. (arXiv, 2021-11-15)
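As a hedged illustration of likelihood-free inference, the sketch below runs plain rejection ABC against a toy exponential-decay forward model: simulate from the prior and keep the parameters whose simulated signals are closest to the observed one. The forward model and prior are placeholders, not the paper's grey-matter system of equations or its LFI machinery.

```python
# Sketch of likelihood-free inference via rejection ABC with a toy simulator.
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    # placeholder forward model: exponential signal decay plus noise
    b = np.linspace(0, 3, n)
    return np.exp(-theta * b) + 0.02 * rng.normal(size=n)

observed = simulate(1.3)                      # pretend this is measured data

prior = rng.uniform(0.1, 3.0, size=20000)     # candidate parameters from prior
dists = np.array([np.linalg.norm(simulate(th) - observed) for th in prior])
posterior = prior[dists < np.quantile(dists, 0.01)]   # accept the closest 1%
print("posterior mean:", posterior.mean(), "+/-", posterior.std())
```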
Support estimation in high-dimensional heteroscedastic mean regression: We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm. (arXiv, 2020-11-03)
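The sketch below fits a linear model with a smooth, strictly convex Huber-type (pseudo-Huber) loss under heavy-tailed noise, which is the flavor of robust loss described above; the paper's high-dimensional penalization, tuning, and support-estimation guarantees are not reproduced.

```python
# Sketch: robust linear regression with a smooth Huber-type (pseudo-Huber)
# loss, which downweights heavy-tailed errors relative to plain least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta_true + rng.standard_t(df=2, size=n)   # heavy-tailed errors

def pseudo_huber_objective(beta, delta=1.0):
    r = y - X @ beta
    return np.sum(delta**2 * (np.sqrt(1 + (r / delta) ** 2) - 1))

beta_hat = minimize(pseudo_huber_objective, np.zeros(p)).x
print("estimated coefficients:", np.round(beta_hat, 2))
```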
Differentiable Programming for Hyperspectral Unmixing using a
Physics-based Dispersion Model: In this paper, spectral variation is considered from a physics-based approach and incorporated into an end-to-end spectral unmixing algorithm.
A technique for inverse rendering using a convolutional neural network is introduced to enhance performance and speed when training data is available.
Results achieve state-of-the-art on both infrared and visible-to-near-infrared (VNIR) datasets. (arXiv, 2020-07-12)
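For reference, the basic linear-unmixing step underlying such pipelines can be sketched as a nonnegative least-squares fit of known endmember spectra to a pixel spectrum; the random spectra below are placeholders, and the paper's physics-based dispersion model and end-to-end differentiable network are not shown.

```python
# Sketch of the basic linear-unmixing step: recover nonnegative abundances
# of known endmember spectra for one pixel via nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 100, 3
E = np.abs(rng.normal(size=(n_bands, n_endmembers)))  # endmember spectra (columns)
abund_true = np.array([0.6, 0.3, 0.1])
pixel = E @ abund_true + 0.01 * rng.normal(size=n_bands)

abund_hat, _ = nnls(E, pixel)                 # nonnegative abundance estimate
abund_hat /= abund_hat.sum()                  # optional sum-to-one normalization
print("estimated abundances:", np.round(abund_hat, 2))
```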
This list is automatically generated from the titles and abstracts of the papers in this site.