Turbulence Strength $C_n^2$ Estimation from Video using Physics-based Deep Learning
- URL: http://arxiv.org/abs/2408.16623v1
- Date: Thu, 29 Aug 2024 15:31:51 GMT
- Title: Turbulence Strength $C_n^2$ Estimation from Video using Physics-based Deep Learning
- Authors: Ripon Kumar Saha, Esen Salcin, Jihoo Kim, Joseph Smith, Suren Jayasuriya
- Abstract summary: Images captured from a long distance suffer from dynamic image distortion due to turbulent flow of air cells with random temperatures.
This phenomenon, known as image dancing, is commonly characterized by the refractive-index structure constant $C_n^2$ as a measure of the turbulence strength.
We present a comparative analysis of classical image gradient methods for $C_n2$ estimation and modern deep learning-based methods leveraging convolutional neural networks.
- Score: 2.898558044216394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Images captured from a long distance suffer from dynamic image distortion due to turbulent flow of air cells with random temperatures, and thus refractive indices. This phenomenon, known as image dancing, is commonly characterized by the refractive-index structure constant $C_n^2$ as a measure of the turbulence strength. For many applications, such as atmospheric forecast models, long-range/astronomy imaging, aviation safety, and optical communication technology, $C_n^2$ estimation is critical for accurately sensing the turbulent environment. Previous methods for $C_n^2$ estimation include estimation from meteorological data (temperature, relative humidity, wind shear, etc.) for single-point measurements, two-ended pathlength measurements from an optical scintillometer for path-averaged $C_n^2$, and, more recently, estimation of $C_n^2$ from passive video cameras for low cost and reduced hardware complexity. In this paper, we present a comparative analysis of classical image-gradient methods for $C_n^2$ estimation and modern deep learning-based methods leveraging convolutional neural networks. To enable this, we collect a dataset of video captures along with reference scintillometer measurements for ground truth, and we release this unique dataset to the scientific community. We observe that deep learning methods can achieve higher accuracy when trained on similar data, but suffer from generalization errors on other, unseen imagery compared to classical methods. To overcome this trade-off, we present a novel physics-based network architecture that combines learned convolutional layers with a differentiable image-gradient method, maintaining high accuracy while generalizing across image datasets.
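As a concrete illustration of the last sentence of the abstract, here is a minimal PyTorch sketch (not the authors' released code) of a physics-based network in this spirit: learned convolutional layers feed a differentiable image-gradient statistic, and a single learned calibration maps that statistic to $\log_{10} C_n^2$. The layer sizes, grayscale input, and log-scale calibration are illustrative assumptions.

```python
# Minimal sketch of a physics-based C_n^2 network: learned convolutional
# features feed a differentiable image-gradient head. Hyperparameters and
# the calibration scheme are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class GradientCn2Head(nn.Module):
    """Differentiable image-gradient statistic used as a proxy for turbulence strength."""
    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Finite-difference gradients over feature maps of shape (B, C, H, W).
        gx = feats[..., :, 1:] - feats[..., :, :-1]   # horizontal differences
        gy = feats[..., 1:, :] - feats[..., :-1, :]   # vertical differences
        # Mean squared gradient per clip; larger values track stronger "image dancing".
        return gx.pow(2).mean(dim=(-3, -2, -1)) + gy.pow(2).mean(dim=(-3, -2, -1))

class PhysicsCn2Net(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(  # learned front end (sizes are illustrative)
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.grad_head = GradientCn2Head()
        self.log_scale = nn.Parameter(torch.zeros(1))  # learned calibration constant

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, 1, H, W) grayscale frames (or frame differences from a clip).
        stat = self.grad_head(self.features(frames))
        # Regress log10(C_n^2): typical values span roughly 1e-16 to 1e-13 m^(-2/3),
        # so the log keeps the target well scaled.
        return self.log_scale + torch.log10(stat + 1e-12)

# Usage: net = PhysicsCn2Net(); log_cn2 = net(torch.rand(4, 1, 64, 64))
```

Because the gradient statistic is differentiable, the convolutional front end can be trained end-to-end against scintillometer ground truth, while the physics-style head constrains what the network can express.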
Related papers
- OCAI: Improving Optical Flow Estimation by Occlusion and Consistency Aware Interpolation [55.676358801492114]
We propose OCAI, a method that supports robust frame interpolation by generating intermediate video frames alongside optical flows in between.
Our evaluations demonstrate superior quality and enhanced optical flow accuracy on established benchmarks such as Sintel and KITTI.
arXiv Detail & Related papers (2024-03-26T20:23:48Z)
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Scalable Bayesian uncertainty quantification with data-driven priors for radio interferometric imaging [5.678038945350452]
Next-generation radio interferometers have the potential to unlock scientific discoveries thanks to their unprecedented angular resolution and sensitivity.
One key to unlocking their potential resides in handling the deluge and complexity of incoming data.
This work proposes a method coined QuantifAI to address uncertainty quantification in radio-interferometric imaging.
arXiv Detail & Related papers (2023-11-30T19:00:02Z)
- RFTrans: Leveraging Refractive Flow of Transparent Objects for Surface Normal Estimation and Manipulation [50.10282876199739]
This paper introduces RFTrans, an RGB-D-based method for surface normal estimation and manipulation of transparent objects.
It integrates the RFNet, which predicts refractive flow, object mask, and boundaries, followed by the F2Net, which estimates surface normal from the refractive flow.
A real-world robot grasping task witnesses an 83% success rate, proving that refractive flow can help enable direct sim-to-real transfer.
arXiv Detail & Related papers (2023-11-21T07:19:47Z)
- High-precision interpolation of stellar atmospheres with a deep neural network using a 1D convolutional auto encoder for feature extraction [0.0]
We establish a reliable, precise, lightweight, and fast method for recovering stellar model atmospheres.
We employ a fully connected deep neural network which in turn uses a 1D convolutional auto-encoder to extract the nonlinearities of a grid.
We show a higher precision with a convolutional auto-encoder than using principal component analysis as a feature extractor.
arXiv Detail & Related papers (2023-06-12T08:16:26Z)
- $\Pi$-ML: A dimensional analysis-based machine learning parameterization of optical turbulence in the atmospheric surface layer [0.0]
Turbulent fluctuations of the atmospheric refraction index, so-called optical turbulence, can significantly distort propagating laser beams.
We propose a physics-informed machine learning (ML) methodology, $\Pi$-ML, based on dimensional analysis and gradient boosting to estimate $C_n^2$ (a minimal sketch of this recipe appears after this list).
arXiv Detail & Related papers (2023-04-24T15:38:22Z)
- Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- MeerCRAB: MeerLICHT Classification of Real and Bogus Transients using Deep Learning [0.0]
We present a deep learning pipeline based on the convolutional neural network architecture called MeerCRAB.
It is designed to filter out so-called 'bogus' detections from true astrophysical sources in the transient detection pipeline of the MeerLICHT telescope.
arXiv Detail & Related papers (2021-04-28T18:12:51Z)
- End-to-end Learning for Inter-Vehicle Distance and Relative Velocity Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
- Extracting dispersion curves from ambient noise correlations using deep learning [1.0237120900821557]
We present a machine-learning approach to classifying the phases of surface wave dispersion curves.
Standard FTAN analysis of surface waves observed on an array of receivers is converted to an image.
We use a convolutional neural network (U-net) architecture with a supervised learning objective and incorporate transfer learning.
arXiv Detail & Related papers (2020-02-05T23:41:12Z)
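The $\Pi$-ML entry above is concrete enough to illustrate. Below is a minimal sketch of that recipe, using scikit-learn's GradientBoostingRegressor as a stand-in for the paper's boosting setup; the dimensionless groups, variable names, and synthetic target are illustrative assumptions, not the paper's actual feature set.

```python
# Minimal sketch of a Pi-ML-style parameterization: build dimensionless
# (Buckingham-Pi) groups from surface-layer measurements, then fit a
# gradient-boosted regressor to log10(C_n^2). Groups and target here are
# illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

G = 9.81  # gravitational acceleration [m/s^2]

def pi_groups(u_star, theta_star, z, L, T):
    """Illustrative dimensionless predictors; z/L is the classic stability parameter."""
    return np.column_stack([
        z / L,                # Monin-Obukhov stability
        theta_star / T,       # normalized temperature scale
        u_star**2 / (G * z),  # Froude-like velocity group
    ])

rng = np.random.default_rng(0)
n = 512
u_star = rng.uniform(0.1, 0.6, n)        # friction velocity [m/s]
theta_star = rng.uniform(-0.5, 0.5, n)   # surface-layer temperature scale [K]
L = rng.uniform(-200.0, 200.0, n)        # Obukhov length [m]
z, T = 2.0, 290.0                        # sensor height [m], mean temperature [K]

X = pi_groups(u_star, theta_star, z, L, T)
# Synthetic stand-in for measured log10(C_n^2); real training would use
# scintillometer ground truth, as in the main paper above.
y = -15.0 + 0.8 * np.tanh(X[:, 0]) + 0.1 * rng.normal(size=n)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)
print("predicted log10(C_n^2):", model.predict(X[:1])[0])
```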