Measurement of Hybrid Rocket Solid Fuel Regression Rate for a Slab
Burner using Deep Learning
- URL: http://arxiv.org/abs/2108.11276v1
- Date: Wed, 25 Aug 2021 14:57:23 GMT
- Title: Measurement of Hybrid Rocket Solid Fuel Regression Rate for a Slab
Burner using Deep Learning
- Authors: Gabriel Surina III, Georgios Georgalis, Siddhant S. Aphale, Abani
Patra, Paul E. DesJardin
- Abstract summary: This study presents an imaging-based deep learning tool to measure the fuel regression rate in a 2D slab burner experiment for hybrid rocket fuels.
A DSLR camera with a high-intensity flash is used to capture images throughout the burn, and the images are then used to find the fuel boundary and calculate the regression rate.
A U-net convolutional neural network architecture is explored to segment the fuel from the experimental images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents an imaging-based deep learning tool to measure the fuel
regression rate in a 2D slab burner experiment for hybrid rocket fuels. The
slab burner experiment is designed to verify mechanistic models of reacting
boundary layer combustion in hybrid rockets by the measurement of fuel
regression rates. A DSLR camera with a high-intensity flash is used to capture
images throughout the burn, and the images are then used to find the fuel
boundary and calculate the regression rate. A U-net convolutional neural network
architecture is explored to segment the fuel from the experimental images. A
Monte-Carlo Dropout process is used to quantify the regression rate uncertainty
produced by the network. The U-net computed regression rates are compared
with values from other techniques in the literature and show errors of less than
10%. An oxidizer flux dependency study shows that the U-net predictions of
regression rates are accurate and independent of the oxidizer flux when the
images in the training set are not over-saturated. Training with monochrome
images is also explored but does not successfully predict the fuel regression
rate from high-noise images. Compared to traditional image processing
techniques, such as threshold binary conversion and spatial filtering, the
network is superior at filtering out noise introduced by soot, pitting, and wax
deposition on the chamber glass, as well as by the flame. The U-net consistently
provides low-error image segmentations that allow accurate computation of the
fuel regression rate.
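To make the segmentation-plus-uncertainty idea in the abstract concrete, the following is a minimal sketch in PyTorch of a small U-Net-style segmenter whose dropout layers are kept active at inference, which is the essence of Monte-Carlo Dropout. The layer widths, dropout rate, number of stochastic passes, and image size here are illustrative assumptions, not the configuration reported in the paper.

# Hedged sketch: small U-Net-style segmenter with Monte-Carlo Dropout.
# Layer widths, dropout rate, and input size are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, p_drop=0.2):
    # Two 3x3 convolutions with ReLU; Dropout2d supplies the MC-Dropout noise.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Dropout2d(p_drop),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)      # single-channel experimental image
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)    # per-pixel fuel / not-fuel logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

def mc_dropout_predict(model, image, n_samples=20):
    # Keep dropout active at inference and average several stochastic passes;
    # the per-pixel standard deviation serves as the segmentation uncertainty.
    model.train()  # leaves Dropout2d enabled
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(image)) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

if __name__ == "__main__":
    net = TinyUNet()
    frame = torch.rand(1, 1, 128, 256)  # placeholder for a flash-lit burn image
    mean_mask, uncertainty = mc_dropout_predict(net, frame)
    print(mean_mask.shape, uncertainty.mean().item())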
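The abstract also reduces the segmented fuel boundary to a regression rate and contrasts the network with threshold binary conversion. The sketch below shows one plausible reduction, where the column-averaged fuel thickness in pixels is differenced between two frames and scaled to physical units, next to a simple threshold baseline. The height-averaging step, frame spacing, and pixel-to-mm scale are assumptions for illustration, not the paper's procedure.

# Hedged sketch: regression rate from two segmented frames, plus a
# threshold-binarization baseline. Frame spacing and pixel scale are placeholders.
import numpy as np

def fuel_height(mask):
    # Mean fuel thickness in pixels: count fuel pixels in each image column
    # of the binary mask, then average across the slab.
    return mask.sum(axis=0).mean()

def regression_rate(mask_t0, mask_t1, dt_s, mm_per_px):
    # r = -dh/dt, converted from pixels per second to mm per second.
    dh_px = fuel_height(mask_t0) - fuel_height(mask_t1)
    return dh_px * mm_per_px / dt_s

def threshold_baseline(gray, level=0.5):
    # Traditional binary-threshold segmentation the paper compares against;
    # soot, pitting, wax deposition, and flame glare all leak into this mask.
    return (gray > level).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame0 = rng.random((128, 256))   # placeholder grayscale frames
    frame1 = rng.random((128, 256))
    m0, m1 = threshold_baseline(frame0), threshold_baseline(frame1)
    print(f"rate ~ {regression_rate(m0, m1, dt_s=0.5, mm_per_px=0.1):.3f} mm/s")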
Related papers
- Efficient Diffusion as Low Light Enhancer [63.789138528062225]
Reflectance-Aware Trajectory Refinement (RATR) is a simple yet effective module to refine the teacher trajectory using the reflectance component of images.
Reflectance-aware Diffusion with Distilled Trajectory (ReDDiT) is an efficient and flexible distillation framework tailored for Low-Light Image Enhancement (LLIE).
arXiv Detail & Related papers (2024-10-16T08:07:18Z) - ReDistill: Residual Encoded Distillation for Peak Memory Reduction [5.532610278442954]
We propose residual encoded distillation (ReDistill) for peak memory reduction in a teacher-student framework.
For image classification, our method reduces measured peak memory by 2x-3.2x on an edge GPU with negligible degradation in accuracy for most CNN-based architectures.
For diffusion-based image generation, our proposed distillation method yields a denoising network with 4x lower theoretical peak memory.
arXiv Detail & Related papers (2024-06-06T04:44:10Z) - Deep Equilibrium Diffusion Restoration with Parallel Sampling [120.15039525209106]
Diffusion model-based image restoration (IR) aims to use diffusion models to recover high-quality (HQ) images from degraded images, achieving promising performance.
Most existing methods need long serial sampling chains to restore HQ images step-by-step, resulting in expensive sampling time and high computation costs.
In this work, we rethink diffusion model-based IR models from a different perspective, i.e., as a deep equilibrium (DEQ) fixed-point system, called DeqIR.
arXiv Detail & Related papers (2023-11-20T08:27:56Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Accelerating Multiframe Blind Deconvolution via Deep Learning [0.0]
Ground-based solar image restoration is a computationally expensive procedure.
We propose a new method to accelerate the restoration based on algorithm unrolling.
We show that both methods significantly reduce the restoration time compared to the standard optimization procedure.
arXiv Detail & Related papers (2023-06-21T07:53:00Z) - Unsupervised Wildfire Change Detection based on Contrastive Learning [1.53934570513443]
Accurate characterization of the severity of a wildfire event contributes to assessing the fuel conditions in fire-prone areas.
The aim of this study is to develop an autonomous system built on top of high-resolution multispectral satellite imagery, with an advanced deep learning method for detecting burned area change.
arXiv Detail & Related papers (2022-11-26T20:13:14Z) - An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy
Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z) - Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z) - Conditional Variational Autoencoder for Learned Image Reconstruction [5.487951901731039]
We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, it incorporates the data formation process (i.e., the forward operator), and the learned reconstructive properties are transferable between different datasets.
arXiv Detail & Related papers (2021-10-22T10:02:48Z) - Two-Stage Single Image Reflection Removal with Reflection-Aware Guidance [78.34235841168031]
We present a novel two-stage network with reflection-aware guidance (RAGNet) for single image reflection removal (SIRR).
RAG can be used (i) to mitigate the effect of reflection from the observation, and (ii) to generate a mask for partial convolution that mitigates the effect of deviating from the linear combination hypothesis.
Experiments on five commonly used datasets demonstrate the quantitative and qualitative superiority of our RAGNet in comparison to the state-of-the-art SIRR methods.
arXiv Detail & Related papers (2020-12-02T03:14:57Z) - Enhancement of damaged-image prediction through Cahn-Hilliard Image
Inpainting [0.0]
We train a neural network based on dense layers with the training set of MNIST.
We then contaminate the test set with damage of different types and intensities.
We compare the prediction accuracy of the neural network with and without applying the Cahn-Hilliard filter to the damaged test images.
arXiv Detail & Related papers (2020-07-21T12:29:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.