Towards Real-Time Inference of Thin Liquid Film Thickness Profiles from Interference Patterns Using Vision Transformers
- URL: http://arxiv.org/abs/2510.25157v1
- Date: Wed, 29 Oct 2025 04:19:52 GMT
- Title: Towards Real-Time Inference of Thin Liquid Film Thickness Profiles from Interference Patterns Using Vision Transformers
- Authors: Gautam A. Viruthagiri, Arnuv Tandon, Gerald G. Fuller, Vinny Chandran Suja
- Abstract summary: A vision transformer-based approach for real-time inference of thin liquid film thickness profiles directly from isolated interferograms. The network demonstrates state-of-the-art performance on noisy, rapidly-evolving films with motion artifacts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Thin film interferometry is a powerful technique for non-invasively measuring liquid film thickness with applications in ophthalmology, but its clinical translation is hindered by the challenges in reconstructing thickness profiles from interference patterns - an ill-posed inverse problem complicated by phase periodicity, imaging noise and ambient artifacts. Traditional reconstruction methods are either computationally intensive, sensitive to noise, or require manual expert analysis, which is impractical for real-time diagnostics. To address this challenge, here we present a vision transformer-based approach for real-time inference of thin liquid film thickness profiles directly from isolated interferograms. Trained on a hybrid dataset combining physiologically-relevant synthetic and experimental tear film data, our model leverages long-range spatial correlations to resolve phase ambiguities and reconstruct temporally coherent thickness profiles in a single forward pass from dynamic interferograms acquired in vivo and ex vivo. The network demonstrates state-of-the-art performance on noisy, rapidly-evolving films with motion artifacts, overcoming limitations of conventional phase-unwrapping and iterative fitting methods. Our data-driven approach enables automated, consistent thickness reconstruction at real-time speeds on consumer hardware, opening new possibilities for continuous monitoring of pre-lens ocular tear films and non-invasive diagnosis of conditions such as the dry eye disease.
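The phase periodicity that makes this inverse problem ill-posed can be illustrated with the standard two-beam thin-film interference model: reflected intensity depends on thickness only through a cosine of the optical phase, so thicknesses differing by half a wavelength in the film produce identical intensities. The sketch below uses illustrative values (550 nm illumination, film refractive index 1.34) and omits reflection phase shifts; none of these parameters are taken from the paper.

```python
import math

def interference_intensity(h_nm, wavelength_nm=550.0, n_film=1.34):
    """Normalized two-beam interference intensity for a thin film of
    thickness h_nm at normal incidence. The phase difference between
    reflections from the top and bottom surfaces is 4*pi*n*h/lambda;
    constant reflection phase shifts are omitted, as they do not
    affect the periodicity in h."""
    phase = 4.0 * math.pi * n_film * h_nm / wavelength_nm
    return 0.5 * (1.0 + math.cos(phase))  # scaled to [0, 1]

# Thicknesses separated by lambda/(2n) are indistinguishable from a
# single intensity measurement -- the source of the phase ambiguity.
period_nm = 550.0 / (2.0 * 1.34)  # ~205 nm
i_a = interference_intensity(300.0)
i_b = interference_intensity(300.0 + period_nm)
print(abs(i_a - i_b))  # identical intensities for distinct thicknesses
```

Resolving which branch of this periodic map a given pixel lies on requires spatial context from neighboring fringes, which is what the long-range attention of a vision transformer provides.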
Related papers
- Real-time topology-aware M-mode OCT segmentation for robotic deep anterior lamellar keratoplasty (DALK) guidance [4.803245501695445]
We present a lightweight, topology-aware M-mode segmentation pipeline based on UNeXt. The proposed system achieves end-to-end throughput exceeding 80 Hz, measured over the complete preprocessing, inference, and overlay pipeline.
arXiv Detail & Related papers (2026-02-02T20:58:04Z) - Dynamic Reconstruction of Ultrasound-Derived Flow Fields With Physics-Informed Neural Fields [0.0]
We present a physics-informed neural field model for estimating blood flow from sparse and noisy ultrasound data. This model achieves consistently low mean squared error in denoising and inpainting both synthetic and real datasets. We adapt methods that have proven effective in other imaging modalities to address the specific challenge of ultrasound-based flow reconstruction.
arXiv Detail & Related papers (2025-11-03T18:05:11Z) - Accelerating 3D Photoacoustic Computed Tomography with End-to-End Physics-Aware Neural Operators [74.65171736966131]
Photoacoustic computed tomography (PACT) combines optical contrast with ultrasonic resolution, achieving deep-tissue imaging beyond the optical diffusion limit. Current implementations require dense transducer arrays and prolonged acquisition times, limiting clinical translation. We introduce Pano, an end-to-end physics-aware model that directly learns the inverse acoustic mapping from sensor measurements to volumetric reconstructions.
arXiv Detail & Related papers (2025-09-11T23:12:55Z) - Self-supervised physics-informed generative networks for phase retrieval from a single X-ray hologram [0.4221292142376107]
We present a self-learning approach for solving the inverse problem of phase retrieval in the near-field regime of Fresnel theory. Unlike most deep learning approaches for phase retrieval, our approach does not require paired, unpaired, or simulated training data.
arXiv Detail & Related papers (2025-08-21T13:06:06Z) - Topology-based deep-learning segmentation method for deep anterior lamellar keratoplasty (DALK) surgical guidance using M-mode OCT data [0.0]
We develop a topology-based deep-learning segmentation method that integrates a topological loss function with a modified network architecture. This approach effectively reduces the effects of noise and improves segmentation speed, precision, and stability.
arXiv Detail & Related papers (2025-01-07T19:57:15Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional
Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed under the gold standard of Fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Fluctuation-based deconvolution in fluorescence microscopy using
plug-and-play denoisers [2.236663830879273]
The spatial resolution of images of living samples obtained by fluorescence microscopes is physically limited by the diffraction of visible light.
Several deconvolution and super-resolution techniques have been proposed to overcome this limitation.
arXiv Detail & Related papers (2023-03-20T15:43:52Z) - A Spatiotemporal Model for Precise and Efficient Fully-automatic 3D
Motion Correction in OCT [10.550562752812894]
OCT instruments image by scanning a focused light spot across the retina, acquiring sequential cross-sectional images to generate volumetric data.
Patient eye motion during the acquisition poses unique challenges: non-rigid distortions occur, leading to gaps in the data.
We present a new distortion model and a corresponding fully-automatic, reference-free optimization strategy for computational robustness.
arXiv Detail & Related papers (2022-09-15T11:48:53Z) - OADAT: Experimental and Synthetic Clinical Optoacoustic Data for
Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
arXiv Detail & Related papers (2022-06-17T08:11:26Z) - Transient motion classification through turbid volumes via parallelized
single-photon detection and deep contrastive embedding [12.806431481376787]
We propose a technique termed Classifying Rapid decorrelation Events via Parallelized single photon dEtection (CREPE).
It can probe and classify different decorrelating movements hidden underneath a turbid volume with high sensitivity, using parallelized speckle detection from a $32\times32$ pixel SPAD array.
This has the potential to be applied to monitor deep tissue motion patterns, for example identifying abnormal cerebral blood flow events.
arXiv Detail & Related papers (2022-04-04T14:27:36Z) - Data-driven generation of plausible tissue geometries for realistic
photoacoustic image synthesis [53.65837038435433]
Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate"
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
arXiv Detail & Related papers (2021-03-29T11:30:18Z) - Intrinsic Temporal Regularization for High-resolution Human Video
Synthesis [59.54483950973432]
Temporal consistency is crucial for extending image processing pipelines to the video domain.
We propose an effective intrinsic temporal regularization scheme, where an intrinsic confidence map is estimated via the frame generator to regulate motion estimation.
We apply our intrinsic temporal regularization to a single-image generator, leading to a powerful "INTERnet" capable of generating $512\times512$ resolution human action videos.
arXiv Detail & Related papers (2020-12-11T05:29:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.