Bessel Equivariant Networks for Inversion of Transmission Effects in
Multi-Mode Optical Fibres
- URL: http://arxiv.org/abs/2207.12849v1
- Date: Tue, 26 Jul 2022 12:29:12 GMT
- Title: Bessel Equivariant Networks for Inversion of Transmission Effects in
Multi-Mode Optical Fibres
- Authors: Joshua Mitton, Simon Peter Mekhail, Miles Padgett, Daniele Faccio,
Marco Aversa, Roderick Murray-Smith
- Abstract summary: We develop a new type of model for solving the task of inverting the transmission effects of multi-mode optical fibres.
We use the azimuthal correlations known to exist in fibre speckle patterns to account for the difference in spatial arrangement between input and speckle patterns.
This model can scale to previously unachievable resolutions of imaging with multi-mode optical fibres and is demonstrated on $256 \times 256$ pixel images.
- Score: 3.2981146586835703
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a new type of model for solving the task of inverting the
transmission effects of multi-mode optical fibres through the construction of
an $\mathrm{SO}^{+}(2,1)$-equivariant neural network. This model takes
advantage of the azimuthal correlations known to exist in fibre speckle
patterns and naturally accounts for the difference in spatial arrangement
between input and speckle patterns. In addition, we use a second
post-processing network to remove circular artifacts, fill gaps, and sharpen
the images, which is required due to the nature of optical fibre transmission.
This two stage approach allows for the inspection of the predicted images
produced by the more robust physically motivated equivariant model, which could
be useful in a safety-critical application, or by the output of both models,
which produces high quality images. Further, this model can scale to previously
unachievable resolutions of imaging with multi-mode optical fibres and is
demonstrated on $256 \times 256$ pixel images. This is a result of improving
the trainable parameter requirement from $\mathcal{O}(N^4)$ to
$\mathcal{O}(m)$, where $N$ is the pixel size and $m$ is the number of fibre modes.
Finally, this model generalises to new images, outside of the set of training
data classes, better than previous models.
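As a rough illustration of the parameter-scaling claim above (this is not the authors' code, and the mode count $m$ is a hypothetical value chosen for the example): a dense inverse map from an $N \times N$ speckle image to an $N \times N$ output image needs $(N^2)^2 = N^4$ trainable weights, whereas a mode-based model needs on the order of one weight per fibre mode.

```python
# Illustrative parameter counts only; m = 10_000 is an assumed, hypothetical
# number of guided fibre modes, not a figure from the paper.
N = 256          # image edge length in pixels
m = 10_000       # assumed number of fibre modes

dense_params = (N * N) ** 2   # fully connected N^2 -> N^2 inverse map: O(N^4)
mode_params = m               # roughly one trainable weight per mode: O(m)

print(f"dense:      {dense_params:,} parameters")   # 4,294,967,296
print(f"mode-based: {mode_params:,} parameters")
print(f"reduction:  {dense_params / mode_params:,.0f}x")
```

At $N = 256$ the dense map already requires over four billion weights, which is why the $\mathcal{O}(m)$ parameterisation is what makes this resolution trainable at all.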
Related papers
- DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception [66.88792390480343]
We propose DEEM, a simple and effective approach that utilizes the generative feedback of diffusion models to align the semantic distributions of the image encoder.
DEEM exhibits enhanced robustness and a superior capacity to alleviate hallucinations while utilizing fewer trainable parameters, less pre-training data, and a smaller base model size.
arXiv Detail & Related papers (2024-05-24T05:46:04Z) - I$^2$SB: Image-to-Image Schrödinger Bridge [87.43524087956457]
Image-to-Image Schrödinger Bridge (I$^2$SB) is a new class of conditional diffusion models.
I$^2$SB directly learns the nonlinear diffusion processes between two given distributions.
We show that I$2$SB surpasses standard conditional diffusion models with more interpretable generative processes.
arXiv Detail & Related papers (2023-02-12T08:35:39Z) - Uncovering the Disentanglement Capability in Text-to-Image Diffusion
Models [60.63556257324894]
A key desired property of image generative models is the ability to disentangle different attributes.
We propose a simple, light-weight image editing algorithm where the mixing weights of the two text embeddings are optimized for style matching and content preservation.
Experiments show that the proposed method can modify a wide range of attributes, outperforming diffusion-model-based image-editing algorithms.
arXiv Detail & Related papers (2022-12-16T19:58:52Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - FAS-UNet: A Novel FAS-driven Unet to Learn Variational Image
Segmentation [3.741136641573471]
We propose a novel variational-model-informed network (FAS-Unet) that exploits the model and algorithm priors to extract the multi-scale features.
The proposed network integrates image data and mathematical models, and implements them through learning a few convolution kernels.
Experimental results show that the proposed FAS-Unet is very competitive with other state-of-the-art methods in qualitative, quantitative and model complexity evaluations.
arXiv Detail & Related papers (2022-10-27T04:15:16Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - RG-Flow: A hierarchical and explainable flow model based on
renormalization group and sparse prior [2.274915755738124]
Flow-based generative models have become an important class of unsupervised learning approaches.
In this work, we incorporate the key ideas of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, RG-Flow.
Our proposed method has $O(\log L)$ complexity for inpainting of an image with edge length $L$, compared to previous generative models with $O(L^2)$ complexity.
arXiv Detail & Related papers (2020-09-30T18:04:04Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image.
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation.
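A minimal sketch of the locally masked convolution idea described above (an illustrative reimplementation, not the paper's code): a shared kernel is multiplied by a per-location binary mask before each dot product, so the effective receptive field can differ at every pixel.

```python
import numpy as np

def locally_masked_conv2d(x, kernel, masks):
    """Naive locally masked 2D convolution (illustrative sketch).

    x:      (H, W) input image
    kernel: (k, k) shared weights
    masks:  (H, W, k, k) binary mask applied at each output location
    """
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)  # zero-pad so output matches input size
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernel * masks[i, j])
    return out

# Example: a raster-scan causal mask (only pixels above and to the left
# of the centre contribute), broadcast to every location.
H = W = 4
k = 3
kernel = np.ones((k, k))
causal = np.zeros((k, k))
causal[0, :] = 1   # row above the centre
causal[1, 0] = 1   # pixel to the left of the centre
masks = np.broadcast_to(causal, (H, W, k, k))
y = locally_masked_conv2d(np.ones((H, W)), kernel, masks)
```

Because `masks` is indexed per output location, arbitrary generation orders (not just raster scan) can be encoded by supplying a different mask at each pixel, which is the property the ensemble of generation orders relies on.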
arXiv Detail & Related papers (2020-06-22T17:59:07Z) - Efficient and Model-Based Infrared and Visible Image Fusion Via
Algorithm Unrolling [24.83209572888164]
Infrared and visible image fusion (IVIF) expects to obtain images that retain thermal radiation information from infrared images and texture details from visible images.
A model-based convolutional neural network (CNN) model is proposed to overcome the shortcomings of traditional CNN-based IVIF models.
arXiv Detail & Related papers (2020-05-12T16:15:56Z) - Concurrently Extrapolating and Interpolating Networks for Continuous
Model Generation [34.72650269503811]
We propose a simple yet effective model generation strategy to form a sequence of models that only requires a set of specific-effect label images.
We show that the proposed method is capable of producing a series of continuous models and achieves better performance than that of several state-of-the-art methods for image smoothing.
arXiv Detail & Related papers (2020-01-12T04:44:44Z) - A Two-step-training Deep Learning Framework for Real-time Computational
Imaging without Physics Priors [0.0]
We propose a two-step-training DL (TST-DL) framework for real-time computational imaging without physics priors.
First, a single fully-connected layer (FCL) is trained to directly learn the model.
Then, this FCL is fixed and combined with an un-trained U-Net architecture for a second-step training to improve the output image fidelity.
arXiv Detail & Related papers (2020-01-10T15:05:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.