Small-brain neural networks rapidly solve inverse problems with vortex Fourier encoders
- URL: http://arxiv.org/abs/2005.07682v1
- Date: Fri, 15 May 2020 17:53:32 GMT
- Title: Small-brain neural networks rapidly solve inverse problems with vortex Fourier encoders
- Authors: Baurzhan Muminov and Luat T. Vuong
- Abstract summary: We introduce a vortex phase transform with a lenslet-array to accompany shallow, dense, ``small-brain'' neural networks for high-speed and low-light imaging.
With vortex spatial encoding, a small brain is trained to deconvolve images at rates 5-20 times faster than those achieved with random encoding schemes.
We reconstruct MNIST Fashion objects illuminated with low-light flux at a rate of several thousand frames per second on a 15 W central processing unit.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a vortex phase transform with a lenslet-array to accompany
shallow, dense, ``small-brain'' neural networks for high-speed and low-light
imaging. Our single-shot ptychographic approach exploits the coherent
diffraction, compact representation, and edge enhancement of Fourier-transformed
spiral-phase gradients. With vortex spatial encoding, a small brain is trained
to deconvolve images at rates 5-20 times faster than those achieved with random
encoding schemes, where greater advantages are gained in the presence of noise.
Once trained, the small brain reconstructs an object from intensity-only data,
solving an inverse mapping without performing iterations on each image and
without deep-learning schemes. With this hybrid, optical-digital, vortex
Fourier encoded, small-brain scheme, we reconstruct MNIST Fashion objects
illuminated with low-light flux (5 nJ/cm$^2$) at a rate of several thousand
frames per second on a 15 W central processing unit, two orders of magnitude
faster than convolutional neural networks.
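
As a rough, hedged illustration of the scheme described in the abstract (not the authors' code), the sketch below builds a spiral-phase (vortex) mask in the Fourier plane, simulates intensity-only detection, and fits a single linear layer as the simplest possible stand-in for the shallow ``small-brain'' network. The image size, topological charge, random training objects, and ridge-regression fit are all illustrative assumptions.

```python
# Hedged, minimal sketch of the idea in the abstract -- NOT the authors' code.
# Assumptions (illustrative only): a 28x28 object, a single vortex phase mask
# exp(i * ell * theta) applied in the Fourier plane, intensity-only detection,
# and one linear layer fit by ridge regression standing in for the trained,
# shallow "small-brain" network.
import numpy as np

rng = np.random.default_rng(0)
N = 28        # image size (e.g. Fashion-MNIST)
ell = 1       # topological charge of the vortex phase (assumed)

# Spiral (vortex) phase mask in the Fourier plane: exp(i * ell * theta).
fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
vortex = np.exp(1j * ell * np.arctan2(fy, fx))

def encode(obj):
    """Forward model: FFT, spiral-phase filter, inverse FFT, intensity detection."""
    field = np.fft.ifft2(np.fft.fft2(obj) * vortex)
    return np.abs(field) ** 2          # camera records intensity only

# "Train" the single dense layer W on random stand-in objects (real data would
# be Fashion-MNIST); ridge regression replaces gradient-descent training here.
X = rng.random((2000, N, N))
Y = X.reshape(len(X), -1)                                   # targets: objects
M = np.stack([encode(x) for x in X]).reshape(len(X), -1)    # measurements
lam = 1e-2                                                  # ridge strength
W = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ Y)

# Inference is a single matrix product per frame, which is why this style of
# non-iterative inverse mapping can run at thousands of frames per second on a CPU.
recon = (encode(X[0]).reshape(1, -1) @ W).reshape(N, N)
print(recon.shape)   # (28, 28)
```

The point of the sketch is the workflow rather than fidelity: a fixed optical encoder does the heavy lifting up front, and the digital stage reduces to one small, non-iterative mapping.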
Related papers
- Sign-Coded Exposure Sensing for Noise-Robust High-Speed Imaging [16.58669052286989]
We present a novel optical compression of high-speed frames employing pixel-level sign-coded exposure.
Walsh functions ensure that the noise is not amplified during high-speed frame reconstruction.
Our hardware prototype demonstrated the reconstruction of 4kHz frames of a moving scene lit by ambient light only.
arXiv Detail & Related papers (2023-05-05T01:03:37Z)
- Time-lapse image classification using a diffractive neural network [0.0]
We show, for the first time, a time-lapse image classification scheme using a diffractive network.
We show a blind testing accuracy of 62.03% on the optical classification of objects from the CIFAR-10 dataset.
This constitutes the highest inference accuracy achieved so far using a single diffractive network.
arXiv Detail & Related papers (2022-08-23T08:16:30Z)
- PREF: Phasorial Embedding Fields for Compact Neural Representations [54.44527545923917]
We present a phasorial embedding field, PREF, as a compact representation to facilitate neural signal modeling and reconstruction tasks.
Our experiments show PREF-based neural signal processing technique is on par with the state-of-the-art in 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
arXiv Detail & Related papers (2022-05-26T17:43:03Z)
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds (a minimal sketch of this encoding follows the related-papers list below).
arXiv Detail & Related papers (2022-01-16T07:22:47Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Small Lesion Segmentation in Brain MRIs with Subpixel Embedding [105.1223735549524]
We present a method to segment MRI scans of the human brain into ischemic stroke lesion and normal tissues.
We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network.
arXiv Detail & Related papers (2021-09-18T00:21:17Z)
- Dual-view Snapshot Compressive Imaging via Optical Flow Aided Recurrent Neural Network [14.796204921975733]
Dual-view snapshot compressive imaging (SCI) aims to capture videos from two field-of-views (FoVs) in a single snapshot.
It is challenging for existing model-based decoding algorithms to reconstruct each individual scene.
We propose an optical flow-aided recurrent neural network for dual video SCI systems, which provides high-quality decoding in seconds.
arXiv Detail & Related papers (2021-09-11T14:24:44Z)
- Spatially-Adaptive Pixelwise Networks for Fast Image Translation [57.359250882770525]
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation.
We use pixel-wise networks; that is, each pixel is processed independently of others.
Our model is up to 18x faster than state-of-the-art baselines.
arXiv Detail & Related papers (2020-12-05T10:02:03Z)
- 11 TeraFLOPs per second photonic convolutional accelerator for deep learning optical neural networks [0.0]
We demonstrate a universal optical vector convolutional accelerator operating beyond 10 TeraFLOPS (floating-point operations per second).
We then use the same hardware to sequentially form a deep optical CNN with ten output neurons, achieving recognition of all 10 digits from 900-pixel handwritten digit images with 88% accuracy.
This approach is scalable and trainable to much more complex networks for demanding applications such as unmanned vehicle and real-time video recognition.
arXiv Detail & Related papers (2020-11-14T21:24:01Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
- u-net CNN based fourier ptychography [5.46367622374939]
We propose a new retrieval algorithm that is based on convolutional neural networks.
Experiments demonstrate that our model achieves better reconstruction results and is more robust under system aberrations.
arXiv Detail & Related papers (2020-03-16T22:48:44Z)
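
For the multiresolution hash encoding summarized in the Instant Neural Graphics Primitives entry above, here is a minimal NumPy sketch under stated assumptions: the level count, table size, feature width, growth factor, and spatial-hash multipliers are illustrative choices, and in the actual method the table entries are optimized jointly with a small MLP rather than left at random initialization as they are here.

```python
# Hedged sketch of a 2D multiresolution hash encoding (illustrative parameters).
import numpy as np

L, T, F = 4, 2 ** 14, 2          # levels, hash-table size, features per entry (assumed)
N_min, N_max = 16, 256           # coarsest / finest grid resolution (assumed)
b = np.exp((np.log(N_max) - np.log(N_min)) / (L - 1))   # per-level growth factor
PRIMES = np.array([1, 2654435761], dtype=np.uint64)     # spatial-hash multipliers

rng = np.random.default_rng(0)
tables = [rng.uniform(-1e-4, 1e-4, size=(T, F)) for _ in range(L)]  # trainable in practice

def hash2d(ix, iy):
    # XOR of coordinate-times-multiplier, modulo table size.
    return (ix.astype(np.uint64) * PRIMES[0] ^ iy.astype(np.uint64) * PRIMES[1]) % T

def encode(xy):
    """Concatenate bilinearly interpolated features from every resolution level.
    xy: (n, 2) points in [0, 1]^2 -> (n, L*F) feature vectors for a small MLP."""
    feats = []
    for lvl in range(L):
        res = int(round(N_min * b ** lvl))
        pos = xy * res
        i0 = np.floor(pos).astype(np.int64)
        w = pos - i0
        f = 0.0
        for dx in (0, 1):
            for dy in (0, 1):
                idx = hash2d(i0[:, 0] + dx, i0[:, 1] + dy)
                wgt = ((w[:, 0] if dx else 1 - w[:, 0]) *
                       (w[:, 1] if dy else 1 - w[:, 1]))[:, None]
                f = f + wgt * tables[lvl][idx]
        feats.append(f)
    return np.concatenate(feats, axis=1)

print(encode(rng.random((5, 2))).shape)   # (5, 8)
```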
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.