Weighted Encoding Optimization for Dynamic Single-pixel Imaging and
Sensing
- URL: http://arxiv.org/abs/2201.02833v1
- Date: Sat, 8 Jan 2022 14:11:22 GMT
- Title: Weighted Encoding Optimization for Dynamic Single-pixel Imaging and
Sensing
- Authors: Xinrui Zhan, Liheng Bian, Chunli Zhu, Jun Zhang
- Abstract summary: We report a weighted optimization technique for dynamic rate-adaptive single-pixel imaging and sensing.
Experiments on the MNIST dataset validated that once the network is trained with a sampling rate of 1, the average imaging PSNR reaches 23.50 dB at 0.1 sampling rate.
- Score: 5.009136541766621
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using single-pixel detection, an end-to-end neural network that jointly
optimizes both encoding and decoding enables high-precision imaging and
high-level semantic sensing. However, for varied sampling rates, the
large-scale network requires retraining, which is laborious and
computationally expensive. In this letter, we report a weighted optimization
technique for dynamic rate-adaptive single-pixel imaging and sensing, which
requires training the network only once and then applies to any sampling
rate. Specifically, we introduce a novel weighting scheme in the encoding
process to characterize the modulation efficiency of different patterns. While
the network is trained at a high sampling rate, the modulation patterns and
corresponding weights are updated iteratively, producing an optimally ranked
encoding series at convergence. In the experimental implementation, the
pattern series with the highest weights is employed for light modulation, thus
achieving highly efficient imaging and sensing. The reported strategy avoids the
additional training of a separate low-rate network required by existing
dynamic single-pixel networks, which doubles training efficiency.
Experiments on the MNIST dataset validate that once the network is trained
at a sampling rate of 1, the average imaging PSNR reaches 23.50 dB at a
sampling rate of 0.1, and the image-free classification accuracy reaches up to
95.00% at a sampling rate of 0.03 and 97.91% at a sampling rate of 0.1.
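The ranked-encoding idea described in the abstract, keeping only the top-weighted modulation patterns for a target sampling rate, can be sketched as follows. This is a minimal illustration with random stand-ins for the learned patterns and weights; the variable names and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the trained encoder: M modulation patterns for an HxW scene,
# plus one scalar weight per pattern scoring its modulation efficiency.
# (In the paper these are learned jointly; random values here are placeholders.)
H, W, M = 28, 28, 784              # sampling rate 1 means M = H*W patterns
patterns = rng.standard_normal((M, H * W))
weights = rng.random(M)

def select_patterns(patterns, weights, sampling_rate):
    """Keep the highest-weighted patterns for a given sampling rate."""
    k = max(1, int(round(sampling_rate * patterns.shape[0])))
    top = np.argsort(weights)[::-1][:k]    # indices of the k largest weights
    return patterns[top]

def single_pixel_measure(scene, patterns):
    """Each single-pixel measurement is one pattern's inner product with the scene."""
    return patterns @ scene.ravel()

scene = rng.random((H, W))
enc = select_patterns(patterns, weights, sampling_rate=0.1)
y = single_pixel_measure(scene, enc)       # 78 measurements at 10% sampling
```

The same trained pattern bank then serves any sampling rate: changing `sampling_rate` only changes how many top-ranked patterns are kept, so no retraining is needed.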
Related papers
- Efficient NeRF Optimization -- Not All Samples Remain Equally Hard [9.404889815088161]
We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF).
NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources.
arXiv Detail & Related papers (2024-08-06T13:49:01Z) - Direct Zernike Coefficient Prediction from Point Spread Functions and Extended Images using Deep Learning [36.136619420474766]
Existing adaptive optics systems rely on iterative search algorithms to correct aberrations and improve images.
This study demonstrates the application of convolutional neural networks to characterise the optical aberration.
arXiv Detail & Related papers (2024-04-23T17:03:53Z) - MB-RACS: Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network [65.1004435124796]
We propose a Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network (MB-RACS) framework.
Our experiments demonstrate that the proposed MB-RACS method surpasses current leading methods.
arXiv Detail & Related papers (2024-01-19T04:40:20Z) - RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time
Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z) - Compression with Bayesian Implicit Neural Representations [16.593537431810237]
We propose overfitting variational neural networks to the data and compressing an approximate posterior weight sample using relative entropy coding instead of quantizing and entropy coding it.
Experiments show that our method achieves strong performance on image and audio compression while retaining simplicity.
arXiv Detail & Related papers (2023-05-30T16:29:52Z) - A Deep Learning-based in silico Framework for Optimization on Retinal
Prosthetic Stimulation [3.870538485112487]
We propose a neural network-based framework to optimize the perceptions simulated by the in silico retinal implant model pulse2percept.
The pipeline consists of a trainable encoder, a pre-trained retinal implant model and a pre-trained evaluator.
arXiv Detail & Related papers (2023-02-07T16:32:05Z) - Vertical Layering of Quantized Neural Networks for Heterogeneous
Inference [57.42762335081385]
We study a new vertical-layered representation of neural network weights for encapsulating all quantized models into a single one.
We can theoretically achieve any precision network for on-demand service while only needing to train and maintain one model.
arXiv Detail & Related papers (2022-12-10T15:57:38Z) - Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We make efforts to formulate the essential mathematical functions to describe the R-D behavior of NIC using deep network and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z) - Deep Unfolded Recovery of Sub-Nyquist Sampled Ultrasound Image [94.42139459221784]
We propose a reconstruction method from sub-Nyquist samples in the time and spatial domains, based on unfolding the ISTA algorithm.
Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high quality imaging performance.
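The unfolding mentioned in this entry can be illustrated with a plain ISTA loop run for a fixed number of iterations ("layers"). This is a generic sparse-recovery sketch, not the paper's ultrasound architecture; in a learned unfolded network the step size and threshold would be trainable per-layer parameters rather than the fixed constants assumed here.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_unfolded(y, A, n_layers=200, step=None, lam=0.05):
    """Run a fixed number of ISTA iterations ("layers") to solve a lasso problem."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        # Gradient step on the data term, then shrinkage on the sparsity term.
        x = soft_threshold(x - step * A.T @ (A @ x - y), lam * step)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # sub-Nyquist measurement matrix
x_true = np.zeros(100)
x_true[[3, 50, 77]] = [1.0, -2.0, 1.5]             # sparse ground truth
y = A @ x_true
x_hat = ista_unfolded(y, A)
```

Keeping the step size at or below 1/L guarantees that the plain ISTA iteration converges; learned unfolded variants trade that guarantee for far fewer layers.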
arXiv Detail & Related papers (2021-03-01T19:19:38Z) - Learning to Learn Parameterized Classification Networks for Scalable
Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not have a predictable recognition behavior with respect to the input resolution change.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z) - Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.