Towards Real-time Video Compressive Sensing on Mobile Devices
- URL: http://arxiv.org/abs/2408.07530v1
- Date: Wed, 14 Aug 2024 13:03:31 GMT
- Title: Towards Real-time Video Compressive Sensing on Mobile Devices
- Authors: Miao Cao, Lishun Wang, Huan Wang, Guoqing Wang, Xin Yuan
- Abstract summary: Video Snapshot Compressive Imaging (SCI) uses a low-speed 2D camera to capture high-speed scenes as snapshot compressed measurements.
We present an effective approach for video SCI reconstruction, dubbed MobileSCI, which runs in real time on mobile devices.
- Score: 18.96331666620252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video Snapshot Compressive Imaging (SCI) uses a low-speed 2D camera to capture high-speed scenes as snapshot compressed measurements, followed by a reconstruction algorithm to retrieve the high-speed video frames. Fast-evolving mobile devices and existing high-performance video SCI reconstruction algorithms motivate us to develop mobile reconstruction methods for real-world applications. Yet, deploying previous reconstruction algorithms on mobile devices remains challenging due to their complex inference processes, let alone achieving real-time mobile reconstruction. To the best of our knowledge, no video SCI reconstruction model has been designed to run on mobile devices. Towards this end, in this paper, we present an effective approach for video SCI reconstruction, dubbed MobileSCI, which for the first time runs in real time on mobile devices. Specifically, we first build a U-shaped 2D convolution-based architecture, which is much more efficient and mobile-friendly than previous state-of-the-art reconstruction methods. Besides, an efficient feature mixing block, based on channel splitting and shuffling mechanisms, is introduced as a novel bottleneck block of our proposed MobileSCI to alleviate the computational burden. Finally, a customized knowledge distillation strategy is utilized to further improve the reconstruction quality. Extensive results on both simulated and real data show that our proposed MobileSCI achieves superior reconstruction quality with high efficiency on mobile devices. In particular, we can reconstruct a 256×256×8 snapshot compressed measurement in real time (about 35 FPS) on an iPhone 15. Code is available at https://github.com/mcao92/MobileSCI.
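The abstract names channel splitting and shuffling as the core of MobileSCI's bottleneck block, but the listing gives no layer-level details. Below is a minimal PyTorch-style sketch of what such a feature mixing block could look like; the class name FeatureMixingBlock, the depthwise/pointwise branch, and all layer sizes are illustrative assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn

class FeatureMixingBlock(nn.Module):
    """Hypothetical bottleneck: split channels, convolve one half cheaply,
    re-concatenate, then shuffle channels so information mixes across groups.
    Layer choices are illustrative, not the MobileSCI implementation."""

    def __init__(self, channels: int, groups: int = 2):
        super().__init__()
        assert channels % 2 == 0, "channel splitting assumes an even channel count"
        half = channels // 2
        self.groups = groups
        # Lightweight depthwise + pointwise convolutions applied to one split only
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, kernel_size=3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, kernel_size=1, bias=False),
            nn.BatchNorm2d(half),
            nn.GELU(),
        )

    def channel_shuffle(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x.view(b, self.groups, c // self.groups, h, w)
        x = x.transpose(1, 2).contiguous()  # interleave the channel groups
        return x.view(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity, active = x.chunk(2, dim=1)  # channel splitting
        active = self.branch(active)          # process only half of the channels
        out = torch.cat([identity, active], dim=1)
        return self.channel_shuffle(out)      # channel shuffling

# Example: mix a 64-channel feature map at 256x256 resolution
block = FeatureMixingBlock(channels=64)
out = block(torch.randn(1, 64, 256, 256))
print(out.shape)  # torch.Size([1, 64, 256, 256])
```

Splitting the channels means the 3x3 work touches only half of the feature map per block, which is consistent with the abstract's stated goal of alleviating the computational burden on mobile hardware.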
Related papers
- Large Motion Video Autoencoding with Cross-modal Video VAE [52.13379965800485]
Video Variational Autoencoder (VAE) is essential for reducing video redundancy and facilitating efficient video generation.
Existing Video VAEs have begun to address temporal compression; however, they often suffer from inadequate reconstruction performance.
We present a novel and powerful video autoencoder capable of high-fidelity video encoding.
arXiv Detail & Related papers (2024-12-23T18:58:24Z) - CompactFlowNet: Efficient Real-time Optical Flow Estimation on Mobile Devices [19.80162591240214]
We present CompactFlowNet, the first real-time mobile neural network for optical flow prediction.
Optical flow serves as a fundamental building block for various video-related tasks, such as video restoration, motion estimation, video stabilization, object tracking, action recognition, and video generation.
arXiv Detail & Related papers (2024-12-17T19:06:12Z) - V^3: Viewing Volumetric Videos on Mobiles via Streamable 2D Dynamic Gaussians [53.614560799043545]
V3 (Viewing Volumetric Videos) is a novel approach that enables high-quality mobile rendering through the streaming of dynamic Gaussians.
Our key innovation is to view dynamic 3DGS as 2D videos, facilitating the use of hardware video codecs.
As the first to stream dynamic Gaussians on mobile devices, our companion player offers users an unprecedented volumetric video experience.
arXiv Detail & Related papers (2024-09-20T16:54:27Z) - Deep Optics for Video Snapshot Compressive Imaging [10.830072985735175]
Video snapshot compressive imaging (SCI) aims to capture a sequence of video frames with only a single shot of a 2D detector.
This paper presents a framework to jointly optimize masks and a reconstruction network.
We believe this is a milestone for real-world video SCI.
arXiv Detail & Related papers (2024-04-08T08:04:44Z) - Snap Video: Scaled Spatiotemporal Transformers for Text-to-Video Synthesis [69.83405335645305]
We argue that naively bringing advances of image models to the video generation domain reduces motion fidelity and visual quality, and impairs scalability.
In this work, we build Snap Video, a video-first model that systematically addresses these challenges.
We show that a U-Net - a workhorse behind image generation - scales poorly when generating videos, requiring significant computational overhead.
This allows us to efficiently train a text-to-video model with billions of parameters for the first time, reach state-of-the-art results on a number of benchmarks, and generate videos with substantially higher quality, temporal consistency, and motion complexity.
arXiv Detail & Related papers (2024-02-22T18:55:08Z) - VNVC: A Versatile Neural Video Coding Framework for Efficient Human-Machine Vision [59.632286735304156]
It is more efficient to enhance/analyze the coded representations directly without decoding them into pixels.
We propose a versatile neural video coding (VNVC) framework, which targets learning compact representations to support both reconstruction and direct enhancement/analysis.
arXiv Detail & Related papers (2023-06-19T03:04:57Z) - EfficientSCI: Densely Connected Network with Space-time Factorization for Large-scale Video Snapshot Compressive Imaging [6.8372546605486555]
We show that a UHD color video with a high compression ratio can be reconstructed from a snapshot 2D measurement using a single end-to-end deep learning model, with PSNR above 32 dB.
Our method significantly outperforms all previous SOTA algorithms with better real-time performance.
arXiv Detail & Related papers (2023-05-17T07:28:46Z) - Perceptual Image Enhancement for Smartphone Real-Time Applications [60.45737626529091]
We propose LPIENet, a lightweight network for perceptual image enhancement.
Our model can deal with noise artifacts, diffraction artifacts, blur, and HDR overexposure.
Our model can process 2K-resolution images in under 1 second on mid-level commercial smartphones.
arXiv Detail & Related papers (2022-10-24T19:16:33Z) - 10-mega pixel snapshot compressive imaging with a hybrid coded aperture [48.95666098332693]
High-resolution images are widely used in our daily life, whereas high-speed video capture is challenging due to the low frame rate of cameras working in high-resolution mode.
Snapshot compressive imaging (SCI) was proposed as a solution to the low throughput of existing imaging systems.
arXiv Detail & Related papers (2021-06-30T01:09:24Z) - Memory-Efficient Network for Large-scale Video Compressive Sensing [21.040260603729227]
Video snapshot compressive imaging (SCI) captures a sequence of video frames in a single shot using a 2D detector.
In this paper, we develop a memory-efficient network for large-scale video SCI based on multi-group reversible 3D convolutional neural networks.
arXiv Detail & Related papers (2021-03-04T15:14:58Z) - Plug-and-Play Algorithms for Video Snapshot Compressive Imaging [41.818167109996885]
We consider the reconstruction problem of video snapshot compressive imaging (SCI) using a low-speed 2D sensor (detector).
The underlying principle of SCI is to modulate the video frames with different masks; the encoded frames are then integrated into a single snapshot on the sensor (a minimal sketch of this forward model appears below).
Applying SCI to large-scale problems (HD or UHD videos) in our daily life is still challenging; one bottleneck lies in the reconstruction algorithm.
arXiv Detail & Related papers (2021-01-13T00:51:49Z)
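The Plug-and-Play entry above describes the SCI sensing principle in words. As a quick illustration, here is a minimal NumPy sketch of that forward model; the function name sci_measure and the random binary masks are assumptions made for this example, not code from any of the listed papers.

```python
import numpy as np

def sci_measure(frames: np.ndarray, masks: np.ndarray) -> np.ndarray:
    """Modulate each high-speed frame X_t by its mask M_t and integrate the
    modulated frames into a single 2D snapshot measurement Y = sum_t M_t * X_t.
    frames, masks: arrays of shape (T, H, W); returns an (H, W) snapshot."""
    assert frames.shape == masks.shape
    return (masks * frames).sum(axis=0)

# Example: compress 8 frames of a 256x256 scene into one snapshot,
# matching the 256x256x8 setting reported for MobileSCI above.
T, H, W = 8, 256, 256
frames = np.random.rand(T, H, W)                        # simulated high-speed frames
masks = (np.random.rand(T, H, W) > 0.5).astype(float)   # random binary modulation masks
snapshot = sci_measure(frames, masks)
print(snapshot.shape)  # (256, 256)
```

Reconstruction algorithms such as the plug-and-play methods and MobileSCI then invert this many-to-one mapping to recover the T frames from the single snapshot.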