Lumosaic: Hyperspectral Video via Active Illumination and Coded-Exposure Pixels
- URL: http://arxiv.org/abs/2602.22140v1
- Date: Wed, 25 Feb 2026 17:42:44 GMT
- Title: Lumosaic: Hyperspectral Video via Active Illumination and Coded-Exposure Pixels
- Authors: Dhruv Verma, Andrew Qiu, Roberto Rangel, Ayandev Barman, Hao Yang, Chenjia Hu, Fengqi Zhang, Roman Genov, David B. Lindell, Kiriakos N. Kutulakos, Alex Mariakakis
- Abstract summary: Lumosaic is a compact active hyperspectral video system designed for real-time capture of dynamic scenes. Our approach combines a narrowband LED array with a coded-exposure-pixel camera capable of high-speed, per-pixel exposure control.
- Score: 19.00390495006801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Lumosaic, a compact active hyperspectral video system designed for real-time capture of dynamic scenes. Our approach combines a narrowband LED array with a coded-exposure-pixel (CEP) camera capable of high-speed, per-pixel exposure control, enabling joint encoding of scene information across space, time, and wavelength within each video frame. Unlike passive snapshot systems that divide light across multiple spectral channels simultaneously and assume no motion during a frame's exposure, Lumosaic actively synchronizes illumination and pixel-wise exposure, improving photon utilization and preserving spectral fidelity under motion. A learning-based reconstruction pipeline then recovers 31-channel hyperspectral (400-700 nm) video at 30 fps and VGA resolution, producing temporally coherent and spectrally accurate reconstructions. Experiments on synthetic and real data demonstrate that Lumosaic significantly improves reconstruction fidelity and temporal stability over existing snapshot hyperspectral imaging systems, enabling robust hyperspectral video across diverse materials and motion conditions.
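The joint space-time-wavelength encoding described in the abstract can be sketched as a simple forward model. The snippet below is a minimal, hypothetical simulation, not the paper's actual pipeline: all dimensions, the number of LEDs, and the random exposure codes are illustrative assumptions. Each high-speed sub-frame is lit by one narrowband LED, and a per-pixel binary exposure code decides which sub-frames each pixel integrates, so a single captured frame mixes spectral information across space and time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a short burst of sub-frames within one video frame.
H, W = 48, 64          # spatial resolution (the real system runs at VGA)
n_leds = 8             # narrowband LEDs, one strobed per high-speed sub-frame
n_subframes = n_leds

# Scene reflectance under each LED band (held static here for simplicity).
scene = rng.random((n_leds, H, W))

# Per-pixel binary exposure codes: each pixel integrates only the sub-frames
# its code selects, so one captured frame jointly encodes space, time,
# and wavelength.
codes = rng.integers(0, 2, size=(n_subframes, H, W))

# Coded-exposure capture: the sensor accumulates masked sub-frame irradiance,
# where sub-frame t is lit by LED t (active illumination synchronization).
frame = np.zeros((H, W))
for t in range(n_subframes):
    frame += codes[t] * scene[t]

print(frame.shape)  # one coded snapshot encoding 8 spectral sub-frames
```

A learning-based reconstruction would then invert this encoding to recover the full spectral stack; the sketch only illustrates the measurement side.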
Related papers
- RocSync: Millisecond-Accurate Temporal Synchronization for Heterogeneous Camera Systems [38.099313678683224]
We present a low-cost, general-purpose synchronization method that achieves millisecond-level temporal alignment across diverse camera systems. The proposed solution employs a custom-built LED Clock that encodes time through red and infrared light, allowing visual decoding of the exposure window. We validate the system in large-scale surgical recordings involving over 25 heterogeneous cameras spanning both IR and RGB modalities.
arXiv Detail & Related papers (2025-11-18T22:13:06Z)
- DriveGen3D: Boosting Feed-Forward Driving Scene Generation with Efficient Video Diffusion [62.589889759543446]
DriveGen3D is a novel framework for generating high-quality and highly controllable dynamic 3D driving scenes. Our work bridges this methodological gap by integrating accelerated long-term video generation with large-scale dynamic scene reconstruction.
arXiv Detail & Related papers (2025-10-17T03:00:08Z)
- Video Forgery Detection with Optical Flow Residuals and Spatial-Temporal Consistency [1.7061868168035932]
We propose a detection framework that leverages spatial-temporal consistency by combining RGB appearance features with optical flow residuals. By integrating these complementary features, the proposed method effectively detects a wide range of forged videos.
arXiv Detail & Related papers (2025-08-01T07:51:35Z) - LightMotion: A Light and Tuning-free Method for Simulating Camera Motion in Video Generation [56.64004196498026]
LightMotion is a light and tuning-free method for simulating camera motion in video generation. Operating in the latent space, it eliminates additional fine-tuning, inpainting, and depth estimation.
arXiv Detail & Related papers (2025-03-09T08:28:40Z) - Low-Light Video Enhancement via Spatial-Temporal Consistent Decomposition [52.89441679581216]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic or static scenes degraded by poor visibility and severe noise. We present an innovative video decomposition strategy that incorporates view-independent and view-dependent components. Our framework consistently outperforms existing methods, establishing a new SOTA performance.
arXiv Detail & Related papers (2024-05-24T15:56:40Z) - Event-Enhanced Snapshot Compressive Videography at 10K FPS [33.20071708537498]
Video snapshot compressive imaging (SCI) encodes the target dynamic scene compactly into a snapshot and reconstructs its high-speed frame sequence afterward.
We propose a novel hybrid "intensity+event" imaging scheme by incorporating an event camera into a video SCI setup.
We achieve high-quality videography at 0.1ms time intervals with a low-cost CMOS image sensor working at 24 FPS.
arXiv Detail & Related papers (2024-04-11T08:34:10Z) - Flying with Photons: Rendering Novel Views of Propagating Light [37.06220870989172]
We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints.
Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution.
arXiv Detail & Related papers (2024-04-09T17:48:52Z) - Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution [151.1255837803585]
We propose a novel approach, pursuing Spatial Adaptation and Temporal Coherence (SATeCo) for video super-resolution.
SATeCo pivots on learning spatial-temporal guidance from low-resolution videos to calibrate both latent-space high-resolution video denoising and pixel-space video reconstruction.
Experiments conducted on the REDS4 and Vid4 datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-03-25T17:59:26Z) - Event-based Asynchronous HDR Imaging by Temporal Incident Light Modulation [54.64335350932855]
We propose a Pixel-Asynchronous HDR imaging system, based on key insights into the challenges in HDR imaging.
Our proposed Asyn system integrates the Dynamic Vision Sensors (DVS) with a set of LCD panels.
The LCD panels modulate the irradiance incident upon the DVS by altering their transparency, thereby triggering the pixel-independent event streams.
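The event-triggering mechanism described here follows the standard DVS contrast-threshold model: a pixel fires an event whenever its log-irradiance changes by a fixed threshold since its last event. The sketch below is illustrative only (the threshold, signal values, and function name are assumptions, not the paper's implementation); it shows how ramping an LCD's transparency in front of a static scene generates events from modulation alone.

```python
import numpy as np

def events_from_signal(irradiance, theta=0.2):
    """Emit (time, polarity) events whenever log-irradiance moves by theta."""
    log_i = np.log(np.asarray(irradiance, dtype=float))
    ref = log_i[0]  # log level at the pixel's last event
    events = []
    for t in range(1, len(log_i)):
        while log_i[t] - ref >= theta:   # brightness rose past the next step
            ref += theta
            events.append((t, +1))
        while ref - log_i[t] >= theta:   # brightness fell past the next step
            ref -= theta
            events.append((t, -1))
    return events

# A static scene (constant irradiance 10.0) viewed through an LCD whose
# transparency ramps from 20% to 100%: the modulation itself triggers a
# stream of positive events, with no scene motion at all.
transparency = np.linspace(0.2, 1.0, 50)
events = events_from_signal(10.0 * transparency)
print(len(events))
```

Because the events depend only on relative log-intensity changes, such LCD modulation can probe a scene's absolute irradiance asynchronously per pixel, which is the key idea behind the system described above.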
arXiv Detail & Related papers (2024-03-14T13:45:09Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- H2-Stereo: High-Speed, High-Resolution Stereoscopic Video System [39.95458608416292]
High-speed, high-resolution stereoscopic (H2-Stereo) video allows us to perceive fine details of dynamic 3D content.
Existing methods provide compromised solutions that lack temporal or spatial details.
We propose a dual-camera system in which one camera captures high-spatial-resolution, low-frame-rate (HSR-LFR) videos with rich spatial details while the other captures low-spatial-resolution, high-frame-rate videos.
We then devise a Learned Information Fusion network (LIFnet) that exploits the cross-camera redundancies to reconstruct the H2-Stereo video effectively.
arXiv Detail & Related papers (2022-08-04T04:06:01Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.