CamTuner: Reinforcement-Learning based System for Camera Parameter
Tuning to enhance Analytics
- URL: http://arxiv.org/abs/2107.03964v1
- Date: Thu, 8 Jul 2021 16:43:02 GMT
- Title: CamTuner: Reinforcement-Learning based System for Camera Parameter
Tuning to enhance Analytics
- Authors: Sibendu Paul, Kunal Rao, Giuseppe Coviello, Murugan Sankaradas, Oliver
Po, Y. Charlie Hu, Srimat T. Chakradhar
- Abstract summary: CamTuner is a system that automatically adapts complex sensors to changing environments.
Our dynamic tuning approach results in up to 12% improvement in the accuracy of insights from several video analytics tasks.
- Score: 2.637265703777453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Complex sensors like video cameras include tens of configurable parameters,
which can be set by end-users to customize the sensors to specific application
scenarios. Although parameter settings significantly affect the quality of the
sensor output and the accuracy of insights derived from sensor data, most
end-users use a fixed parameter setting because they lack the skill or
understanding to appropriately configure these parameters. We propose CamTuner,
a system that automatically and dynamically adapts a complex sensor to
changing environments. CamTuner includes two key components. First, a bespoke
analytics quality estimator, which is a deep-learning model to automatically
and continuously estimate the quality of insights from an analytics unit as the
environment around the sensor changes. Second, a reinforcement learning (RL)
module, which reacts to the changes in quality, and automatically adjusts the
camera parameters to enhance the accuracy of insights. We improve the training
time of the RL module by an order of magnitude by designing virtual models to
mimic essential behavior of the camera: we design virtual knobs that can be set
to different values to mimic the effects of assigning different values to the
camera's configurable parameters, and we design a virtual camera model that
mimics the output from a video camera at different times of the day. These
virtual models significantly accelerate training because (a) frame rates from a
real camera are limited to 25-30 fps while the virtual models enable processing
at 300 fps, (b) we do not have to wait until the real camera sees different
environments, which could take weeks or months, and (c) virtual knobs can be
updated instantly, while it can take 200-500 ms to change the camera parameter
settings. Our dynamic tuning approach results in up to 12% improvement in the
accuracy of insights from several video analytics tasks.
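The abstract's core idea, tuning virtual knobs against a quality estimator instead of the slow real camera, can be illustrated with a toy sketch. This is not the paper's implementation: CamTuner uses a deep-learning quality estimator and a full RL module, whereas the sketch below substitutes a hypothetical brightness knob, a hand-written quality score, and a simple epsilon-greedy bandit loop; all names, ranges, and constants are illustrative assumptions.

```python
import random

# Hypothetical virtual knob: mimics a camera's brightness parameter by
# scaling pixel intensities, so settings can be tried without touching
# real hardware (the gain mapping is illustrative, not from the paper).
def apply_brightness_knob(frame, level):
    """Scale pixel intensities; level in [0, 10], level 4 is neutral gain."""
    gain = 0.25 * level
    return [min(255, int(p * gain)) for p in frame]

# Stand-in for the paper's deep-learning quality estimator: here we simply
# reward frames whose mean intensity is close to mid-range (128).
def quality_estimate(frame):
    mean = sum(frame) / len(frame)
    return 1.0 - abs(mean - 128) / 128

# Toy stand-in for the RL module: an epsilon-greedy bandit over knob
# settings, tracking a running-mean value estimate per setting.
def tune(frame, steps=200, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {level: 0.0 for level in range(11)}       # value of each setting
    counts = {level: 0 for level in range(11)}
    for _ in range(steps):
        if rng.random() < epsilon:
            level = rng.randrange(11)             # explore a random setting
        else:
            level = max(q, key=q.get)             # exploit the best so far
        reward = quality_estimate(apply_brightness_knob(frame, level))
        counts[level] += 1
        q[level] += (reward - q[level]) / counts[level]  # running mean
    return max(q, key=q.get)

# A dark synthetic "frame": the loop should learn to raise brightness.
dark_frame = [60] * 64
best = tune(dark_frame)
```

Because the knob and frame are virtual, each of the 200 trials costs microseconds; the same loop against real hardware would be bounded by the 200-500 ms parameter-change latency and 25-30 fps capture rate the abstract cites, which is exactly the speedup argument for the virtual models.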
Related papers
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- LiFCal: Online Light Field Camera Calibration via Bundle Adjustment [38.2887165481751]
LiFCal is an online calibration pipeline for MLA-based light field cameras.
It accurately determines model parameters from a moving camera sequence without precise calibration targets.
It can be applied in a target-free scene, and it is implemented online in a complete and continuous pipeline.
arXiv Detail & Related papers (2024-08-21T15:04:49Z)
- CamP: Camera Preconditioning for Neural Radiance Fields [56.46526219931002]
NeRFs can be optimized to obtain high-fidelity 3D scene reconstructions of objects and large-scale scenes.
Extrinsic and intrinsic camera parameters are usually estimated using Structure-from-Motion (SfM) methods as a pre-processing step to NeRF.
We propose using a proxy problem to compute a whitening transform that eliminates the correlation between camera parameters and normalizes their effects.
arXiv Detail & Related papers (2023-08-21T17:59:54Z)
- NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters [7.165373389474194]
Novel view synthesis using neural radiance fields (NeRF) is the state-of-the-art technique for generating high-quality images from novel viewpoints.
Current research on the joint optimization of camera parameters and NeRF focuses on refining noisy extrinsic camera parameters.
We propose a novel end-to-end trainable approach called NeRFtrinsic Four to address these limitations.
arXiv Detail & Related papers (2023-03-16T15:44:31Z)
- Toward Global Sensing Quality Maximization: A Configuration Optimization Scheme for Camera Networks [15.795407587722924]
We investigate the reconfiguration strategy for the parameterized camera network model.
We form a single quantity that measures the sensing quality of the targets by the camera network.
We verify the effectiveness of our approach through extensive simulations and experiments.
arXiv Detail & Related papers (2022-11-28T09:21:47Z)
- APT: Adaptive Perceptual quality based camera Tuning using reinforcement learning [2.0741583844039915]
Capturing poor-quality video adversely affects the accuracy of analytics.
We propose a novel, reinforcement-learning based system that tunes the camera parameters to ensure a high-quality video capture.
As a result, such tuning restores the accuracy of insights when environmental conditions or scene content change.
arXiv Detail & Related papers (2022-11-15T21:02:48Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network [1.4095425725284465]
We present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets.
It includes a new frame-based image sensor model with realistic image quality reduction effects, and an extended DVS model with more accurate characteristics.
We show that data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of equivalent or better quality than the state of the art.
arXiv Detail & Related papers (2021-12-17T08:27:13Z)
- FLEX: Parameter-free Multi-view 3D Human Motion Reconstruction [70.09086274139504]
Multi-view algorithms strongly depend on camera parameters, in particular, the relative positions among the cameras.
We introduce FLEX, an end-to-end parameter-free multi-view model.
We demonstrate results on the Human3.6M and KTH Multi-view Football II datasets.
arXiv Detail & Related papers (2021-05-05T09:08:12Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.