Multi Camera Placement via Z-buffer Rendering for the Optimization of the Coverage and the Visual Hull
- URL: http://arxiv.org/abs/2103.11211v1
- Date: Sat, 20 Mar 2021 17:04:00 GMT
- Title: Multi Camera Placement via Z-buffer Rendering for the Optimization of the Coverage and the Visual Hull
- Authors: Maria L. Hänel and Johannes Völkel and Dominik Henrich
- Abstract summary: A failure-safe system needs to optimally cover the important areas of the robot work cell with safety overlap.
We propose an efficient algorithm for optimally placing and orienting the cameras in a 3D CAD model of the work cell.
The simulation makes it possible to evaluate the quality with respect to image distortion and advanced image analysis in the presence of static and dynamic visual obstacles.
- Score: 2.642698101441705
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We can only allow human-robot cooperation in a common work cell if human
integrity is guaranteed. A surveillance system with multiple cameras can detect
collisions without contact with the human collaborator. A failure-safe system
needs to optimally cover the important areas of the robot work cell with safety
overlap. We propose an efficient algorithm for optimally placing and orienting
the cameras in a 3D CAD model of the work cell. To evaluate the quality of the
camera constellation in each step, our method simulates the vision system using
a z-buffer rendering technique for image acquisition, a voxel space for the
overlap, and a refined visual hull method for a conservative human
reconstruction. The simulation makes it possible to evaluate the quality with
respect to image distortion and advanced image analysis in the presence of
static and dynamic visual obstacles such as tables, racks, walls, robots, and
people. Our method is ideally suited for maximizing the coverage of multiple
cameras or minimizing the error made by the visual hull, and can be extended to
probabilistic space carving.
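The abstract's pipeline combines per-camera image acquisition with a voxel space and a conservative visual hull. As a minimal sketch of the visual-hull step only, the following carves a voxel grid against per-camera silhouette masks; the 3x4 projection matrices, boolean mask format, and the function name `carve_visual_hull` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Conservative visual-hull carving over a voxel grid.

    voxels:      (N, 3) array of voxel-center world coordinates
    cameras:     list of 3x4 projection matrices P = K [R | t]
    silhouettes: list of boolean HxW masks, True where the person is seen
    Returns a boolean mask over the N voxels that survive carving.
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])  # (N, 4)
    for P, sil in zip(cameras, silhouettes):
        h, w = sil.shape
        proj = homog @ P.T                       # (N, 3) homogeneous pixels
        z = proj[:, 2]
        u = np.round(proj[:, 0] / z).astype(int)
        v = np.round(proj[:, 1] / z).astype(int)
        in_view = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        inside = np.zeros(len(voxels), dtype=bool)
        inside[in_view] = sil[v[in_view], u[in_view]]
        # Conservative: only carve voxels that project inside the image
        # but outside the silhouette; voxels out of view are kept, so
        # the reconstruction never shrinks below the true occupied space.
        keep &= inside | ~in_view
    return keep
```

Keeping out-of-view voxels is what makes the hull conservative: a camera that cannot see a region contributes no carving there, which matches the safety-oriented goal of never underestimating the human's occupied volume.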
Related papers
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
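The similarity-matrix and diversity-based sampling steps above can be sketched as a greedy farthest-point selection: repeatedly pick the camera least similar to those already chosen. The matrix contents and the function name `select_cameras` are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def select_cameras(similarity, k, start=0):
    """Greedy diversity-based selection over a camera similarity matrix.

    similarity: (n, n) symmetric matrix; higher values = more redundant pair
    k:          number of cameras to keep
    Returns the list of selected camera indices.
    """
    selected = [start]
    while len(selected) < k:
        # A candidate's redundancy is its maximum similarity to any
        # already-selected camera; pick the least redundant candidate.
        redundancy = similarity[:, selected].max(axis=1)
        redundancy[selected] = np.inf   # never re-pick a selected camera
        selected.append(int(np.argmin(redundancy)))
    return selected
```

With a similarity matrix combining spatial and semantic terms as the paper describes, this min-max rule spreads the selected cameras across the scene instead of clustering them in well-covered regions.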
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- Multi-Camera Hand-Eye Calibration for Human-Robot Collaboration in Industrial Robotic Workcells [3.76054468268713]
In industrial scenarios, effective human-robot collaboration relies on multi-camera systems to robustly monitor human operators.
We introduce an innovative and robust multi-camera hand-eye calibration method, designed to optimize each camera's pose relative to both the robot's base and to each other camera.
We demonstrate the superior performance of our method through comprehensive experiments on the METRIC dataset and real-world data collected in industrial scenarios.
arXiv Detail & Related papers (2024-06-17T10:23:30Z)
- VICAN: Very Efficient Calibration Algorithm for Large Camera Networks [49.17165360280794]
We introduce a novel methodology that extends Pose Graph Optimization techniques.
We consider the bipartite graph encompassing cameras, object poses evolving dynamically, and camera-object relative transformations at each time step.
Our framework retains compatibility with traditional PGO solvers, but its efficacy benefits from a custom-tailored optimization scheme.
arXiv Detail & Related papers (2024-03-25T17:47:03Z)
- Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach could synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- Towards Scalable Multi-View Reconstruction of Geometry and Materials [27.660389147094715]
We propose a novel method for joint recovery of camera pose, object geometry and spatially-varying Bidirectional Reflectance Distribution Function (svBRDF) of 3D scenes.
The inputs are high-resolution RGBD images captured by a mobile, hand-held capture system with point lights for active illumination.
arXiv Detail & Related papers (2023-06-06T15:07:39Z)
- A Distance-Geometric Method for Recovering Robot Joint Angles From an RGB Image [7.971699294672282]
We present a novel method for retrieving the joint angles of a robot manipulator using only a single RGB image of its current configuration.
Our approach, based on a distance-geometric representation of the configuration space, exploits the knowledge of a robot's kinematic model.
arXiv Detail & Related papers (2023-01-05T12:57:45Z)
- COPILOT: Human-Environment Collision Prediction and Localization from Egocentric Videos [62.34712951567793]
The ability to forecast human-environment collisions from egocentric observations is vital to enable collision avoidance in applications such as VR, AR, and wearable assistive robotics.
We introduce the challenging problem of predicting collisions in diverse environments from multi-view egocentric videos captured from body-mounted cameras.
We propose a transformer-based model called COPILOT to perform collision prediction and localization simultaneously.
arXiv Detail & Related papers (2022-10-04T17:49:23Z)
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z)
- Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
arXiv Detail & Related papers (2020-11-30T20:46:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.