Active Perception with A Monocular Camera for Multiscopic Vision
- URL: http://arxiv.org/abs/2001.08212v1
- Date: Wed, 22 Jan 2020 08:46:45 GMT
- Title: Active Perception with A Monocular Camera for Multiscopic Vision
- Authors: Weihao Yuan, Rui Fan, Michael Yu Wang, and Qifeng Chen
- Abstract summary: We design a multiscopic vision system that utilizes a low-cost monocular RGB camera to acquire accurate depth estimation for robotic applications.
Unlike multi-view stereo with images captured at unconstrained camera poses, the proposed system actively controls a robot arm with a mounted camera to capture a sequence of images in horizontally or vertically aligned positions with the same parallax.
- Score: 50.370074098619185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We design a multiscopic vision system that utilizes a low-cost monocular RGB
camera to acquire accurate depth estimation for robotic applications. Unlike
multi-view stereo with images captured at unconstrained camera poses, the
proposed system actively controls a robot arm with a mounted camera to capture
a sequence of images in horizontally or vertically aligned positions with the
same parallax. In this system, we combine the cost volumes for stereo matching
between the reference image and the surrounding images to form a fused cost
volume that is robust to outliers. Experiments on the Middlebury dataset and
real robot experiments show that our obtained disparity maps are more accurate
than two-frame stereo matching: the average absolute error is reduced by 50.2%
in our experiments.
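As a rough illustration of the fused-cost-volume idea described in the abstract, the sketch below (a minimal NumPy example, not the authors' implementation) assumes three rectified grayscale images captured at equally spaced horizontal positions, uses a simple sum-of-absolute-differences matching cost, and fuses the two per-pair cost volumes by summation before winner-take-all disparity selection. The function names, the SAD cost, and the summation rule are illustrative assumptions; the paper's actual matching cost and fusion scheme may differ.

```python
import numpy as np


def sad_cost_volume(ref, neighbor, max_disp, shift_sign):
    """SAD matching cost between the reference image and one horizontally
    aligned neighbor. shift_sign is +1 if the neighbor camera sits to the
    right of the reference (its image is rolled right by d to align),
    and -1 if it sits to the left."""
    h, w = ref.shape
    cost = np.full((max_disp, h, w), np.inf, dtype=np.float32)
    ref = ref.astype(np.float32)
    neighbor = neighbor.astype(np.float32)
    for d in range(max_disp):
        shift = shift_sign * d
        aligned = np.roll(neighbor, shift, axis=1)
        diff = np.abs(ref - aligned)
        # Invalidate columns that wrapped around during the roll.
        if shift > 0:
            diff[:, :shift] = np.inf
        elif shift < 0:
            diff[:, shift:] = np.inf
        cost[d] = diff
    return cost


def fused_disparity(ref, left_img, right_img, max_disp=64):
    """Sum the per-pair cost volumes and take the winner-take-all disparity.
    Because the camera positions are equally spaced, the same disparity index
    applies to both pairs, so the volumes can be added element-wise."""
    cost_l = sad_cost_volume(ref, left_img, max_disp, shift_sign=-1)
    cost_r = sad_cost_volume(ref, right_img, max_disp, shift_sign=+1)
    fused = cost_l + cost_r          # an outlier in one pair is less decisive
    return np.argmin(fused, axis=0)  # per-pixel disparity in pixels
```

The same pattern extends to vertically aligned captures by rolling along axis 0 instead of axis 1, and to more than two neighbors by adding further cost volumes before the argmin.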
Related papers
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
- SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
arXiv Detail & Related papers (2024-02-19T02:41:37Z)
- Depth Estimation Analysis of Orthogonally Divergent Fisheye Cameras with Distortion Removal [0.0]
Traditional stereo vision systems may not be suitable for certain scenarios due to their limited field of view.
Fisheye cameras introduce significant distortion at the edges that affects the accuracy of stereo matching and depth estimation.
This paper proposes a method for distortion removal and depth estimation analysis for a stereo vision system.
arXiv Detail & Related papers (2023-07-07T13:44:12Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose a SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves the state-of-the-art performance on the challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Depth Estimation by Combining Binocular Stereo and Monocular Structured-Light [29.226203202113613]
We present a novel stereo system, which consists of two cameras (an RGB camera and an IR camera) and an IR speckle projector.
The RGB camera is used both for depth estimation and texture acquisition.
The depth map generated by the monocular structured-light (MSL) subsystem can provide external guidance for the stereo matching networks.
arXiv Detail & Related papers (2022-03-20T08:46:37Z)
- MFuseNet: Robust Depth Estimation with Learned Multiscopic Fusion [47.2251122861135]
We design a multiscopic vision system that utilizes a low-cost monocular RGB camera to acquire accurate depth estimation.
Unlike multi-view stereo with images captured at unconstrained camera poses, the proposed system controls the motion of a camera to capture a sequence of images.
arXiv Detail & Related papers (2021-08-05T08:31:01Z)
- A Multi-spectral Dataset for Evaluating Motion Estimation Systems [7.953825491774407]
This paper presents a novel dataset for evaluating the performance of multi-spectral motion estimation systems.
All the sequences are recorded from a handheld multi-spectral device.
The depth images are captured by a Microsoft Kinect2 and can benefit learning cross-modality stereo matching.
arXiv Detail & Related papers (2020-07-01T17:11:02Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.