OPAL: Occlusion Pattern Aware Loss for Unsupervised Light Field Disparity Estimation
- URL: http://arxiv.org/abs/2203.02231v1
- Date: Fri, 4 Mar 2022 10:32:18 GMT
- Title: OPAL: Occlusion Pattern Aware Loss for Unsupervised Light Field Disparity Estimation
- Authors: Peng Li, Jiayin Zhao, Jingyao Wu, Chao Deng, Haoqian Wang and Tao Yu
- Abstract summary: Unsupervised methods can achieve accuracy comparable to supervised methods, with much higher generalization capacity and efficiency.
We present OPAL, which successfully extracts and encodes the general occlusion patterns inherent in the light field for loss calculation.
- Score: 22.389903710616508
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Light field disparity estimation is an essential task in computer vision with
various applications. Although supervised learning-based methods have achieved
both higher accuracy and efficiency than traditional optimization-based
methods, the dependency on ground-truth disparity for training limits their overall generalization performance, let alone in real-world scenarios where ground-truth disparity is hard to capture. In this paper, we argue that
unsupervised methods can achieve comparable accuracy, but, more importantly,
much higher generalization capacity and efficiency than supervised methods.
Specifically, we present the Occlusion Pattern Aware Loss, named OPAL, which
successfully extracts and encodes the general occlusion patterns inherent in
the light field for loss calculation. OPAL enables i) accurate and robust
estimation by effectively handling occlusions without using any ground-truth
information for training and ii) much more efficient performance by significantly reducing the network parameters required for accurate inference. In addition, a
transformer-based network and a refinement module are proposed for achieving
even more accurate results. Extensive experiments demonstrate that our method not only significantly improves accuracy over state-of-the-art unsupervised methods, but also possesses stronger generalization capacity than supervised methods, even on real-world data. Our code will be made publicly
available.
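To make the general idea concrete, below is a minimal PyTorch sketch of an occlusion-aware photometric loss for unsupervised light field disparity estimation. It is not the paper's OPAL formulation: the abstract does not specify how occlusion patterns are extracted and encoded, so the warping sign convention, the residual-based soft occlusion weights, and all tensor shapes are illustrative assumptions.

# Minimal sketch (assumptions, not the paper's OPAL): each sub-aperture view
# is warped to the center view with the predicted disparity, and views whose
# photometric residual is large relative to the per-pixel median (likely
# occluded) are softly downweighted.

import torch
import torch.nn.functional as F


def warp_to_center(view, disparity, du, dv):
    """Warp one sub-aperture view (B, C, H, W) to the center view using the
    predicted center-view disparity (B, 1, H, W); (du, dv) is the angular
    offset of this view from the center view, in sub-aperture units."""
    b, _, h, w = view.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=view.device, dtype=view.dtype),
        torch.arange(w, device=view.device, dtype=view.dtype),
        indexing="ij",
    )
    # Assumed light field sign convention: a point with disparity d in the
    # center view appears shifted by d * (du, dv) in the (du, dv) view.
    x_src = xs.unsqueeze(0) + disparity.squeeze(1) * du
    y_src = ys.unsqueeze(0) + disparity.squeeze(1) * dv
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid = torch.stack(
        (2.0 * x_src / (w - 1) - 1.0, 2.0 * y_src / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(view, grid, align_corners=True, padding_mode="border")


def occlusion_aware_photometric_loss(views, offsets, center, disparity, tau=0.1):
    """views: list of (B, C, H, W) sub-aperture images; offsets: list of
    (du, dv) angular offsets; center: (B, C, H, W) center view;
    disparity: (B, 1, H, W) predicted disparity."""
    residuals = []
    for view, (du, dv) in zip(views, offsets):
        warped = warp_to_center(view, disparity, du, dv)
        residuals.append((warped - center).abs().mean(dim=1, keepdim=True))
    res = torch.stack(residuals, dim=0)          # (N, B, 1, H, W)
    # Soft occlusion weights: views with residuals well above the per-pixel
    # median across views are treated as likely occluded and downweighted.
    median = res.median(dim=0, keepdim=True).values
    weights = torch.exp(-(res - median).clamp(min=0) / tau)
    return (weights * res).sum(dim=0).div(weights.sum(dim=0) + 1e-8).mean()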
Related papers
- Removing the need for ground truth UWB data collection: self-supervised ranging error correction using deep reinforcement learning [1.4061979259370274]
Multipath effects and non-line-of-sight conditions cause ranging errors between anchors and tags.
Existing approaches for mitigating these ranging errors rely on collecting large labeled datasets.
This paper proposes a novel self-supervised deep reinforcement learning approach that does not require labeled ground truth data.
arXiv Detail & Related papers (2024-03-28T09:36:55Z)
- Weak Supervision Performance Evaluation via Partial Identification [46.73061437177238]
Programmatic Weak Supervision (PWS) enables supervised model training without direct access to ground truth labels.
We present a novel method to address this challenge by framing model evaluation as a partial identification problem.
Our approach derives reliable bounds on key metrics without requiring labeled data, overcoming core limitations in current weak supervision evaluation techniques.
arXiv Detail & Related papers (2023-12-07T07:15:11Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE (conditional average treatment effect) function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Efficient Deep Reinforcement Learning Requires Regulating Overfitting [91.88004732618381]
We show that high temporal-difference (TD) error on the validation set of transitions is the main culprit that severely affects the performance of deep RL algorithms.
We show that a simple online model selection method that targets the validation TD error is effective across state-based DMC and Gym tasks.
arXiv Detail & Related papers (2023-04-20T17:11:05Z)
- Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields [50.435129905215284]
We present an unsupervised learning-based depth estimation method for 4-D light field processing and analysis.
Based on the unique geometric structure of light field data, we explore the angular coherence among subsets of the light field views to estimate depth maps (see the sketch after this list).
Our method significantly shrinks the performance gap between previous unsupervised methods and supervised ones, and produces depth maps with accuracy comparable to traditional methods at a clearly reduced computational cost.
arXiv Detail & Related papers (2021-06-06T06:19:50Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation [53.8171136907856]
We introduce a set of simple yet effective data augmentation strategies dubbed cutoff.
cutoff relies on sampling consistency and thus adds little computational overhead.
cutoff consistently outperforms adversarial training and achieves state-of-the-art results on the IWSLT2014 German-English dataset.
arXiv Detail & Related papers (2020-09-29T07:08:35Z)
- An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction [84.49035467829819]
We show that it is possible to better manage this trade-off by optimizing a bound on the Information Bottleneck (IB) objective.
Our fully unsupervised approach jointly learns an explainer that predicts sparse binary masks over sentences, and an end-task predictor that considers only the extracted rationale.
arXiv Detail & Related papers (2020-05-01T23:26:41Z)
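For the occlusion-aware unsupervised light field depth paper listed above, the short PyTorch sketch below illustrates the general subset-coherence idea referenced in its summary: a scene point occluded in some views is usually still visible in at least one angular subset of views, so the subset with the lowest photometric residual can be trusted at each pixel. The grouping and the per-pixel selection rule are illustrative assumptions, not the authors' exact formulation.

# Sketch (assumptions): group per-view residuals into angular subsets and
# keep only the most coherent subset at each pixel.

import torch


def subset_coherence_loss(per_view_residuals, subsets):
    """per_view_residuals: (N, B, 1, H, W) photometric residuals of the N
    sub-aperture views warped to the center view (e.g. computed as in the
    earlier sketch); subsets: list of index lists partitioning the N views
    into angular subsets (e.g. left/right/top/bottom of the center view)."""
    subset_errors = torch.stack(
        [per_view_residuals[idx].mean(dim=0) for idx in subsets], dim=0
    )                                            # (S, B, 1, H, W)
    # Per pixel, trust only the lowest-error (most coherent) subset.
    best, _ = subset_errors.min(dim=0)           # (B, 1, H, W)
    return best.mean()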