Online Adaptive Disparity Estimation for Dynamic Scenes in Structured
Light Systems
- URL: http://arxiv.org/abs/2310.08934v1
- Date: Fri, 13 Oct 2023 08:00:33 GMT
- Title: Online Adaptive Disparity Estimation for Dynamic Scenes in Structured
Light Systems
- Authors: Rukun Qiao, Hiroshi Kawasaki, Hongbin Zha
- Abstract summary: Self-supervised online adaptation has been proposed to bridge the performance gap that arises when networks are applied in unseen environments.
We propose an unsupervised loss function based on long sequential inputs. It ensures better gradient directions and faster convergence.
Our proposed framework significantly improves the online adaptation speed and achieves superior performance on unseen data.
- Score: 17.53719804060679
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, deep neural networks have shown remarkable progress in dense
disparity estimation from dynamic scenes in monocular structured light systems.
However, their performance significantly drops when applied in unseen
environments. To address this issue, self-supervised online adaptation has been
proposed as a solution to bridge this performance gap. Unlike traditional
fine-tuning processes, online adaptation performs test-time optimization to
adapt networks to new domains. Therefore, achieving fast convergence during the
adaptation process is critical for attaining satisfactory accuracy. In this
paper, we propose an unsupervised loss function based on long sequential
inputs. It ensures better gradient directions and faster convergence. Our loss
function is designed using a multi-frame pattern flow, which comprises a set of
sparse trajectories of the projected pattern along the sequence. We estimate
the sparse pseudo ground truth with a confidence mask using a filter-based
method, which guides the online adaptation process. Our proposed framework
significantly improves the online adaptation speed and achieves superior
performance on unseen data.
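To make the adaptation mechanism concrete, here is a minimal PyTorch-style sketch of one test-time optimization step, assuming a generic disparity network and taking the paper's filter-based pseudo ground truth and confidence mask as given inputs; all names are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def online_adapt_step(disparity_net, optimizer, image, pseudo_gt, confidence):
    """One test-time optimization step on an incoming frame.

    pseudo_gt  - sparse disparities estimated from the multi-frame pattern flow
    confidence - mask that keeps only trajectories the filter deems reliable
    All names here are illustrative placeholders, not the authors' code.
    """
    pred = disparity_net(image)  # dense disparity prediction
    # Supervise only where the filter-based pseudo ground truth is confident.
    per_pixel = F.smooth_l1_loss(pred, pseudo_gt, reduction="none")
    loss = (confidence * per_pixel).sum() / confidence.sum().clamp(min=1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # the network adapts while it runs inference
    return pred.detach(), loss.item()
```

Only the masked, high-confidence trajectories contribute to the gradient, which is what the abstract credits for the better gradient directions and faster convergence.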
Related papers
- Adaptive Anomaly Detection in Network Flows with Low-Rank Tensor Decompositions and Deep Unrolling [9.20186865054847]
Anomaly detection (AD) is increasingly recognized as a key component for ensuring the resilience of future communication systems.
This work considers AD in network flows using incomplete measurements.
We propose a novel block-successive convex approximation algorithm based on a regularized model-fitting objective.
Inspired by Bayesian approaches, we extend the model architecture to perform online adaptation to per-flow and per-time-step statistics.
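As a rough illustration of the regularized model-fitting idea (not the paper's unrolled algorithm), a block-alternating fit of a low-rank-plus-sparse model on partially observed flow data might look like this; the names and the simple gradient/soft-threshold updates are assumptions for the sketch:

```python
import numpy as np

def detect_anomalies(Y, mask, rank=3, lam=0.1, iters=200, lr=0.01):
    """Toy block-alternating fit of Y ~ U @ V.T + A on observed entries.

    mask marks observed measurements; the sparse term A absorbs anomalies.
    An illustrative stand-in for the paper's unrolled algorithm, not its
    actual update rules.
    """
    m, n = Y.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.normal(size=(m, rank))
    V = 0.1 * rng.normal(size=(n, rank))
    A = np.zeros_like(Y)
    for _ in range(iters):
        R = mask * (Y - U @ V.T - A)   # residual on observed entries only
        U += lr * R @ V                # gradient step on each variable block
        V += lr * R.T @ U
        A += lr * R
        A = np.sign(A) * np.maximum(np.abs(A) - lr * lam, 0.0)  # sparsify A
    return A  # large |A[i, t]| flags flow i as anomalous at time t
```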
arXiv Detail & Related papers (2024-09-17T19:59:57Z)
- Improving Gradient-Trend Identification: Fast-Adaptive Moment Estimation
with Finance-Inspired Triple Exponential Moving Average [2.480023305418]
We introduce a novel optimizer called fast-adaptive moment estimation (FAME).
Inspired by the triple exponential moving average (TEMA) used in the financial domain, FAME improves the precision of identifying gradient trends.
Because of the introduction of TEMA into the optimization process, FAME can identify trends with higher accuracy and fewer lag issues.
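The TEMA mechanism itself is standard and easy to sketch; below is a minimal gradient-trend tracker using the financial TEMA formula (3*EMA1 - 3*EMA2 + EMA3). How FAME consumes this trend inside its update rule is not reproduced here.

```python
import numpy as np

class TEMAGradientTrend:
    """Gradient-trend tracker using the financial triple EMA:
    TEMA = 3*ema1 - 3*ema2 + ema3, which lags less than a plain EMA."""

    def __init__(self, beta=0.9):
        self.beta = beta
        self.ema1 = self.ema2 = self.ema3 = None

    def update(self, grad):
        if self.ema1 is None:  # initialize all stages on the first gradient
            self.ema1 = self.ema2 = self.ema3 = np.array(grad, dtype=float)
        b = self.beta
        self.ema1 = b * self.ema1 + (1 - b) * grad       # EMA of gradients
        self.ema2 = b * self.ema2 + (1 - b) * self.ema1  # EMA of ema1
        self.ema3 = b * self.ema3 + (1 - b) * self.ema2  # EMA of ema2
        return 3 * self.ema1 - 3 * self.ema2 + self.ema3  # low-lag trend
```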
arXiv Detail & Related papers (2023-06-02T10:29:33Z)
- Combining Explicit and Implicit Regularization for Efficient Learning in
Deep Networks [3.04585143845864]
In deep linear networks, gradient descent implicitly regularizes toward low-rank solutions on matrix completion/factorization tasks.
We propose an explicit penalty to mirror this implicit bias, which only takes effect with certain adaptive gradient optimizers (e.g., Adam).
This combination can enable a degenerate single-layer network to achieve low-rank approximations with generalization error comparable to deep linear networks.
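As one concrete (assumed) form of such an explicit low-rank penalty, a nuclear-norm term on a weight matrix can be added to the training loss; the paper's exact regularizer may differ:

```python
import torch

def nuclear_norm_penalty(weight, lam=1e-3):
    """Explicit low-rank penalty: lam times the sum of singular values.

    An assumed, illustrative form of an explicit regularizer that mirrors
    gradient descent's implicit low-rank bias; the paper's penalty may differ.
    """
    return lam * torch.linalg.svdvals(weight).sum()

# Usage: total_loss = task_loss + nuclear_norm_penalty(layer.weight)
```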
arXiv Detail & Related papers (2023-06-01T04:47:17Z)
- Hyper-Learning for Gradient-Based Batch Size Adaptation [2.944323057176686]
Increasing the batch size on a schedule is an effective strategy for controlling gradient noise when training deep neural networks.
We introduce Arbiter, a new hyper-optimization algorithm that performs batch-size adaptation for learnable schedules.
We demonstrate Arbiter's effectiveness in several illustrative experiments.
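For intuition about why batch-size schedules control noise (the effect Arbiter's learned schedules exploit), this toy numpy snippet estimates how the spread of a mini-batch gradient shrinks with batch size, roughly as 1/sqrt(B); it is a pedagogical aside, not part of Arbiter:

```python
import numpy as np

def gradient_noise_std(per_sample_grads, batch_size, trials=1000, seed=0):
    """Empirical spread of the mini-batch gradient for a given batch size.

    per_sample_grads: 1-D array of per-example gradients for one scalar
    parameter (toy setting). The spread shrinks roughly as 1/sqrt(B).
    """
    rng = np.random.default_rng(seed)
    n = len(per_sample_grads)
    estimates = [per_sample_grads[rng.choice(n, batch_size)].mean()
                 for _ in range(trials)]
    return float(np.std(estimates))

# Example: noise at B=8 vs. B=128 on synthetic per-example gradients.
g = np.random.default_rng(1).normal(loc=1.0, scale=2.0, size=10_000)
print(gradient_noise_std(g, 8), gradient_noise_std(g, 128))
```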
arXiv Detail & Related papers (2022-05-17T11:01:14Z)
- Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Network (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes with retrieval of similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z)
- Low-light Image Enhancement by Retinex Based Algorithm Unrolling and
Adjustment [50.13230641857892]
We propose a new deep learning framework for the low-light image enhancement (LIE) problem.
The proposed framework contains a decomposition network inspired by algorithm unrolling, and adjustment networks considering both global brightness and local brightness sensitivity.
Experiments on a series of typical LIE datasets demonstrated the effectiveness of the proposed method, both quantitatively and visually, as compared with existing methods.
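For reference, decomposition networks of this kind unroll the classical Retinex image model, which factors a low-light observation into reflectance and illumination; the adjustment function g below is a generic stand-in for the paper's learned global/local adjustment networks:

```latex
I = R \circ L, \qquad \hat{I} = R \circ g(L)
```

where \circ is the element-wise product, R the reflectance, L the illumination, and g a brightness adjustment (e.g., a gamma curve).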
arXiv Detail & Related papers (2022-02-12T03:59:38Z)
- Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization [70.4342220499858]
We introduce novel online algorithms that can exploit smoothness and replace the dependence on $T$ in dynamic regret with problem-dependent quantities.
Our results are adaptive to the intrinsic difficulty of the problem, since the bounds are tighter than existing results for easy problems and safeguard the same rate in the worst case.
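For readers outside online learning, dynamic regret compares the learner against a time-varying comparator sequence; written out (the standard definition, not specific to this paper):

```latex
\mathrm{D\text{-}Regret}_T
  = \sum_{t=1}^{T} f_t(x_t) - \sum_{t=1}^{T} f_t(u_t),
\qquad
P_T = \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert_2
```

where x_t are the learner's decisions, u_1, ..., u_T is any comparator sequence, and P_T is its path length; the paper's bounds replace worst-case dependence on T with problem-dependent quantities while preserving the same worst-case rate.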
arXiv Detail & Related papers (2021-12-29T02:42:59Z)
- AuxAdapt: Stable and Efficient Test-Time Adaptation for Temporally
Consistent Video Semantic Segmentation [81.87943324048756]
In video segmentation, generating temporally consistent results across frames is as important as achieving frame-wise accuracy.
Existing methods rely on optical flow regularization or fine-tuning with test data to attain temporal consistency.
This paper presents an efficient, intuitive, and unsupervised online adaptation method, AuxAdapt, for improving the temporal consistency of most neural network models.
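A simplified sketch of the AuxAdapt idea follows: freeze the large main network and adapt only a small auxiliary network online, using the model's own hardened decisions as pseudo-labels. Details such as the additive fusion rule are simplifying assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def auxadapt_step(main_net, aux_net, optimizer, frame):
    """One online step in the spirit of AuxAdapt: the main network stays
    frozen; only the small auxiliary network is adapted, supervised by
    the model's own hardened decisions. Fusion rule is an assumption.
    """
    with torch.no_grad():
        main_logits = main_net(frame)        # frozen reference prediction
    fused = main_logits + aux_net(frame)     # aux net refines the logits
    pseudo = fused.argmax(dim=1).detach()    # self-generated targets
    loss = F.cross_entropy(fused, pseudo)    # reinforce consistent decisions
    optimizer.zero_grad()
    loss.backward()                          # gradients reach only aux_net
    optimizer.step()
    return fused.argmax(dim=1)
```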
arXiv Detail & Related papers (2021-10-24T07:07:41Z)
- Non-Gradient Manifold Neural Network [79.44066256794187]
Deep neural networks (DNNs) generally take thousands of iterations to optimize via gradient descent.
We propose a novel manifold neural network based on non-gradient optimization.
arXiv Detail & Related papers (2021-06-15T06:39:13Z)
- A Differential Game Theoretic Neural Optimizer for Training Residual
Networks [29.82841891919951]
We propose a generalized Differential Dynamic Programming (DDP) neural architecture that accepts both residual connections and convolution layers.
The resulting optimal control representation admits a game-theoretic perspective, in which training residual networks can be interpreted as cooperative trajectory optimization on state-augmented systems.
arXiv Detail & Related papers (2020-07-17T10:19:17Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of such extrapolation schemes can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
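A generic extrapolation (extragradient-style) step of the kind such frameworks unify evaluates the gradient at a look-ahead point and applies it at the current iterate; this sketch is illustrative, not the paper's exact scheme:

```python
import torch

def extragradient_step(params, loss_fn, lr=0.1):
    """One extrapolation step: take a tentative look-ahead step, then
    update the real iterate with the gradient evaluated at the look-ahead
    point. params: list of tensors with requires_grad=True; loss_fn maps
    that list to a scalar loss.
    """
    grads = torch.autograd.grad(loss_fn(params), params)
    lookahead = [p - lr * g for p, g in zip(params, grads)]   # extrapolate
    la_grads = torch.autograd.grad(loss_fn(lookahead), lookahead)
    with torch.no_grad():
        for p, g in zip(params, la_grads):
            p -= lr * g                                       # correct
    return params
```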
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.