Complementing Brightness Constancy with Deep Networks for Optical Flow Prediction
- URL: http://arxiv.org/abs/2207.03790v2
- Date: Tue, 12 Jul 2022 12:23:40 GMT
- Title: Complementing Brightness Constancy with Deep Networks for Optical Flow Prediction
- Authors: Vincent Le Guen, Clément Rambour, Nicolas Thome
- Abstract summary: COMBO is a deep network that exploits the brightness constancy (BC) model used in traditional methods.
We derive a joint training scheme for learning the different components of the decomposition, ensuring optimal cooperation.
Experiments show that COMBO can improve performance over state-of-the-art supervised networks.
- Score: 30.10864927536864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art methods for optical flow estimation rely on deep
learning, which requires complex sequential training schemes to reach optimal
performance on real-world data. In this work, we introduce COMBO, a deep
network that explicitly exploits the brightness constancy (BC) model used in
traditional methods. Since BC is an approximate physical model violated in
several situations, we propose to train a physically-constrained network
complemented by a data-driven network. We introduce a unique and meaningful
flow decomposition between the physical prior and the data-driven complement,
including an uncertainty quantification of the BC model. We derive a joint
training scheme for learning the different components of the decomposition,
ensuring optimal cooperation, in a supervised as well as a semi-supervised
context. Experiments show that COMBO can improve performance over
state-of-the-art supervised networks such as RAFT, reaching state-of-the-art
results on several benchmarks. We highlight how COMBO can leverage the BC
model and adapt to its limitations. Finally, we show that our semi-supervised
method can significantly simplify the training procedure.
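For reference, the brightness constancy assumption states that image intensity is preserved along the flow; its first-order linearization gives the classical optical flow constraint:

$$ I(x + w(x),\, t + 1) = I(x, t), \qquad \nabla I \cdot w + \partial_t I = 0. $$

The abstract does not spell out the decomposition itself. Purely as an illustration (our own assumption, not the paper's formulation), one plausible form combines a BC-constrained flow $w_{\mathrm{phys}}$ with a data-driven complement $w_{\mathrm{data}}$ through a per-pixel uncertainty map $\alpha \in [0, 1]$ that is large wherever BC is likely violated (occlusions, illumination changes):

$$ w = (1 - \alpha) \odot w_{\mathrm{phys}} + \alpha \odot w_{\mathrm{data}}. $$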
Related papers
- Adaptive Anomaly Detection in Network Flows with Low-Rank Tensor Decompositions and Deep Unrolling [9.20186865054847]
Anomaly detection (AD) is increasingly recognized as a key component for ensuring the resilience of future communication systems.
This work considers AD in network flows using incomplete measurements.
We propose a novel block-successive convex approximation algorithm based on a regularized model-fitting objective.
Inspired by Bayesian approaches, we extend the model architecture to perform online adaptation to per-flow and per-time-step statistics.
arXiv Detail & Related papers (2024-09-17T19:59:57Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for distributed training of large-scale models.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- CoopInit: Initializing Generative Adversarial Networks via Cooperative Learning [50.90384817689249]
CoopInit is a cooperative learning-based strategy that can quickly learn a good starting point for GANs.
We demonstrate the effectiveness of the proposed approach on image generation and one-sided unpaired image-to-image translation tasks.
arXiv Detail & Related papers (2023-03-21T07:49:32Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between self-supervised learning (SSL) and dynamic computation (DC) paradigms.
We show that it is feasible to simultaneously learn a dense and a gated sub-network from scratch in an SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Semi-Supervised Learning of Optical Flow by Flow Supervisor [16.406213579356795]
We propose a practical fine-tuning method to adapt a pretrained model to a target dataset without ground-truth flows.
This design is aimed at stable convergence and better accuracy over conventional self-supervision methods.
We achieve meaningful improvements over state-of-the-art optical flow models on Sintel and KITTI benchmarks.
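As a rough illustration of this style of semi-supervised fine-tuning, the sketch below shows generic teacher-student pseudo-labelling under our own assumptions: a frozen copy of the pretrained model produces target flows for unlabeled frame pairs, and the student is fine-tuned against them. The function and its signature are hypothetical, not the paper's code.

    import copy
    import torch

    def finetune_on_unlabeled(model, loader, steps=1000, lr=1e-5):
        # Hypothetical helper (our own sketch, not the paper's Flow
        # Supervisor): a frozen teacher copy of the pretrained model
        # produces pseudo ground-truth flows on unlabeled frame pairs.
        teacher = copy.deepcopy(model).eval()
        for p in teacher.parameters():
            p.requires_grad_(False)
        opt = torch.optim.AdamW(model.parameters(), lr=lr)
        for _, (frame1, frame2) in zip(range(steps), loader):
            with torch.no_grad():
                target = teacher(frame1, frame2)   # pseudo-label flow
            pred = model(frame1, frame2)
            loss = (pred - target).abs().mean()    # L1 loss to the teacher
            opt.zero_grad()
            loss.backward()
            opt.step()
        return model

In this generic form the student can only match its teacher; the paper's contribution is precisely a supervisor design that improves on such naive self-supervision, so this sketch shows the baseline shape of the idea rather than the method itself.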
arXiv Detail & Related papers (2022-07-21T06:11:52Z)
- The Principles of Deep Learning Theory [19.33681537640272]
This book develops an effective theory approach to understanding deep neural networks of practical relevance.
We explain how these effectively-deep networks learn nontrivial representations from training.
We show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks.
arXiv Detail & Related papers (2021-06-18T15:00:00Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
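Stated symbolically, and only as one plausible reading of that sentence (the precise theorem and its logarithmic factors are in the paper): for $n$ training samples and network depth $L$, a width of

$$ m = \Omega(n^2 \cdot L) $$

is claimed to suffice for gradient descent to reach a global minimum within $T = O(\log(nL))$ iterations.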
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- A Differential Game Theoretic Neural Optimizer for Training Residual Networks [29.82841891919951]
We propose a generalized Differential Dynamic Programming (DDP) neural architecture that accepts both residual connections and convolution layers.
The resulting optimal control representation admits a game-theoretic perspective, in which training residual networks can be interpreted as cooperative trajectory optimization on state-augmented systems.
arXiv Detail & Related papers (2020-07-17T10:19:17Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stable and effective training and provably solves the gradient vanishing/explosion problem.
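A minimal discretized sketch of this nested-flow idea (a toy illustration under our own assumptions, not the authors' implementation): the hidden state follows the main flow while its weight matrix is updated by matrix exponentials of a skew-symmetric generator, which keeps it exactly on O(d).

    import numpy as np
    from scipy.linalg import expm

    def odetoode_step(x, W, A, h=0.1):
        # One explicit-Euler step of a toy nested flow (illustrative only).
        # x: hidden state (d,); W: weight matrix on O(d); A: fixed generator.
        S = 0.5 * (A - A.T)             # skew-symmetric => expm(h*S) is orthogonal
        x_new = x + h * np.tanh(W @ x)  # main flow: dx/dt = f(W(t) x)
        W_new = W @ expm(h * S)         # matrix flow stays exactly on O(d)
        return x_new, W_new

    rng = np.random.default_rng(0)
    d = 4
    x = rng.normal(size=d)
    W = np.eye(d)                       # start at the identity, an element of O(d)
    A = rng.normal(size=(d, d))
    for _ in range(100):
        x, W = odetoode_step(x, W, A)
    print(np.allclose(W.T @ W, np.eye(d)))  # True: orthogonality is preserved

The final check passes because a product of orthogonal matrices is orthogonal, which is the discrete analogue of the parameter flow staying on the group.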
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.