ScopeFlow: Dynamic Scene Scoping for Optical Flow
- URL: http://arxiv.org/abs/2002.10770v2
- Date: Mon, 4 May 2020 08:19:29 GMT
- Title: ScopeFlow: Dynamic Scene Scoping for Optical Flow
- Authors: Aviram Bar-Haim, Lior Wolf
- Abstract summary: We propose to modify the common training protocols of optical flow.
The improvement is based on observing the bias in sampling challenging data.
We find that both regularization and augmentation should decrease during the training protocol.
- Score: 94.42139459221784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to modify the common training protocols of optical flow, leading
to sizable accuracy improvements without adding to the computational complexity
of the training process. The improvement is based on observing the bias in
sampling challenging data that exists in the current training protocol, and
improving the sampling process. In addition, we find that both regularization
and augmentation should decrease during the training protocol.
Using an existing low-parameter architecture, the method is ranked first on
the MPI Sintel benchmark among all other methods, improving the accuracy of the
best two-frame method by more than 10%. The method also surpasses all similar
architecture variants by more than 12% and 19.7% on the KITTI benchmarks,
achieving the lowest Average End-Point Error on KITTI2012 among two-frame
methods, without using extra datasets.
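The abstract's key training change is that both regularization and augmentation should decrease over the course of training. The paper's exact schedules and values are not reproduced here; a minimal sketch under the assumption of a simple linear anneal (all endpoint values are illustrative, not ScopeFlow's actual settings):

```python
def decayed_training_params(epoch, total_epochs,
                            aug_prob_start=0.8, aug_prob_end=0.2,
                            weight_decay_start=1e-4, weight_decay_end=1e-6):
    """Linearly anneal augmentation probability and weight decay.

    Hypothetical schedule: endpoints and linear shape are assumptions
    for illustration, not the schedules used in the ScopeFlow paper.
    """
    t = min(max(epoch / total_epochs, 0.0), 1.0)  # training progress in [0, 1]
    aug_prob = aug_prob_start + t * (aug_prob_end - aug_prob_start)
    weight_decay = weight_decay_start + t * (weight_decay_end - weight_decay_start)
    return aug_prob, weight_decay
```

Early epochs then train with strong augmentation and regularization, and both fade as the network converges.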
Related papers
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning [60.93679437452872]
We introduce an efficient optimization-based meta-learning technique for large-scale neural field training.
We show how gradient re-scaling at meta-test time allows the learning of extremely high-quality neural fields.
Our framework is model-agnostic, intuitive, straightforward to implement, and shows significant reconstruction improvements for a wide range of signals.
arXiv Detail & Related papers (2023-02-01T17:32:16Z)
- Dynamic Batch Adaptation [2.861848675707603]
Current deep learning adaptive methods adjust the step magnitude of parameter updates by altering the effective learning rate used by each parameter.
Motivated by the known inverse relation between batch size and learning rate on update step magnitudes, we introduce a novel training procedure that dynamically decides the dimension and the composition of the current update step.
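The known relation between batch size and learning rate referenced here is commonly instantiated as the linear scaling rule. A minimal sketch of that general heuristic (not the paper's dynamic batch-adaptation procedure itself):

```python
def scaled_lr(base_lr, base_batch_size, batch_size):
    """Linear scaling rule: scale the learning rate with batch size.

    General heuristic, not this paper's method: a larger batch averages
    more gradients per update, so the step size grows proportionally.
    """
    return base_lr * (batch_size / base_batch_size)
```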
arXiv Detail & Related papers (2022-08-01T12:52:09Z)
- RAFT-MSF: Self-Supervised Monocular Scene Flow using Recurrent Optimizer [21.125470798719967]
We introduce a self-supervised monocular scene flow method that substantially improves the accuracy over the previous approaches.
Based on RAFT, a state-of-the-art optical flow model, we design a new decoder to iteratively update 3D motion fields and disparity maps simultaneously.
Our method achieves state-of-the-art accuracy among all self-supervised monocular scene flow methods, improving accuracy by 34.2%.
arXiv Detail & Related papers (2022-05-03T15:43:57Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that our latent adversarial perturbations adaptive to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- SIMPLE: SIngle-network with Mimicking and Point Learning for Bottom-up Human Pose Estimation [81.03485688525133]
We propose a novel multi-person pose estimation framework, SIngle-network with Mimicking and Point Learning for Bottom-up Human Pose Estimation (SIMPLE).
Specifically, in the training process, we enable SIMPLE to mimic the pose knowledge from the high-performance top-down pipeline.
Besides, SIMPLE formulates human detection and pose estimation as a unified point learning framework, so that the two tasks complement each other in a single network.
arXiv Detail & Related papers (2021-04-06T13:12:51Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Passive Batch Injection Training Technique: Boosting Network Performance by Injecting Mini-Batches from a different Data Distribution [39.8046809855363]
This work presents a novel training technique for deep neural networks that makes use of additional data from a distribution that is different from that of the original input data.
To the best of our knowledge, this is the first work that makes use of a different data distribution to aid the training of convolutional neural networks (CNNs).
arXiv Detail & Related papers (2020-06-08T08:17:32Z)
- Generalized Reinforcement Meta Learning for Few-Shot Optimization [3.7675996866306845]
We present a generic and flexible Reinforcement Learning (RL) based meta-learning framework for the problem of few-shot learning.
Our framework could be easily extended to do network architecture search.
arXiv Detail & Related papers (2020-05-04T03:21:05Z)
- kDecay: Just adding k-decay items on Learning-Rate Schedule to improve Neural Networks [5.541389959719384]
k-decay effectively improves the performance of commonly used, simple LR schedules.
We evaluate the k-decay method on the CIFAR and ImageNet datasets with different neural networks.
The accuracy has been improved by 1.08% on the CIFAR-10 dataset and by 2.07% on the CIFAR-100 dataset.
arXiv Detail & Related papers (2020-04-13T12:58:45Z)
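The k-decay entry above modifies a standard learning-rate schedule with an extra decay exponent. A minimal sketch of one plausible reading, applying an exponent k to a polynomial decay (the exact k-decay formulation is given in the cited paper; the function name and defaults here are illustrative):

```python
def k_decay_lr(step, total_steps, lr_start=0.1, lr_end=1e-4, k=2.0):
    """Polynomial LR decay reshaped by an exponent k.

    Illustrative assumption, not the paper's exact formula: larger k
    keeps the LR near lr_start for longer before decaying to lr_end.
    """
    t = min(max(step / total_steps, 0.0), 1.0)  # training progress in [0, 1]
    return lr_end + (lr_start - lr_end) * (1.0 - t ** k)
```

With k = 1 this reduces to plain linear decay; raising k delays the decay toward the end of training.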
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.