MetaComp: Learning to Adapt for Online Depth Completion
- URL: http://arxiv.org/abs/2207.10623v1
- Date: Thu, 21 Jul 2022 17:30:37 GMT
- Title: MetaComp: Learning to Adapt for Online Depth Completion
- Authors: Yang Chen, Shanshan Zhao, Wei Ji, Mingming Gong, Liping Xie
- Abstract summary: We propose MetaComp, which simulates adaptation policies during the training phase and adapts the model to new environments in a self-supervised manner in testing.
Considering that the input is multi-modal data, it would be challenging to adapt a model to variations in two modalities simultaneously.
- Experimental summary: Experimental results and comprehensive ablations show that our MetaComp adapts effectively to depth completion in a new environment and is robust to changes in different modalities.
- Score: 47.2074274233496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relying on deep supervised or self-supervised learning, previous methods for
depth completion from paired single image and sparse depth data have achieved
impressive performance in recent years. However, facing a new environment where
the test data occurs online and differs from the training data in the RGB image
content and depth sparsity, the trained model might suffer severe performance
drop. To encourage the trained model to work well in such conditions, we expect
it to be capable of adapting to the new environment continuously and
effectively. To achieve this, we propose MetaComp. It utilizes the
meta-learning technique to simulate adaptation policies during the training
phase, and then adapts the model to new environments in a self-supervised
manner in testing. Considering that the input is multi-modal data, it would be
challenging to adapt a model to variations in two modalities simultaneously,
due to significant differences in the structure and form of the two modalities.
Therefore, we further propose to disentangle the adaptation procedure in the
basic meta-learning training into two steps, the first one focusing on the
depth sparsity while the second attending to the image content. During testing,
we take the same strategy to adapt the model online to new multi-modal data.
Experimental results and comprehensive ablations show that our MetaComp adapts
effectively to depth completion in a new environment and is robust to changes
in different modalities.
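The disentangled two-step adaptation described above can be illustrated with a toy sketch. This is a hypothetical simplification, not the paper's implementation: a linear model stands in for the completion network, a self-supervised regression loss stands in for the paper's actual objective, and the two inner gradient steps mirror the split between depth-sparsity variation and image-content variation. The `make_batch`, `loss_and_grad`, and `adapt_two_step` names are illustrative inventions, and only the online test-time loop is sketched (the meta-training outer loop is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                              # toy feature dimension
W_true = rng.normal(size=(1, D))   # "ground-truth" completion mapping

def make_batch(vary="sparsity", n=32):
    """Simulate a batch whose depth sparsity or image content shifts."""
    x = rng.normal(size=(D, n))
    if vary == "sparsity":
        # Randomly drop entries, imitating varying depth sparsity.
        x = x * (rng.random((D, n)) > 0.5)
    else:
        # Shift feature statistics, imitating new image content.
        x = x + rng.normal(scale=0.5, size=(D, 1))
    y = W_true @ x
    return x, y

def loss_and_grad(W, x, y):
    """Mean-squared error of the linear model and its gradient in W."""
    r = W @ x - y
    return float(np.mean(r ** 2)), 2.0 * (r @ x.T) / x.shape[1]

def adapt_two_step(W, lr=0.05):
    """One online update: first adapt to sparsity, then to content."""
    x, y = make_batch("sparsity")          # step 1: depth sparsity
    _, g = loss_and_grad(W, x, y)
    W = W - lr * g
    x, y = make_batch("content")           # step 2: image content
    _, g = loss_and_grad(W, x, y)
    return W - lr * g

W = rng.normal(size=(1, D))                # "meta-trained" initialization
x_test, y_test = make_batch("content")
before, _ = loss_and_grad(W, x_test, y_test)
for _ in range(50):                        # stream of online test batches
    W = adapt_two_step(W)
after, _ = loss_and_grad(W, x_test, y_test)
print(after < before)                      # error shrinks as the model adapts
```

The key design point the sketch reflects is sequencing: each modality gets its own gradient step, so the update for depth sparsity is not entangled with the update for image content.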
Related papers
- Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning [119.70303730341938]
We propose ePisode cUrriculum inveRsion (ECI) during data-free meta training and invErsion calibRation following inner loop (ICFIL) during meta testing.
ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model.
We formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner.
arXiv Detail & Related papers (2023-03-20T15:10:41Z) - Learn to Adapt for Monocular Depth Estimation [17.887575611570394]
We propose an adversarial depth estimation task and train the model in the pipeline of meta-learning.
Our method adapts well to new datasets after a few training steps during the test procedure.
arXiv Detail & Related papers (2022-03-26T06:49:22Z) - Friendly Training: Neural Networks Can Adapt Data To Make Learning
Easier [23.886422706697882]
We propose a novel training procedure named Friendly Training.
We show that Friendly Training yields improvements with respect to informed data sub-selection and random selection.
Results suggest that adapting the input data is a feasible way to stabilize learning and improve the skills generalization of the network.
arXiv Detail & Related papers (2021-06-21T10:50:34Z) - Learning to Continuously Optimize Wireless Resource in a Dynamic
Environment: A Bilevel Optimization Perspective [52.497514255040514]
This work develops a new approach that enables data-driven methods to continuously learn and optimize resource allocation strategies in a dynamic environment.
We propose to build the notion of continual learning into wireless system design, so that the learning model can incrementally adapt to the new episodes.
Our design is based on a novel bilevel optimization formulation which ensures a certain "fairness" across different data samples.
arXiv Detail & Related papers (2021-05-03T07:23:39Z) - MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption [69.76837484008033]
An unresolved problem in Deep Learning is the ability of neural networks to cope with domain shifts during test-time.
We combine meta-learning, self-supervision and test-time training to learn to adapt to unseen test distributions.
Our approach significantly improves the state-of-the-art results on the CIFAR-10-Corrupted image classification benchmark.
arXiv Detail & Related papers (2021-03-30T09:33:38Z) - Learning Adaptable Policy via Meta-Adversarial Inverse Reinforcement
Learning for Decision-making Tasks [2.1485350418225244]
We build an adaptable imitation learning model based on the integration of Meta-learning and Adversarial Inverse Reinforcement Learning.
We exploit the adversarial learning and inverse reinforcement learning mechanisms to learn policies and reward functions simultaneously from available training tasks.
arXiv Detail & Related papers (2021-03-23T17:16:38Z) - Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell
Classification [8.998976678920236]
We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
arXiv Detail & Related papers (2020-07-09T18:03:12Z) - Hybrid Generative-Retrieval Transformers for Dialogue Domain Adaptation [77.62366712130196]
We present the winning entry at the fast domain adaptation task of DSTC8, a hybrid generative-retrieval model based on GPT-2 fine-tuned to the multi-domain MetaLWOz dataset.
Our model uses retrieval logic as a fallback, being SoTA on MetaLWOz in human evaluation (>4% improvement over the 2nd place system) and attaining competitive generalization performance in adaptation to the unseen MultiWOZ dataset.
arXiv Detail & Related papers (2020-03-03T18:07:42Z) - Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.