Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers
- URL: http://arxiv.org/abs/2311.18710v1
- Date: Thu, 30 Nov 2023 17:02:27 GMT
- Title: Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers
- Authors: Matthieu Terris, Thomas Moreau
- Abstract summary: Real-world imaging challenges often lack ground truth data, rendering traditional supervised approaches ineffective.
Our method trains a meta-model on a diverse set of imaging tasks that allows the model to be efficiently fine-tuned for specific tasks.
In simple settings, this approach recovers the Bayes optimal estimator, illustrating the soundness of our approach.
- Score: 9.364509804053275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have become a foundational tool for addressing imaging
inverse problems. They are typically trained for a specific task, with a
supervised loss to learn a mapping from the observations to the image to be
recovered. However, real-world imaging challenges often lack ground truth data,
rendering traditional supervised approaches ineffective. Moreover, for each new
imaging task, a new model needs to be trained from scratch, wasting time and
resources. To overcome these limitations, we introduce a novel approach based
on meta-learning. Our method trains a meta-model on a diverse set of imaging
tasks that allows the model to be efficiently fine-tuned for specific tasks
with few fine-tuning steps. We show that the proposed method extends to the
unsupervised setting, where no ground truth data is available. In its bilevel
formulation, the outer level uses a supervised loss that evaluates how well
the fine-tuned model performs, while the inner loss can be either supervised or
unsupervised, relying only on the measurement operator. This allows the
meta-model to leverage a few ground truth samples for each task while being
able to generalize to new imaging tasks. We show that in simple settings, this
approach recovers the Bayes optimal estimator, illustrating the soundness of
our approach. We also demonstrate our method's effectiveness on various tasks,
including image processing and magnetic resonance imaging.
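The bilevel training described in the abstract can be pictured as a MAML-style loop: an inner loop fine-tunes a copy of the meta-model on one task, using either a supervised loss or an unsupervised measurement-consistency loss that needs only the forward operator, while an outer supervised loss on a few ground-truth pairs drives the meta-update. The snippet below is a minimal first-order sketch under these assumptions, not the authors' implementation; the task dictionary keys, the `physics` operator, and all hyperparameters are illustrative placeholders.

```python
import copy
import torch
import torch.nn.functional as F

def inner_adapt(meta_model, y_obs, physics, x_gt=None, steps=3, lr=1e-3):
    """Fine-tune a copy of the meta-model on a single task.

    With ground truth (x_gt) the inner loss is supervised; without it, an
    unsupervised measurement-consistency loss ||A(f(y)) - y||^2 is used,
    relying only on the measurement operator A (`physics`)."""
    model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        x_hat = model(y_obs)
        if x_gt is not None:
            loss = F.mse_loss(x_hat, x_gt)            # supervised inner loss
        else:
            loss = F.mse_loss(physics(x_hat), y_obs)  # measurement consistency
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def meta_step(meta_model, meta_opt, tasks):
    """One first-order meta-update over a batch of tasks: adapt on each task's
    support data, score the adapted model with a supervised outer loss on a few
    ground-truth query pairs, and push those gradients onto the meta-weights."""
    meta_opt.zero_grad()
    for task in tasks:
        adapted = inner_adapt(meta_model, task["y_support"], task["A"],
                              x_gt=task.get("x_support"))
        outer = F.mse_loss(adapted(task["y_query"]), task["x_query"])
        grads = torch.autograd.grad(outer, adapted.parameters())
        for p, g in zip(meta_model.parameters(), grads):  # first-order approximation
            p.grad = g.clone() if p.grad is None else p.grad + g
    meta_opt.step()
```

In the fully unsupervised inner setting, each task only needs its observations and operator; the few ground-truth query pairs are reserved for the outer loss, matching the abstract's description.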
Related papers
- Data Adaptive Traceback for Vision-Language Foundation Models in Image Classification [34.37262622415682]
We propose a new adaptation framework called Data Adaptive Traceback.
Specifically, we utilize a zero-shot-based method to extract the subset of the pre-training data most relevant to the downstream task.
We adopt a pseudo-label-based semi-supervised technique to reuse the pre-training images and a vision-language contrastive learning method to address the confirmation bias issue in semi-supervised learning.
arXiv Detail & Related papers (2024-07-11T18:01:58Z)
- Unsupervised Meta-Learning via In-Context Learning [3.4165401459803335]
We propose a novel approach to unsupervised meta-learning that leverages the generalization abilities of in-context learning.
Our method reframes meta-learning as a sequence modeling problem, enabling the transformer encoder to learn task context from support images.
arXiv Detail & Related papers (2024-05-25T08:29:46Z)
- One-Shot Image Restoration [0.0]
Experimental results demonstrate the applicability, robustness and computational efficiency of the proposed approach for supervised image deblurring and super-resolution.
Our results showcase significant improvements in the learning models' sample efficiency, generalization, and time complexity.
arXiv Detail & Related papers (2024-04-26T14:03:23Z)
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all the multi-task data for training.
At the task level, we aim to find the optimal task order to minimize the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training (a minimal instance-level sketch follows this entry).
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
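To make the instance-level part of the Data-CUBE summary above concrete, here is a toy sketch that ranks training instances with a placeholder difficulty function and packs them into easy-to-difficult mini-batches. The scoring function and batching policy are assumptions for illustration, not the paper's actual criterion.

```python
from typing import Callable, List, Sequence, TypeVar

T = TypeVar("T")

def easy_to_difficult_batches(
    instances: Sequence[T],
    difficulty_fn: Callable[[T], float],  # hypothetical per-instance difficulty score
    batch_size: int,
) -> List[List[T]]:
    """Sort instances by estimated difficulty and split them into mini-batches,
    so training sees easy batches before difficult ones (the instance-level
    curriculum sketched in the summary above)."""
    ranked = sorted(instances, key=difficulty_fn)
    return [list(ranked[i:i + batch_size]) for i in range(0, len(ranked), batch_size)]

# Example with a toy proxy: shorter sentences are treated as "easier".
batches = easy_to_difficult_batches(
    ["a cat", "a very long and convoluted sentence about cats", "dogs"],
    difficulty_fn=len,
    batch_size=2,
)
```

A task-level analogue would similarly order whole tasks by an estimated cross-task interference risk before interleaving their batches.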
- Unsupervised Deep Learning-based Pansharpening with Jointly-Enhanced Spectral and Spatial Fidelity [4.425982186154401]
We propose a new deep learning-based pansharpening model that fully exploits the potential of the unsupervised approach.
The proposed model features a novel loss function that jointly promotes the spectral and spatial quality of the pansharpened data.
Experiments on a large variety of test images, performed in challenging scenarios, demonstrate that the proposed method compares favorably with the state of the art.
arXiv Detail & Related papers (2023-07-26T17:25:28Z)
- UMat: Uncertainty-Aware Single Image High Resolution Material Capture [2.416160525187799]
We propose a learning-based method to recover normals, specularity, and roughness from a single diffuse image of a material.
Our method is the first one to deal with the problem of modeling uncertainty in material digitization.
arXiv Detail & Related papers (2023-05-25T17:59:04Z)
- Hard Patches Mining for Masked Image Modeling [52.46714618641274]
Masked image modeling (MIM) has attracted much research attention due to its promising potential for learning scalable visual representations.
We propose Hard Patches Mining (HPM), a brand-new framework for MIM pre-training.
arXiv Detail & Related papers (2023-04-12T15:38:23Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- DETA: Denoised Task Adaptation for Few-Shot Learning [135.96805271128645]
Test-time task adaptation in few-shot learning aims to adapt a pre-trained task-agnostic model for capturing task-specific knowledge.
With only a handful of samples available, the adverse effect of either the image noise (a.k.a. X-noise) or the label noise (a.k.a. Y-noise) from support samples can be severely amplified.
We propose DEnoised Task Adaptation (DETA), a first, unified image- and label-denoising framework orthogonal to existing task adaptation approaches.
arXiv Detail & Related papers (2023-03-11T05:23:20Z)
- PatchNR: Learning from Small Data by Patch Normalizing Flow Regularization [57.37911115888587]
We introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows.
Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images (a toy sketch of the patch-prior idea follows this entry).
arXiv Detail & Related papers (2022-05-24T12:14:26Z)
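As a rough illustration of the patchNR entry above, the sketch below penalizes the negative log-likelihood of image patches under a patch density model inside a toy variational reconstruction. The `forward_op` and `patch_log_prob` callables are placeholder interfaces (patchNR trains a normalizing flow on patches; a Gaussian stands in here), and this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def extract_patches(x: torch.Tensor, size: int = 6) -> torch.Tensor:
    """Split an image batch (B, C, H, W) into flattened, non-overlapping patches."""
    b, c, _, _ = x.shape
    patches = x.unfold(2, size, size).unfold(3, size, size)  # (B, C, nH, nW, size, size)
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c * size * size)

def patch_regularizer(x: torch.Tensor, patch_log_prob) -> torch.Tensor:
    """Negative log-likelihood of x's patches under a learned patch density
    (a normalizing flow in patchNR; any callable returning per-patch
    log-probabilities works in this sketch)."""
    return -patch_log_prob(extract_patches(x)).mean()

def reconstruct(y, forward_op, patch_log_prob, x_init, lam=0.1, steps=200, lr=1e-2):
    """Toy variational reconstruction: minimize ||A(x) - y||^2 + lam * R(x),
    where `forward_op` plays the role of the measurement operator A."""
    x = x_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(forward_op(x), y) + lam * patch_regularizer(x, patch_log_prob)
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()

# Example with stand-ins: identity operator and an untrained Gaussian patch prior.
gauss = torch.distributions.Independent(
    torch.distributions.Normal(torch.zeros(36), torch.ones(36)), 1)
y = torch.rand(1, 1, 64, 64)
x_hat = reconstruct(y, forward_op=lambda x: x, patch_log_prob=gauss.log_prob,
                    x_init=torch.zeros_like(y))
```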
- Pose Guided Person Image Generation with Hidden p-Norm Regression [113.41144529452663]
We propose a novel approach to solve the pose guided person image generation task.
Our method estimates a pose-invariant feature matrix for each identity, and uses it to predict the target appearance conditioned on the target pose.
Our method yields competitive performance in all the aforementioned variant scenarios.
arXiv Detail & Related papers (2021-02-19T17:03:54Z)