A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding
- URL: http://arxiv.org/abs/2006.14859v2
- Date: Mon, 9 Nov 2020 10:00:10 GMT
- Title: A Flexible Framework for Designing Trainable Priors with Adaptive
Smoothing and Game Encoding
- Authors: Bruno Lecouat, Jean Ponce, Julien Mairal
- Abstract summary: We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
- Score: 57.1077544780653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a general framework for designing and training neural network
layers whose forward passes can be interpreted as solving non-smooth convex
optimization problems, and whose architectures are derived from an optimization
algorithm. We focus on convex games, solved by local agents represented by the
nodes of a graph and interacting through regularization functions. This
approach is appealing for solving imaging problems, as it allows the use of
classical image priors within deep models that are trainable end to end. The
priors used in this presentation include variants of total variation, Laplacian
regularization, bilateral filtering, sparse coding on learned dictionaries, and
non-local self similarities. Our models are fully interpretable as well as
parameter and data efficient. Our experiments demonstrate their effectiveness
on a large diversity of tasks ranging from image denoising and compressed
sensing for fMRI to dense stereo matching.
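To make the layer-as-optimizer idea concrete, here is a minimal sketch in PyTorch (our own illustration, not the paper's game-encoding architecture): a layer whose forward pass runs a few ISTA iterations for sparse coding on a learned dictionary, so a classical prior becomes trainable end to end.

    import torch
    import torch.nn as nn

    class UnrolledSparseCoding(nn.Module):
        """Forward pass = n_steps of ISTA for min_z 0.5*||x - Dz||^2 + lam*||z||_1.

        The dictionary D and threshold lam are nn.Parameters, so the classical
        sparse-coding prior is trained end to end by backpropagating through
        the iterations of the optimization algorithm.
        """

        def __init__(self, dim, code_dim, n_steps=10):
            super().__init__()
            self.D = nn.Parameter(0.1 * torch.randn(dim, code_dim))
            self.log_lam = nn.Parameter(torch.tensor(-3.0))  # keeps lam > 0
            self.n_steps = n_steps

        def forward(self, x):  # x: (batch, dim)
            step = 1.0 / (torch.linalg.matrix_norm(self.D, ord=2) ** 2 + 1e-6)
            lam = self.log_lam.exp()
            z = x.new_zeros(x.shape[0], self.D.shape[1])
            for _ in range(self.n_steps):
                grad = (z @ self.D.T - x) @ self.D  # gradient of smooth data term
                u = z - step * grad                 # gradient step
                z = u.sign() * (u.abs() - step * lam).clamp(min=0.0)  # prox of L1
            return z @ self.D.T                     # reconstruction

Every operation above is differentiable, so gradients reach D and lam through all iterations; the paper generalizes this pattern from a single ISTA loop to convex games between local agents with non-smooth regularizers.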
Related papers
- SequentialAttention++ for Block Sparsification: Differentiable Pruning
Meets Combinatorial Optimization [24.55623897747344]
Neural network pruning is a key technique for engineering large yet scalable, interpretable, and generalizable models.
We show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group-sparse optimization.
We propose SequentialAttention++, which advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets.
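To make the group-sparse view concrete, the sketch below shows the classical (convex) group-lasso penalty over weight blocks; the techniques discussed in the paper correspond to nonconvex relatives of this penalty, and all names here are our own.

    import torch

    def group_lasso_penalty(weight: torch.Tensor, block_size: int) -> torch.Tensor:
        """Sum of L2 norms over contiguous weight blocks: sum_g ||w_g||_2.

        Added to the training loss with a multiplier, this drives whole
        blocks to zero, i.e. block-wise pruning by optimization.
        """
        blocks = weight.reshape(-1, block_size)  # one row per block
        return blocks.norm(dim=1).sum()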
arXiv Detail & Related papers (2024-02-27T21:42:18Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
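A toy sketch of the unrolling strategy (PyTorch; the quadratic inner objective and constants are placeholders of ours): every solver iterate stays on the autograd tape, so derivatives of the solution with respect to the parameters come from ordinary backpropagation. This is the pass whose structure the paper analyzes and replaces with analytical models.

    import torch

    def unrolled_argmin(theta, x0, steps=50, lr=0.1):
        """Approximate argmin_x f(x; theta) by unrolled gradient descent."""
        x = x0.clone().requires_grad_(True)
        for _ in range(steps):
            # Toy inner objective: f(x; theta) = 0.5 * ||x - theta||^2.
            f = 0.5 * (x - theta).pow(2).sum()
            (g,) = torch.autograd.grad(f, x, create_graph=True)
            x = x - lr * g  # this update is recorded on the autograd tape
        return x

    theta = torch.randn(3, requires_grad=True)
    x_star = unrolled_argmin(theta, torch.zeros(3))
    x_star.sum().backward()  # backpropagates through all 50 inner steps
    print(theta.grad)        # ~0.995 per coordinate: solver acts near-identity here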
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Spherical Image Inpainting with Frame Transformation and Data-driven Prior Deep Networks [13.406134708071345]
In this work, we focus on the challenging task of spherical image inpainting with a deep learning-based regularizer.
We employ a fast directional spherical Haar framelet transform and develop a novel optimization framework based on a sparsity assumption of the framelet transform.
We show that the proposed algorithms can effectively recover damaged spherical images, outperforming approaches that purely use a deep learning denoiser or a plug-and-play model.
arXiv Detail & Related papers (2022-09-29T07:51:27Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
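The core mechanism can be sketched in a few lines (our simplification in PyTorch): a shrinkage activation with a learnable per-channel threshold, playing the role of the proximal operator of an implicit sparsity prior. The paper learns considerably richer, network-parameterized shrinkage functions than this.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LearnableShrinkage(nn.Module):
        """Soft-threshold with a learnable per-channel threshold tau > 0."""

        def __init__(self, channels):
            super().__init__()
            self.raw = nn.Parameter(torch.full((channels, 1, 1), -4.0))

        def forward(self, x):  # x: (batch, channels, H, W)
            tau = F.softplus(self.raw)  # reparameterization keeps tau positive
            return x.sign() * (x.abs() - tau).clamp(min=0.0)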
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
Deep equilibrium models are a class of models that forgo traditional network depth and instead compute the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between this fixed-point computation and optimization over the network's inputs.
We demonstrate this strategy on tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training, and gradient-based meta-learning.
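In its simplest form, a deep equilibrium layer looks like the hedged sketch below (PyTorch, names ours); practical DEQs solve the fixed point with accelerated root-finding and differentiate it implicitly instead of unrolling the loop.

    import torch
    import torch.nn as nn

    class TinyDEQ(nn.Module):
        """Output = fixed point z* = f(z*, x) of a single nonlinear layer."""

        def __init__(self, dim):
            super().__init__()
            self.lin_z = nn.Linear(dim, dim)
            self.lin_x = nn.Linear(dim, dim)

        def forward(self, x, iters=30):
            z = torch.zeros_like(x)
            for _ in range(iters):  # plain fixed-point iteration; assumes f is
                z = torch.tanh(self.lin_z(z) + self.lin_x(x))  # a contraction
            return z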
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Combinatorial Optimization for Panoptic Segmentation: An End-to-End Trainable Approach [23.281726932718232]
We propose an end-to-end trainable architecture for simultaneous semantic and instance segmentation.
Our approach shows the utility of using optimization in tandem with deep learning on a challenging, large-scale, real-world problem.
arXiv Detail & Related papers (2021-06-06T17:39:13Z)
- Regularization via deep generative models: an analysis point of view [8.818465117061205]
This paper proposes a new way of regularizing an inverse problem in imaging (e.g., deblurring or inpainting) by means of a deep generative neural network.
In many cases, our technique achieves a clear performance improvement and appears to be more robust.
arXiv Detail & Related papers (2021-01-21T15:04:57Z)
- Blind Image Restoration with Flow Based Priors [19.190289348734215]
In a blind setting with unknown degradations, a good prior remains crucial.
We propose using normalizing flows to model the distribution of the target content and to use this as a prior in a maximum a posteriori (MAP) formulation.
To the best of our knowledge, this is the first work that explores normalizing flows as a prior in image enhancement problems.
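Schematically, MAP restoration with a flow prior can be sketched as below (PyTorch; `degrade` and `flow_log_prob` are hypothetical stand-ins for the degradation operator and a trained flow's log-density, and the weighting is a placeholder):

    import torch

    def map_restore(y, degrade, flow_log_prob, x0, lam=1.0, steps=200, lr=0.05):
        """argmin_x ||degrade(x) - y||^2 - lam * log p_flow(x)."""
        x = x0.detach().clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            data_term = (degrade(x) - y).pow(2).sum()  # -log p(y|x), Gaussian noise
            loss = data_term - lam * flow_log_prob(x)  # minus the learned log-prior
            loss.backward()
            opt.step()
        return x.detach()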
arXiv Detail & Related papers (2020-09-09T21:40:11Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed hybrid plug-and-play (H-PnP) based image CS problem.
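For orientation, a generic plug-and-play loop via half-quadratic splitting is sketched below (not the paper's H-PnP algorithm; `A`, `At`, and `denoiser` are hypothetical caller-supplied callables):

    import torch

    def pnp_hqs(y, A, At, denoiser, x0, rho=0.5, iters=20):
        """Half-quadratic splitting for CS recovery from y = A(x) + noise."""
        x = z = x0
        for _ in range(iters):
            # x-step: inexact least squares by a few gradient steps
            for _ in range(5):
                x = x - 0.1 * (At(A(x) - y) + rho * (x - z))  # 0.1: placeholder step
            # z-step: a denoiser stands in for the prox of the implicit prior
            z = denoiser(x)
        return z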
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.