CNN-based Euler's Elastica Inpainting with Deep Energy and Deep Image Prior
- URL: http://arxiv.org/abs/2207.07921v1
- Date: Sat, 16 Jul 2022 12:11:28 GMT
- Title: CNN-based Euler's Elastica Inpainting with Deep Energy and Deep Image Prior
- Authors: Karl Schrader, Tobias Alt, Joachim Weickert, Michael Ertel
- Abstract summary: We design the first neural algorithm that simulates inpainting with Euler's elastica.
We use the deep energy concept which employs the variational energy as neural network loss.
Our results are on par with state-of-the-art algorithms on elastica-based shape completion.
- Score: 10.848775419008442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Euler's elastica constitute an appealing variational image inpainting model.
It minimises an energy that involves the total variation as well as the level
line curvature. These components are transparent and make it attractive for
shape completion tasks. However, its gradient flow is a singular, anisotropic,
and nonlinear PDE of fourth order, which is numerically challenging: It is
difficult to find efficient algorithms that offer sharp edges and good rotation
invariance. As a remedy, we design the first neural algorithm that simulates
inpainting with Euler's elastica. We use the deep energy concept which employs
the variational energy as neural network loss. Furthermore, we pair it with a
deep image prior where the network architecture itself acts as a prior. This
yields better inpaintings by steering the optimisation trajectory closer to the
desired solution. Our results are qualitatively on par with state-of-the-art
algorithms on elastica-based shape completion. They combine good rotation
invariance with sharp edges. Moreover, we benefit from the high efficiency and
effortless parallelisation within a neural framework. Our neural elastica
approach only requires 3x3 central difference stencils. It is thus much simpler
than other well-performing algorithms for elastica inpainting. Last but not
least, it is unsupervised as it requires no ground truth training data.
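The deep energy loss can be made concrete with a short sketch. The following PyTorch code (a minimal illustration, not the authors' implementation) discretises the elastica energy E(u) = sum over pixels of (a + b*kappa^2)*|grad u|, with level line curvature kappa = div(grad u / |grad u|), using only 3x3 central difference stencils, and minimises it over the weights of a CNN fed a fixed noise input, in deep image prior fashion. The parameter values (a, b, eps, steps, lr), the replicate boundary padding, and the hard masking of known pixels are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def elastica_energy(u, a=1.0, b=10.0, eps=1e-6):
    """Discretised elastica energy  E(u) = sum (a + b*kappa^2) * |grad u|,
    with kappa = div(grad u / |grad u|), built from 3x3 central
    difference stencils only.  u: image tensor of shape (B, 1, H, W)."""
    # 3x3 central difference kernels for the x- and y-derivatives
    kx = torch.tensor([[0., 0., 0.], [-0.5, 0., 0.5], [0., 0., 0.]],
                      device=u.device, dtype=u.dtype).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    dx = lambda v: F.conv2d(F.pad(v, (1, 1, 1, 1), mode='replicate'), kx)
    dy = lambda v: F.conv2d(F.pad(v, (1, 1, 1, 1), mode='replicate'), ky)

    ux, uy = dx(u), dy(u)
    grad_norm = torch.sqrt(ux ** 2 + uy ** 2 + eps)  # regularised |grad u|
    # level line curvature: divergence of the normalised gradient field
    kappa = dx(ux / grad_norm) + dy(uy / grad_norm)
    return ((a + b * kappa ** 2) * grad_norm).sum()

def inpaint(f, mask, net, steps=5000, lr=1e-3):
    """Deep energy + deep image prior inpainting loop (sketch).
    f: damaged image (B, 1, H, W); mask: 1 on known pixels, 0 in the hole;
    net: any CNN mapping a fixed noise tensor to an image (the prior)."""
    z = torch.randn_like(f)                    # fixed random input (DIP)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # known pixels are imposed as a hard constraint via masking, so
        # the network only fills the hole and the elastica energy is the
        # sole loss -- no ground truth data is involved
        u = mask * f + (1.0 - mask) * net(z)
        loss = elastica_energy(u)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return mask * f + (1.0 - mask) * net(z)
```

Any small encoder-decoder CNN could serve as `net`; since the loss is the variational energy itself, the optimisation is unsupervised, matching the setup described in the abstract.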
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions (a minimal sketch of the classic Sinkhorn iteration appears after this list).
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Euler's Elastica Based Cartoon-Smooth-Texture Image Decomposition [4.829677240798159]
We propose a novel model for decomposing grayscale images into three distinct components.
The structural part represents strong boundaries and regions with strong light-to-dark transitions; the smooth part captures soft shadows and shading; and the oscillatory part characterizes textures and noise.
arXiv Detail & Related papers (2024-07-03T03:42:33Z)
- Efficient and Effective Implicit Dynamic Graph Neural Network [42.49148111696576]
We present the Implicit Dynamic Graph Neural Network (IDGNN), a novel implicit neural network for dynamic graphs.
A key characteristic of IDGNN is that it is demonstrably well-posed, i.e., it is theoretically guaranteed to have a fixed-point representation.
arXiv Detail & Related papers (2024-06-25T19:07:21Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee [21.818773423324235]
This paper focuses on two model compression techniques: low-rank approximation and weight pruning.
In this paper, a holistic framework is proposed for model compression from a novel perspective of nonconvex optimization.
arXiv Detail & Related papers (2023-03-13T02:14:42Z)
- Visual Explanations from Deep Networks via Riemann-Stieltjes Integrated Gradient-based Localization [0.24596929878045565]
We introduce a new technique to produce visual explanations for the predictions of a CNN.
Our method can be applied to any layer of the network, and, like Integrated Gradients, it is not affected by the problem of vanishing gradients.
Compared to Grad-CAM, heatmaps produced by our algorithm are better focused in the areas of interest, and their numerical computation is more stable.
arXiv Detail & Related papers (2022-05-22T18:30:38Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings: inference in equilibrium networks and optimization over their inputs.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training, and gradient-based meta-learning (a naive fixed-point sketch appears after this list).
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Neural Knitworks: Patched Neural Implicit Representation Networks [1.0470286407954037]
We propose Knitwork, an architecture for neural implicit representation learning of natural images that achieves image synthesis.
To the best of our knowledge, this is the first implementation of a coordinate-based patch tailored for synthesis tasks such as image inpainting, super-resolution, and denoising.
The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity.
arXiv Detail & Related papers (2021-09-29T13:10:46Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
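Two of the entries above reference techniques that are easy to sketch. For LinSATNet, the paper's layer extends the classic Sinkhorn algorithm to jointly encode multiple sets of marginals; the code below shows only the classic single-pair Sinkhorn normalization it builds on (the name sinkhorn_layer and the parameters tau and n_iters are illustrative assumptions, not the paper's API):

```python
import torch

def sinkhorn_layer(scores, row_marg, col_marg, tau=0.1, n_iters=30):
    """Classic Sinkhorn normalization: starting from exp(scores / tau),
    alternately rescale rows and columns until they match the prescribed
    marginals.  Every step is differentiable, so the loop can sit inside
    a network.  scores: (m, n); row_marg: (m,) and col_marg: (n,),
    assumed to have equal total mass."""
    P = torch.exp(scores / tau)
    for _ in range(n_iters):
        P = P * (row_marg / P.sum(dim=1)).unsqueeze(1)  # fix row sums
        P = P * (col_marg / P.sum(dim=0)).unsqueeze(0)  # fix column sums
    return P
```

For the equilibrium-network entry, a deep equilibrium model computes its output as a fixed point z* = f(z*, x) of a single nonlinear layer f. The naive iteration below conveys the idea only; practical deep equilibrium models use quasi-Newton solvers such as Broyden's method and differentiate through the fixed point implicitly, both of which this sketch omits:

```python
import torch

def deq_fixed_point(f, x, z_init, n_iters=50, tol=1e-4):
    """Naive fixed-point iteration z <- f(z, x) for a deep equilibrium
    model: stop once successive iterates are close enough."""
    z = z_init
    for _ in range(n_iters):
        z_next = f(z, x)
        if (z_next - z).norm() < tol:
            return z_next
        z = z_next
    return z
```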