Component Tree Loss Function: Definition and Optimization
- URL: http://arxiv.org/abs/2101.08063v1
- Date: Wed, 20 Jan 2021 10:55:37 GMT
- Title: Component Tree Loss Function: Definition and Optimization
- Authors: Benjamin Perret (LIGM), Jean Cousty (LIGM)
- Abstract summary: We show how the altitudes associated with the nodes of such hierarchical image representations can be differentiated with respect to the image pixel values.
This feature is used to design a generic loss function that can select or discard image maxima based on various attributes such as extinction values.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we propose a method to design loss functions based on
component trees which can be optimized by gradient descent algorithms and which
are therefore usable in conjunction with recent machine learning approaches
such as neural networks. We show how the altitudes associated with the nodes of
such hierarchical image representations can be differentiated with respect to
the image pixel values. This feature is used to design a generic loss function
that can select or discard image maxima based on various attributes such as
extinction values. The possibilities of the proposed method are demonstrated on
simulated and real image filtering tasks.
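The core idea of the abstract can be illustrated with a toy 1-D sketch (this is not the paper's component-tree construction; the function name and threshold rule are illustrative): the *selection* of maxima is discrete and treated as constant, but the maxima *values* — the node altitudes — vary smoothly with the input, so a loss over selected maxima admits a gradient almost everywhere.

```python
import numpy as np

def discard_small_maxima_loss(x, threshold):
    """Toy 1-D analogue: sum the heights of strict local maxima that
    fall below `threshold` (maxima we want to discard), and return the
    loss together with its gradient with respect to x.  The selection
    mask is piecewise constant; the gradient flows only through the
    maxima values themselves.
    """
    inner = x[1:-1]
    is_max = (inner > x[:-2]) & (inner > x[2:])   # strict local maxima
    small = is_max & (inner < threshold)          # maxima to discard
    loss = inner[small].sum()
    grad = np.zeros_like(x)
    grad[1:-1][small] = 1.0   # d loss / d x_i = 1 on each selected maximum
    return loss, grad

x = np.array([0., 2., 0., 5., 0., 1., 0.])
loss, grad = discard_small_maxima_loss(x, 3.0)
# maxima are 2.0, 5.0, 1.0; the two below the threshold give loss 3.0,
# and the gradient is 1 exactly at those two positions
```

Gradient descent on such a loss flattens the spurious low maxima while leaving the prominent one untouched, which is the filtering behaviour the paper obtains with component trees and extinction values.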
Related papers
- Restoring Images in Adverse Weather Conditions via Histogram Transformer [75.74328579778049]
We propose an efficient Histogram Transformer (Histoformer) for restoring images affected by adverse weather.
It is powered by a mechanism dubbed histogram self-attention, which sorts and segments spatial features into intensity-based bins.
To boost histogram self-attention, we present a dynamic-range convolution that enables conventional convolution to operate over similar pixels.
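The intensity-based binning described above can be sketched as follows (a simplified stand-in, not Histoformer's actual implementation; the function name and equal-size bins are assumptions):

```python
import numpy as np

def intensity_bin_partition(features, num_bins):
    """Sort flattened spatial features by mean intensity and split the
    sorted sequence into (roughly) equal-size bins, so that attention
    can later be computed within each bin of similar-intensity pixels.
    features: (N, C) array of N spatial positions with C channels.
    """
    intensity = features.mean(axis=1)
    order = np.argsort(intensity)          # ascending intensity
    return np.array_split(features[order], num_bins)

feats = np.arange(12, dtype=float).reshape(6, 2)
bins = intensity_bin_partition(feats, 3)   # 3 bins of 2 positions each
```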
arXiv Detail & Related papers (2024-07-14T11:59:22Z)
- Strong and Controllable Blind Image Decomposition [57.682079186903195]
Blind image decomposition aims to decompose all components present in an image.
Users might want to retain certain degradations, such as watermarks, for copyright protection.
We design an architecture named controllable blind image decomposition network.
arXiv Detail & Related papers (2024-03-15T17:59:44Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Discriminative feature encoding for intrinsic image decomposition [16.77439691640257]
Intrinsic image decomposition is an important and long-standing computer vision problem.
This work takes advantage of deep learning, and shows that it can solve this challenging computer vision problem with high efficiency.
arXiv Detail & Related papers (2022-09-25T05:51:49Z) - Dissecting the impact of different loss functions with gradient surgery [7.001832294837659]
Pair-wise loss is an approach to metric learning that learns a semantic embedding by optimizing a loss function.
Here we decompose the gradient of these loss functions into components that relate to how they push the relative feature positions of the anchor-positive and anchor-negative pairs.
arXiv Detail & Related papers (2022-01-27T03:55:48Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Convolutional Neural Networks from Image Markers [62.997667081978825]
Feature Learning from Image Markers (FLIM) was recently proposed to estimate convolutional filters, with no backpropagation, from strokes drawn by a user on very few images.
This paper extends FLIM for fully connected layers and demonstrates it on different image classification problems.
The results show that FLIM-based convolutional neural networks can outperform the same architecture trained from scratch by backpropagation.
arXiv Detail & Related papers (2020-12-15T22:58:23Z) - Image Inpainting with Learnable Feature Imputation [8.293345261434943]
A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image.
We propose (layer-wise) feature imputation of the missing input values to a convolution.
We present comparisons with the current state of the art on CelebA-HQ and Places2 to validate our model.
arXiv Detail & Related papers (2020-11-02T16:05:32Z) - A Loss Function for Generative Neural Networks Based on Watson's
Perceptual Model [14.1081872409308]
Training Variational Autoencoders (VAEs) to generate realistic imagery requires a loss function that reflects human perception of image similarity.
We propose such a loss function based on Watson's perceptual model, which computes a weighted distance in frequency space and accounts for luminance and contrast masking.
In experiments, VAEs trained with the new loss function generated realistic, high-quality image samples.
arXiv Detail & Related papers (2020-06-26T15:36:11Z) - Deeply Learned Spectral Total Variation Decomposition [8.679020335206753]
We present a neural network approximation of a non-linear spectral decomposition.
We report up to four orders of magnitude ($\times 10{,}000$) speedup in the processing of megapixel-size images.
arXiv Detail & Related papers (2020-06-17T17:10:43Z) - Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
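The paper above enforces sparsity *structurally* on hidden neurons; a common soft alternative, shown here only as an illustrative approximation (the function name and weight are assumptions), is an L1 penalty on the activations added to the task loss:

```python
import numpy as np

def l1_activation_penalty(activations, weight=1e-4):
    """Generic soft sparsity regulariser: an L1 penalty on hidden
    activations that is added to the task loss, pushing most neuron
    outputs toward zero during training.
    """
    return weight * np.abs(activations).sum()

h = np.array([[0.0, 2.0, -1.0], [0.5, 0.0, 0.0]])  # hidden activations
penalty = l1_activation_penalty(h, weight=0.1)     # 0.1 * 3.5 = 0.35
```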
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.