Probabilistic Pixel-Adaptive Refinement Networks
- URL: http://arxiv.org/abs/2003.14407v1
- Date: Tue, 31 Mar 2020 17:53:21 GMT
- Title: Probabilistic Pixel-Adaptive Refinement Networks
- Authors: Anne S. Wannenwetsch, Stefan Roth
- Abstract summary: Image-adaptive post-processing methods have proven beneficial by leveraging the high-resolution input image(s) as guidance data.
We introduce probabilistic pixel-adaptive convolutions (PPACs), which not only depend on image guidance data for filtering, but also respect the reliability of per-pixel predictions.
We demonstrate their utility in refinement networks for optical flow and semantic segmentation, where PPACs lead to a clear reduction in boundary artifacts.
- Score: 21.233814875276803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Encoder-decoder networks have found widespread use in various dense
prediction tasks. However, the strong reduction of spatial resolution in the
encoder leads to a loss of location information as well as boundary artifacts.
To address this, image-adaptive post-processing methods have proven beneficial
by leveraging the high-resolution input image(s) as guidance data. We extend
such approaches by considering an important orthogonal source of information:
the network's confidence in its own predictions. We introduce probabilistic
pixel-adaptive convolutions (PPACs), which not only depend on image guidance
data for filtering, but also respect the reliability of per-pixel predictions.
As such, PPACs allow for image-adaptive smoothing and simultaneously propagate
pixels of high confidence into less reliable regions, while respecting object
boundaries. We demonstrate their utility in refinement
networks for optical flow and semantic segmentation, where PPACs lead to a
clear reduction in boundary artifacts. Moreover, our proposed refinement step
is able to substantially improve the accuracy on various widely used
benchmarks.
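The abstract describes PPACs as convolutions whose weights depend both on image guidance features and on per-pixel confidences, so that reliable predictions propagate into unreliable regions without crossing object boundaries. Below is a minimal PyTorch sketch of such a confidence- and guidance-weighted filtering step; the function name, the fixed Gaussian guidance kernel, and the bandwidth parameter are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn.functional as F

def ppac_refine(pred, guidance, confidence, kernel_size=5, sigma=0.1):
    """Confidence- and guidance-weighted local filtering (illustrative sketch).

    pred:       (B, C, H, W) dense prediction, e.g. optical flow or class logits
    guidance:   (B, G, H, W) high-resolution guidance features, e.g. the input image
    confidence: (B, 1, H, W) per-pixel reliability in [0, 1]
    """
    b, _, h, w = pred.shape
    k, pad = kernel_size, kernel_size // 2

    def neighbors(x):
        # Gather the k*k neighborhood of every pixel: (B, ch, k*k, H, W).
        ch = x.shape[1]
        return F.unfold(x, kernel_size=k, padding=pad).view(b, ch, k * k, h, w)

    pred_n, guide_n, conf_n = neighbors(pred), neighbors(guidance), neighbors(confidence)

    # Image-adaptive Gaussian kernel on guidance-feature differences
    # (in the spirit of pixel-adaptive convolutions).
    diff = guide_n - guidance.unsqueeze(2)                      # (B, G, k*k, H, W)
    guide_w = torch.exp(-0.5 * (diff ** 2).sum(1, keepdim=True) / sigma ** 2)

    # Down-weight unreliable neighbors so that high-confidence pixels
    # propagate into less reliable regions without crossing image edges.
    weights = guide_w * conf_n                                  # (B, 1, k*k, H, W)
    weights = weights / (weights.sum(dim=2, keepdim=True) + 1e-8)

    return (weights * pred_n).sum(dim=2)                        # (B, C, H, W)
```
A refinement call might then look like refined = ppac_refine(flow, image, conf), with conf produced by a small confidence head; the paper instead learns these components within dedicated refinement networks for optical flow and semantic segmentation.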
Related papers
- Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) achieves accuracy among the top entries on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z) - USegScene: Unsupervised Learning of Depth, Optical Flow and Ego-Motion
with Semantic Guidance and Coupled Networks [31.600708674008384]
USegScene is a framework for semantically guided unsupervised learning of depth, optical flow and ego-motion estimation for stereo camera images.
We present results on the popular KITTI dataset and show that our approach outperforms other methods by a large margin.
arXiv Detail & Related papers (2022-07-15T13:25:47Z) - A Probabilistic Deep Image Prior for Computational Tomography [0.19573380763700707]
Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty.
We construct a Bayesian prior for tomographic reconstruction, which combines the classical total variation (TV) regulariser with the modern deep image prior (DIP).
For the inference, we develop an approach based on the linearised Laplace method, which is scalable to high-dimensional settings.
arXiv Detail & Related papers (2022-02-28T14:47:14Z) - PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present the Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - A Novel Upsampling and Context Convolution for Image Semantic
Segmentation [0.966840768820136]
Recent methods for semantic segmentation often employ an encoder-decoder structure using deep convolutional neural networks.
We propose a dense upsampling convolution method based on guided filtering to effectively preserve the spatial information of the image in the network.
We report a new record of 82.86% and 81.62% pixel accuracy on the ADE20K and Pascal-Context benchmark datasets, respectively.
arXiv Detail & Related papers (2021-03-20T06:16:42Z) - AINet: Association Implantation for Superpixel Segmentation [82.21559299694555]
We propose a novel Association Implantation (AI) module to enable the network to explicitly capture the relations between a pixel and its surrounding grids.
Our method not only achieves state-of-the-art performance but also maintains satisfactory inference efficiency.
arXiv Detail & Related papers (2021-01-26T10:40:13Z) - An Empirical Method to Quantify the Peripheral Performance Degradation
in Deep Networks [18.808132632482103]
The image-border effects of convolutional neural network (CNN) kernels compound with each convolutional layer.
Deeper and deeper networks combined with stride-based down-sampling mean that the propagation of this border region can end up covering a non-negligible portion of the image.
Our dataset is constructed by inserting objects into high resolution backgrounds, thereby allowing us to crop sub-images which place target objects at specific locations relative to the image border.
By probing the behaviour of Mask R-CNN across a selection of target locations, we see clear patterns of performance degradation near the image boundary, and in particular in the image corners.
arXiv Detail & Related papers (2020-12-04T18:00:47Z) - Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z) - Deformable spatial propagation network for depth completion [2.5306673456895306]
We propose a deformable spatial propagation network (DSPN) to adaptively generate a different receptive field and affinity matrix for each pixel.
It allows the network to obtain information from much fewer but more relevant pixels for propagation.
arXiv Detail & Related papers (2020-07-08T16:39:50Z) - Resolution Adaptive Networks for Efficient Inference [53.04907454606711]
We propose a novel Resolution Adaptive Network (RANet), which is inspired by the intuition that low-resolution representations are sufficient for classifying "easy" inputs.
In RANet, the input images are first routed to a lightweight sub-network that efficiently extracts low-resolution representations.
High-resolution paths in the network maintain the capability to recognize the "hard" samples.
arXiv Detail & Related papers (2020-03-16T16:54:36Z)
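The RANet entry above describes routing inputs through a lightweight low-resolution sub-network first and reserving high-resolution paths for "hard" samples. A minimal sketch of such confidence-gated, resolution-adaptive inference is given below; the module layout, the threshold, and all names are hypothetical illustrations, not the RANet architecture itself.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResolutionAdaptiveClassifier(nn.Module):
    """Confidence-gated two-path classifier (hypothetical sketch)."""

    def __init__(self, num_classes: int = 10, threshold: float = 0.9):
        super().__init__()
        self.threshold = threshold
        # Lightweight path operating on a down-sampled input ("easy" samples).
        self.low_res = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes),
        )
        # Heavier path operating at full resolution ("hard" samples).
        self.high_res = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
        )

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cheap prediction on a low-resolution copy of the input.
        low = F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False)
        logits = self.low_res(low)
        confidence, _ = F.softmax(logits, dim=1).max(dim=1)
        # Re-run only low-confidence samples through the expensive path.
        hard = confidence < self.threshold
        if hard.any():
            logits[hard] = self.high_res(x[hard])
        return logits
```
At inference a batch can be passed directly, e.g. ResolutionAdaptiveClassifier()(torch.randn(8, 3, 224, 224)); only samples whose low-resolution confidence falls below the threshold pay the full-resolution cost. Training such a model typically supervises all exits jointly, whereas the hard routing shown here is an inference-time shortcut.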