Training and Predicting Visual Error for Real-Time Applications
- URL: http://arxiv.org/abs/2310.09125v1
- Date: Fri, 13 Oct 2023 14:14:00 GMT
- Title: Training and Predicting Visual Error for Real-Time Applications
- Authors: João Libório Cardoso, Bernhard Kerbl, Lei Yang, Yury Uralsky,
Michael Wimmer
- Abstract summary: We explore the abilities of convolutional neural networks to predict a variety of visual metrics without requiring either reference or rendered images.
Our solution combines image-space information that is readily available in most state-of-the-art deferred shading pipelines with reprojection from previous frames to enable an adequate estimate of visual errors.
- Score: 6.687091041822445
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Visual error metrics play a fundamental role in the quantification of
perceived image similarity. Most recently, use cases for them in real-time
applications have emerged, such as content-adaptive shading and shading reuse
to increase performance and improve efficiency. A wide range of different
metrics has been established, with the most sophisticated being capable of
capturing the perceptual characteristics of the human visual system. However,
their complexity, computational expense, and reliance on reference images to
compare against prevent their generalized use in real time, restricting such
applications to using only the simplest available metrics. In this work, we
explore the abilities of convolutional neural networks to predict a variety of
visual metrics without requiring either reference or rendered images.
Specifically, we train and deploy a neural network to estimate the visual error
resulting from reusing shading or using reduced shading rates. The resulting
models account for 70%-90% of the variance while achieving up to an order of
magnitude faster computation times. Our solution combines image-space
information that is readily available in most state-of-the-art deferred shading
pipelines with reprojection from previous frames to enable an adequate estimate
of visual errors, even in previously unseen regions. We describe a suitable
convolutional network architecture and considerations for data preparation for
training. We demonstrate the capability of our network to predict complex error
metrics at interactive rates in a real-time application that implements
content-adaptive shading in a deferred pipeline. Depending on the portion of
unseen image regions, our approach can achieve up to $2\times$ performance
compared to state-of-the-art methods.
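To make the approach concrete, below is a minimal sketch in PyTorch of the kind of network the abstract describes: a small CNN that consumes G-buffer channels plus a reprojection of the previous frame and outputs one error estimate per screen tile. The channel layout, layer sizes, and tile size are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (not the paper's exact architecture): a small CNN that maps
# readily available G-buffer channels plus a reprojected previous frame to a
# per-tile visual-error estimate. Channel counts and layer sizes are assumptions.
import torch
import torch.nn as nn

class VisualErrorNet(nn.Module):
    def __init__(self, in_channels=10):
        super().__init__()
        # Three stride-2 convolutions downsample to one prediction per 8x8 tile.
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),  # scalar error estimate per tile
        )

    def forward(self, gbuffer, reprojected):
        # gbuffer: (B, 7, H, W), e.g. depth, normals, albedo
        # reprojected: (B, 3, H, W) previous frame warped to the current view
        x = torch.cat([gbuffer, reprojected], dim=1)
        return self.net(x)

# Training targets would be a reference metric computed offline between
# full-rate and reduced-rate renders, pooled per tile.
model = VisualErrorNet()
pred = model(torch.randn(1, 7, 256, 256), torch.randn(1, 3, 256, 256))
print(pred.shape)  # (1, 1, 32, 32): one error estimate per 8x8 tile
```

At runtime, such per-tile estimates would drive the shading-rate decision in a content-adaptive shading pass of a deferred pipeline.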
Related papers
- Relearning Forgotten Knowledge: on Forgetting, Overfit and Training-Free
Ensembles of DNNs [9.010643838773477]
We introduce a novel score for quantifying overfit, which monitors the forgetting rate of deep models on validation data.
We show that overfit can occur with and without a decrease in validation accuracy, and may be more common than previously appreciated.
We use our observations to construct a new ensemble method, based solely on the training history of a single network, which provides significant improvement without any additional cost in training time.
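As a rough illustration of such a score (the paper's exact formulation may differ), the sketch below monitors how often validation samples flip from correct to incorrect between epochs:

```python
# Hedged sketch of the idea: track, per epoch, which validation samples a model
# gets right, and measure the rate at which previously-correct samples become
# wrong ("forgetting") as training proceeds.
import numpy as np

def forgetting_rate(correct_history):
    """correct_history: (epochs, n_samples) boolean matrix of per-epoch
    validation correctness. Returns the per-transition fraction of samples
    that flip from correct to incorrect."""
    prev = correct_history[:-1]
    curr = correct_history[1:]
    forgotten = prev & ~curr           # correct before, wrong now
    return forgotten.mean(axis=1)      # one rate per epoch transition

# Example: 4 epochs, 5 validation samples
hist = np.array([[1, 1, 0, 1, 0],
                 [1, 1, 1, 1, 0],
                 [1, 0, 1, 0, 1],
                 [1, 0, 1, 1, 1]], dtype=bool)
print(forgetting_rate(hist))  # rising values would suggest overfit
```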
arXiv Detail & Related papers (2023-10-17T09:22:22Z)
- TIDE: Temporally Incremental Disparity Estimation via Pattern Flow in Structured Light System [17.53719804060679]
TIDE-Net is a learning-based technique for disparity computation in mono-camera structured light systems.
We exploit the deformation of projected patterns (named pattern flow) on captured image sequences to model the temporal information.
For each incoming frame, our model fuses correlation volumes (from current frame) and disparity (from former frame) warped by pattern flow.
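A toy sketch of that fusion step, with the warping and blending details as our own assumptions rather than TIDE-Net's learned fusion:

```python
# Illustrative sketch: warp the previous frame's disparity by the observed
# pattern flow, then fuse it with per-pixel matching costs from the current frame.
import numpy as np

def warp_disparity(prev_disp, flow):
    """Backward-warp previous disparity by pattern flow (nearest neighbour)."""
    h, w = prev_disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return prev_disp[src_y, src_x]

def fuse(correlation, warped_disp, weight=0.5):
    """correlation: (H, W, D) matching scores over disparity hypotheses.
    Blend the current best hypothesis with the temporally propagated one."""
    current_disp = correlation.argmax(axis=-1).astype(float)
    return weight * current_disp + (1 - weight) * warped_disp
```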
arXiv Detail & Related papers (2023-10-13T07:55:33Z)
- Augmenting Deep Learning Adaptation for Wearable Sensor Data through Combined Temporal-Frequency Image Encoding [4.458210211781739]
We present a novel modified recurrence-plot-based image representation that seamlessly integrates both temporal and frequency domain information.
We evaluate the proposed method using accelerometer-based activity recognition data and a pretrained ResNet model, and demonstrate its superior performance compared to existing approaches.
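A hedged sketch of one way such a combined encoding could look; the recurrence-style channel and the tiled spectrum channel below are illustrative stand-ins for the paper's exact encoding:

```python
# Sketch: stack a recurrence-plot-style image of the raw signal window with a
# frequency channel built from its magnitude spectrum. Channel choices are
# assumptions, not the paper's encoding.
import numpy as np

def encode_window(signal, n=64):
    # Resample the window to a fixed length n.
    signal = np.interp(np.linspace(0, len(signal) - 1, n),
                       np.arange(len(signal)), signal)
    # Temporal channel: pairwise-distance (recurrence-plot style) image.
    temporal = np.abs(signal[:, None] - signal[None, :])
    # Frequency channel: magnitude spectrum tiled to the same shape.
    spectrum = np.abs(np.fft.rfft(signal))[:n // 2]
    freq = np.tile(spectrum / (spectrum.max() + 1e-8), (n, 2))
    return np.stack([temporal / (temporal.max() + 1e-8), freq])

img = encode_window(np.sin(np.linspace(0, 20, 200)))
print(img.shape)  # (2, 64, 64): image fed to a pretrained CNN such as ResNet
```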
arXiv Detail & Related papers (2023-07-03T09:29:27Z)
- FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks [33.489890950757975]
FoVolNet is a method to significantly increase the performance of volume data visualization.
We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full-frame using a deep neural network.
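The sparse-sampling half of such a pipeline can be sketched as follows; the falloff function is an assumption, and the reconstruction network is omitted:

```python
# Sketch of foveated sparse sampling: sample densely near the focal point and
# increasingly sparsely with eccentricity. The Gaussian falloff is an assumption.
import numpy as np

def foveated_mask(h, w, focus, sigma=0.25, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = focus
    # Sampling probability decays with distance from the focal point.
    d2 = ((ys - fy) ** 2 + (xs - fx) ** 2) / (sigma * max(h, w)) ** 2
    prob = np.exp(-d2)
    return rng.random((h, w)) < prob  # True where a sample is actually traced

mask = foveated_mask(240, 320, focus=(120, 160))
print(mask.mean())  # fraction of pixels sampled; the network fills in the rest
```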
arXiv Detail & Related papers (2022-09-20T19:48:56Z)
- ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization [71.91942002659795]
We investigate and adapt network quantization techniques to accelerate inference and enable use on compute-limited platforms.
ZippyPoint, our efficient quantized network with binary descriptors, improves the network runtime speed, the descriptor matching speed, and the 3D model size.
These improvements come at a minor performance degradation as evaluated on the tasks of homography estimation, visual localization, and map-free visual relocalization.
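The matching speedup from binary descriptors comes from replacing float distances with XOR-and-popcount Hamming distances, as this sketch shows; the simple sign binarization here is an assumption, not ZippyPoint's quantization scheme:

```python
# Sketch: binarize descriptors, then brute-force match via Hamming distance.
import numpy as np

def binarize(desc):
    """Quantize float descriptors to packed binary (sign bit per dimension)."""
    return np.packbits(desc > 0, axis=1)

def hamming_match(a, b):
    """Popcount of XOR between all descriptor pairs; returns best match in b
    for each descriptor in a."""
    xor = a[:, None, :] ^ b[None, :, :]
    dists = np.unpackbits(xor, axis=2).sum(axis=2)
    return dists.argmin(axis=1)

a = binarize(np.random.randn(100, 256))
b = binarize(np.random.randn(120, 256))
print(hamming_match(a, b).shape)  # (100,)
```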
arXiv Detail & Related papers (2022-03-07T18:59:03Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality drops significantly when only sparse inputs are given.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
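As an illustration of patch-based geometry regularization in this spirit (the paper also regularizes appearance; this sketch shows only a depth-smoothness term):

```python
# Sketch: penalize depth differences between neighbouring samples in small
# patches rendered from random unobserved viewpoints.
import torch

def depth_smoothness_loss(depth_patch):
    """depth_patch: (B, P, P) depths of patches rendered from unseen poses."""
    dx = (depth_patch[:, :, 1:] - depth_patch[:, :, :-1]) ** 2
    dy = (depth_patch[:, 1:, :] - depth_patch[:, :-1, :]) ** 2
    return dx.mean() + dy.mean()

loss = depth_smoothness_loss(torch.rand(16, 8, 8))  # added to the photometric loss
```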
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- Leveraging Self-Supervision for Cross-Domain Crowd Counting [71.75102529797549]
State-of-the-art methods for counting people in crowded scenes rely on deep networks to estimate crowd density.
We train our network to distinguish upside-down real images from regular ones and incorporate into it the ability to predict its own uncertainty.
This yields an algorithm that consistently outperforms state-of-the-art cross-domain crowd counting methods without any extra computation at inference time.
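A minimal sketch of the flip-recognition pretext task as we read it; the backbone and head below are trivial stand-ins for the paper's density-estimation network:

```python
# Sketch: an auxiliary self-supervised loss that trains the network to tell
# upright real images from vertically flipped ones, usable on unlabeled data.
import torch
import torch.nn as nn
import torch.nn.functional as F

def flip_pretext_loss(backbone, head, images):
    """images: (B, 3, H, W) unlabeled crowd photos."""
    flipped = torch.flip(images, dims=[2])          # upside-down copies
    batch = torch.cat([images, flipped], dim=0)
    labels = torch.cat([torch.zeros(len(images)),
                        torch.ones(len(images))]).long()
    logits = head(backbone(batch).mean(dim=(2, 3))) # global-pooled features
    return F.cross_entropy(logits, labels)

backbone = nn.Conv2d(3, 16, 3, padding=1)           # stand-in feature extractor
head = nn.Linear(16, 2)
loss = flip_pretext_loss(backbone, head, torch.rand(4, 3, 64, 64))
```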
arXiv Detail & Related papers (2021-03-30T12:37:55Z)
- Transformer Guided Geometry Model for Flow-Based Unsupervised Visual Odometry [38.20137500372927]
We propose a method consisting of two camera pose estimators that process information from pairwise images.
For image sequences, a Transformer-like structure is adopted to build a geometry model over a local temporal window.
A Flow-to-Flow Pose Estimator (F2FPE) is proposed to exploit the relationship between pairwise images.
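A very rough sketch of a windowed, Transformer-based aggregation over per-frame flow features; the layer sizes and the flow encoder below are assumptions, not the paper's design:

```python
# Sketch: encode each flow field in a local temporal window, aggregate with a
# Transformer encoder, and regress a 6-DoF pose per frame.
import torch
import torch.nn as nn

class WindowedPoseModel(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.flow_enc = nn.Sequential(  # encodes a (2, H, W) flow field
            nn.Conv2d(2, 32, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim))
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.pose_head = nn.Linear(feat_dim, 6)  # translation + rotation

    def forward(self, flows):  # flows: (B, T, 2, H, W) within a local window
        b, t = flows.shape[:2]
        feats = self.flow_enc(flows.flatten(0, 1)).view(b, t, -1)
        return self.pose_head(self.temporal(feats))  # (B, T, 6) per-frame poses

poses = WindowedPoseModel()(torch.rand(2, 5, 2, 64, 64))
```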
arXiv Detail & Related papers (2020-12-08T19:39:26Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
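The key insight can be written down directly: composing each reference pose with the predicted query-to-reference transform should yield the same absolute query pose, whichever reference is used. A sketch, assuming 4x4 matrix poses:

```python
# Sketch: penalize disagreement between per-reference estimates of the
# absolute query pose. The Frobenius-norm penalty here is an assumption.
import numpy as np

def consistency_loss(ref_poses, pred_rel):
    """ref_poses: list of (4, 4) world poses of reference images.
    pred_rel: list of (4, 4) predicted query-in-reference transforms."""
    absolutes = [R @ T for R, T in zip(ref_poses, pred_rel)]
    mean = np.mean(absolutes, axis=0)
    return sum(np.linalg.norm(A - mean) ** 2 for A in absolutes) / len(absolutes)

eye = np.eye(4)
print(consistency_loss([eye, eye], [eye, eye]))  # 0.0: perfectly consistent
```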
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
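A hedged sketch of the regression half of such a scheme: a small network mapping an RGBD object crop to low-order spherical-harmonic lighting coefficients. The architecture and SH parameterization are our assumptions; the paper's rendering-aware components are omitted.

```python
# Sketch: regress order-2 spherical-harmonic lighting (9 coefficients x RGB)
# from a 4-channel RGBD crop of a single object.
import torch
import torch.nn as nn

class LightEstimator(nn.Module):
    def __init__(self, sh_order=2):
        super().__init__()
        n_coeffs = (sh_order + 1) ** 2          # 9 coefficients for order 2
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_coeffs * 3))        # RGB per SH coefficient

    def forward(self, rgbd_crop):               # (B, 4, H, W)
        return self.net(rgbd_crop).view(-1, 9, 3)

sh = LightEstimator()(torch.rand(1, 4, 96, 96))
print(sh.shape)  # (1, 9, 3): lighting usable to shade virtual AR objects
```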
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments [57.33699905852397]
We propose an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed.
Our method simultaneously clusters the data while enforcing consistency between cluster assignments.
Our method can be trained with large and small batches and can scale to unlimited amounts of data.
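A minimal sketch of the swapped-prediction loss at the heart of SwAV; note the paper computes the target "codes" with a Sinkhorn-Knopp step, which is replaced by a plain softmax here for brevity:

```python
# Sketch: predict the cluster assignment of one augmented view from the other
# view's features, avoiding pairwise feature comparisons entirely.
import torch
import torch.nn.functional as F

def swapped_prediction_loss(z1, z2, prototypes, temp=0.1):
    """z1, z2: (B, D) L2-normalized features of two views; prototypes: (K, D)."""
    p1 = F.softmax(z1 @ prototypes.T / temp, dim=1)  # cluster scores, view 1
    p2 = F.softmax(z2 @ prototypes.T / temp, dim=1)  # cluster scores, view 2
    q1, q2 = p1.detach(), p2.detach()                # stand-in target "codes"
    # Swap: predict view 2's code from view 1 and vice versa.
    return -0.5 * ((q2 * p1.log()).sum(1) + (q1 * p2.log()).sum(1)).mean()

z1 = F.normalize(torch.randn(8, 64), dim=1)
z2 = F.normalize(torch.randn(8, 64), dim=1)
protos = F.normalize(torch.randn(16, 64), dim=1)
print(swapped_prediction_loss(z1, z2, protos))
```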
arXiv Detail & Related papers (2020-06-17T14:00:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.