Sparsity Agnostic Depth Completion
- URL: http://arxiv.org/abs/2212.00790v1
- Date: Thu, 1 Dec 2022 18:59:46 GMT
- Title: Sparsity Agnostic Depth Completion
- Authors: Andrea Conti, Matteo Poggi and Stefano Mattoccia
- Abstract summary: State-of-the-art approaches yield accurate results only when processing a specific density and distribution of input points.
Our solution is robust to uneven distributions and extremely low densities never witnessed during training.
- Score: 39.116228971420874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel depth completion approach agnostic to the sparsity of depth points, which is likely to vary in many practical applications. State-of-the-art approaches yield accurate results only when processing a specific density and distribution of input points, i.e. the one observed during training, narrowing their deployment in real use cases. In contrast, our solution is robust to uneven distributions and extremely low densities never witnessed during training. Experimental results on standard indoor and outdoor benchmarks highlight the robustness of our framework, which achieves accuracy comparable to state-of-the-art methods when tested with the density and distribution seen during training, while being far more accurate in the other cases. Our pretrained models and further material are available on our project page.
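The listing carries no code, but the evaluation setting the abstract describes, feeding a model depth points at densities and distributions unseen during training, is easy to illustrate. Below is a minimal NumPy sketch (function name and options are hypothetical, not from the authors' release) of subsampling a dense depth map into sparse input of controllable density and spatial pattern:

```python
import numpy as np

def sample_sparse_depth(dense_depth, density=0.01, pattern="uniform", rng=None):
    """Subsample a dense depth map to mimic sparse sensor input.

    dense_depth: (H, W) float array, 0 where depth is invalid.
    density:     fraction of valid pixels to keep (e.g. 1e-4 for extreme sparsity).
    pattern:     "uniform" draws points everywhere; "clustered" draws them only
                 from a random sub-window, giving an uneven spatial distribution.
    """
    rng = rng or np.random.default_rng()
    valid = dense_depth > 0
    if pattern == "clustered":
        h, w = dense_depth.shape
        y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
        window = np.zeros_like(valid)
        window[y0:y0 + h // 2, x0:x0 + w // 2] = True
        valid &= window
    ys, xs = np.nonzero(valid)
    n_keep = max(1, int(density * len(ys)))
    keep = rng.choice(len(ys), size=n_keep, replace=False)
    sparse = np.zeros_like(dense_depth)
    sparse[ys[keep], xs[keep]] = dense_depth[ys[keep], xs[keep]]
    return sparse
```

Evaluating a completion network while sweeping `density` over several orders of magnitude, and toggling `pattern`, reproduces the kind of train/test mismatch the paper targets.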
Related papers
- Maximize margins for robust splicing detection [9.462149599416264]
We show that the same deep architecture can react very differently to unseen post-processing depending on the learned weights.
Our experiments reveal a strong correlation between the distribution of latent margins and a detector's ability to generalize to post-processed images.
arXiv Detail & Related papers (2025-07-28T08:20:46Z)
- Generalizing to any diverse distribution: uniformity, gentle finetuning and rebalancing [55.791818510796645]
We aim to develop models that generalize well to any diverse test distribution, even if the latter deviates significantly from the training data.
Various approaches like domain adaptation, domain generalization, and robust optimization attempt to address the out-of-distribution challenge.
We adopt a more conservative perspective by accounting for the worst-case error across all sufficiently diverse test distributions within a known domain.
arXiv Detail & Related papers (2024-10-08T12:26:48Z)
- Enhancing Out-of-Distribution Detection with Multitesting-based Layer-wise Feature Fusion [11.689517005768046]
Out-of-distribution samples may exhibit shifts in local or global features compared to the training distribution.
We propose a novel framework, Multitesting-based Layer-wise Out-of-Distribution (OOD) Detection.
Our scheme effectively enhances the performance of out-of-distribution detection when compared to baseline methods.
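As a rough illustration of how per-layer evidence can be fused by multiple testing, here is a generic Fisher-combination sketch; this is not the paper's exact procedure, and all names are hypothetical:

```python
import numpy as np
from scipy import stats

def layerwise_ood_pvalue(test_feats, train_feats_per_layer):
    """Fuse per-layer OOD evidence with Fisher's method.

    test_feats:            list of per-layer feature vectors for one sample.
    train_feats_per_layer: list of (N, D) in-distribution feature matrices.
    Returns a combined p-value; small values suggest the sample is OOD.
    """
    pvals = []
    for f, train in zip(test_feats, train_feats_per_layer):
        mu = train.mean(axis=0)
        null_scores = np.linalg.norm(train - mu, axis=1)  # in-distribution scores
        score = np.linalg.norm(f - mu)                    # test-sample score
        # Empirical p-value: fraction of null scores at least as extreme.
        pvals.append((1 + np.sum(null_scores >= score)) / (1 + len(null_scores)))
    fisher_stat = -2.0 * np.sum(np.log(pvals))
    return stats.chi2.sf(fisher_stat, df=2 * len(pvals))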
arXiv Detail & Related papers (2024-03-16T04:35:04Z)
- Calibration-then-Calculation: A Variance Reduced Metric Framework in Deep Click-Through Rate Prediction Models [16.308958212406583]
There is a lack of focus on evaluating the performance of deep learning pipelines.
With the increased use of large datasets and complex models, the training process is run only once and the result is compared to previous benchmarks.
Traditional solutions, such as running the training process multiple times, are often infeasible due to computational constraints.
We introduce a novel metric framework, the Calibrated Loss Metric, designed to address this issue by reducing the variance present in its conventional counterpart.
arXiv Detail & Related papers (2024-01-30T02:38:23Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
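A hedged sketch of the general idea, confidence pulled toward a measure of prediction quality during training; the paper's actual loss formulation differs, and all names here are hypothetical:

```python
import torch

def calibration_aux_loss(confidences, ious, weight=0.5):
    """Auxiliary train-time term pulling each detection's class confidence
    toward the IoU of its box with the matched ground truth, so that
    confidence tracks prediction quality.

    confidences: (N,) predicted class confidences in [0, 1].
    ious:        (N,) IoU with matched ground truth (0 for unmatched boxes).
    """
    return weight * torch.mean((confidences - ious) ** 2)

# total_loss = detection_loss + calibration_aux_loss(conf, iou)
```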
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Robust Calibration with Multi-domain Temperature Scaling [86.07299013396059]
We develop a systematic calibration model to handle distribution shifts by leveraging data from multiple domains.
Our proposed method -- multi-domain temperature scaling -- uses the heterogeneity across the domains to improve calibration under distribution shift.
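For intuition, here is a simplified stand-in (not the authors' algorithm; all names are hypothetical) that fits per-domain temperatures and then selects a single temperature by worst-case validation NLL across domains:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

def nll(logits, labels, t):
    """Negative log-likelihood of labels under temperature-scaled logits."""
    logp = log_softmax(logits / t, axis=1)
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Standard single-domain temperature scaling: minimize validation NLL."""
    res = minimize_scalar(lambda t: nll(logits, labels, t),
                          bounds=(0.05, 20.0), method="bounded")
    return res.x

def robust_temperature(domain_logits, domain_labels):
    """Pick one temperature minimizing the worst-case NLL over all domains."""
    temps = [fit_temperature(lg, lb) for lg, lb in zip(domain_logits, domain_labels)]
    grid = np.linspace(min(temps), max(temps), num=100)
    worst = [max(nll(lg, lb, t) for lg, lb in zip(domain_logits, domain_labels))
             for t in grid]
    return grid[int(np.argmin(worst))]
```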
arXiv Detail & Related papers (2022-06-06T17:32:12Z)
- Bayesian Framework for Gradient Leakage [8.583436410810203]
Federated learning is an established method for training machine learning models without sharing training data.
Recent work has shown that federated learning alone cannot guarantee data privacy, as shared gradients can still leak sensitive information.
We propose a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem.
arXiv Detail & Related papers (2021-11-08T18:35:40Z)
- Squeezing Backbone Feature Distributions to the Max for Efficient Few-Shot Learning [3.1153758106426603]
Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples.
We propose a novel transfer-based method which aims at processing the feature vectors so that they become closer to Gaussian-like distributions.
In the case of transductive few-shot learning, where unlabelled test samples are available during training, we also introduce an optimal-transport-inspired algorithm to further boost performance.
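A common concrete instance of such a Gaussian-izing transform is an element-wise power followed by unit normalization. The sketch below (hypothetical names, assumed beta = 0.5) shows the idea, though the paper's exact preprocessing may differ:

```python
import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    """Push non-negative (e.g. post-ReLU) backbone features toward a
    Gaussian-like shape: the element-wise power compresses heavy tails,
    then each vector is unit-normalized.

    features: (N, D) array of non-negative feature vectors.
    """
    f = np.power(features + eps, beta)
    return f / np.linalg.norm(f, axis=1, keepdims=True)
```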
arXiv Detail & Related papers (2021-10-18T16:29:17Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z)
- Multi-Loss Sub-Ensembles for Accurate Classification with Uncertainty Estimation [1.2891210250935146]
We propose an efficient method for uncertainty estimation in deep neural networks (DNNs) achieving high accuracy.
We keep inference time relatively low by leveraging the efficiency of the Deep-Sub-Ensembles method.
Our results show improved accuracy on the classification task and competitive results on several uncertainty measures.
arXiv Detail & Related papers (2020-10-05T10:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.