Great Models Think Alike: Improving Model Reliability via Inter-Model
Latent Agreement
- URL: http://arxiv.org/abs/2305.01481v1
- Date: Tue, 2 May 2023 15:02:17 GMT
- Title: Great Models Think Alike: Improving Model Reliability via Inter-Model
Latent Agreement
- Authors: Ailin Deng, Miao Xiong, Bryan Hooi
- Abstract summary: We estimate a model's reliability by measuring the agreement between its latent space and the latent space of a foundation model.
To overcome the incoherence between two different latent spaces (e.g., arbitrary rotations and different dimensionality), we design a neighborhood agreement measure between latent spaces.
We show that fusing neighborhood agreement into a model's predictive confidence in a post-hoc way significantly improves its reliability.
- Score: 27.067551390457567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reliable application of machine learning is of primary importance to the
practical deployment of deep learning methods. A fundamental challenge is that
models are often unreliable due to overconfidence. In this paper, we estimate a
model's reliability by measuring the agreement between its latent space and the
latent space of a foundation model. However, it is challenging to measure the
agreement between two different latent spaces due to their incoherence, e.g.,
arbitrary rotations and different dimensionality. To overcome this incoherence
issue, we design a neighborhood agreement measure
between latent spaces and find that this agreement is surprisingly
well-correlated with the reliability of a model's predictions. Further, we show
that fusing neighborhood agreement into a model's predictive confidence in a
post-hoc way significantly improves its reliability. Theoretical analysis and
extensive experiments on failure detection across various datasets verify the
effectiveness of our method on both in-distribution and out-of-distribution
settings.
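The pipeline described in the abstract (compute a per-sample agreement score between two latent spaces, then fuse it with the model's predictive confidence post hoc) can be made concrete with a short sketch. The code below assumes k-NN Jaccard overlap as the neighborhood agreement measure and a simple multiplicative fusion with the maximum softmax probability; the function names and the fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of inter-model neighborhood agreement, assuming k-NN
# Jaccard overlap as the agreement measure. Names are illustrative.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_agreement(z_model, z_foundation, k=10):
    """Per-sample Jaccard overlap between k-NN sets computed in each
    latent space. Only neighbor identities are compared, so the score
    is unaffected by rotations or mismatched dimensionality."""
    nn_m = NearestNeighbors(n_neighbors=k + 1).fit(z_model)
    nn_f = NearestNeighbors(n_neighbors=k + 1).fit(z_foundation)
    # Drop column 0: each point is its own nearest neighbor.
    idx_m = nn_m.kneighbors(z_model, return_distance=False)[:, 1:]
    idx_f = nn_f.kneighbors(z_foundation, return_distance=False)[:, 1:]
    agree = np.empty(len(z_model))
    for i, (a, b) in enumerate(zip(idx_m, idx_f)):
        inter = len(set(a) & set(b))
        agree[i] = inter / (2 * k - inter)  # Jaccard: |A∩B| / |A∪B|
    return agree

def fused_confidence(softmax_probs, agreement):
    """Post-hoc fusion: scale the maximum softmax probability by the
    agreement score (one simple fusion choice among many)."""
    return softmax_probs.max(axis=1) * agreement

# Hypothetical usage: z_model (n, 512), z_foundation (n, 768),
# probs (n, num_classes) from the task model's softmax output.
# conf = fused_confidence(probs, neighborhood_agreement(z_model, z_foundation))
```

Because the measure compares neighbor identities rather than coordinates, it sidesteps exactly the incoherence (arbitrary rotations, differing dimensionality) that makes a direct geometric comparison of the two spaces problematic.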
Related papers
- ReliOcc: Towards Reliable Semantic Occupancy Prediction via Uncertainty Learning [26.369237406972577]
Vision-centric semantic occupancy prediction plays a crucial role in autonomous driving.
There is still little research effort exploring the reliability of predicting semantic occupancy from cameras.
We propose ReliOcc, a method designed to enhance the reliability of camera-based occupancy networks.
arXiv Detail & Related papers (2024-09-26T16:33:16Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with surprisingly simple formations and without requiring extra modules or multiple inferences, can provide uncertainty estimations with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- Birds of a Feather Trust Together: Knowing When to Trust a Classifier via Adaptive Neighborhood Aggregation [30.34223543030105]
We show how NeighborAgg leverages two essential sources of information via adaptive neighborhood aggregation.
We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate.
arXiv Detail & Related papers (2022-11-29T18:43:15Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Improving the Reliability for Confidence Estimation [16.952133489480776]
Confidence estimation is a task that aims to evaluate the trustworthiness of the model's prediction output during deployment.
Previous works have outlined two important qualities that a reliable confidence estimation model should possess.
We propose a meta-learning framework that can simultaneously improve upon both qualities in a confidence estimation model.
arXiv Detail & Related papers (2022-10-13T06:34:23Z)
- Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-01-05T18:54:11Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Modal Uncertainty Estimation via Discrete Latent Representation [4.246061945756033]
We introduce a deep learning framework that learns the one-to-many mappings between the inputs and outputs, together with faithful uncertainty measures.
Our framework demonstrates significantly more accurate uncertainty estimation than the current state-of-the-art methods.
arXiv Detail & Related papers (2020-07-25T05:29:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.