Black-Box Saliency Map Generation Using Bayesian Optimisation
- URL: http://arxiv.org/abs/2001.11366v1
- Date: Thu, 30 Jan 2020 14:39:12 GMT
- Title: Black-Box Saliency Map Generation Using Bayesian Optimisation
- Authors: Mamuku Mokuwe, Michael Burke, Anna Sergeevna Bosman
- Abstract summary: Saliency maps are often used in computer vision to provide intuitive interpretations of what input regions a model has used to produce a specific prediction.
This work proposes an approach for saliency map generation for black-box models, where no access to model parameters is available.
- Score: 5.414308305392763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Saliency maps are often used in computer vision to provide intuitive
interpretations of what input regions a model has used to produce a specific
prediction. A number of approaches to saliency map generation are available,
but most require access to model parameters. This work proposes an approach for
saliency map generation for black-box models, where no access to model
parameters is available, using a Bayesian optimisation sampling method. The
approach aims to find the global salient image region responsible for a
particular (black-box) model's prediction. This is achieved by a sampling-based
approach to model perturbations that seeks to localise salient regions of an
image to the black-box model. Results show that the proposed approach to
saliency map generation outperforms grid-based perturbation approaches, and
performs similarly to gradient-based approaches which require access to model
parameters.
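The abstract describes the method only at a high level, so the following is a minimal illustrative sketch of a perturbation-based, Bayesian-optimisation-driven saliency search for a black-box classifier. The single square-occluder parameterisation, the `black_box_predict` callable, and the use of scikit-optimize's `gp_minimize` are assumptions made for illustration, not the paper's exact formulation.

```python
# Sketch: find the occluding patch that most reduces the black-box model's
# confidence in a target class, using Bayesian optimisation over patch
# position and size. Assumes `black_box_predict` returns class probabilities;
# no access to model parameters is needed.
import numpy as np
from skopt import gp_minimize
from skopt.space import Integer


def occlude(image, cx, cy, size, fill=0.0):
    """Return a copy of `image` with a square patch centred at (cx, cy) masked."""
    masked = image.copy()
    h, w = image.shape[:2]
    half = size // 2
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    masked[y0:y1, x0:x1] = fill
    return masked


def saliency_search(image, black_box_predict, target_class, n_calls=60):
    """Search for the most salient square region via Gaussian-process optimisation."""
    h, w = image.shape[:2]

    def objective(params):
        cx, cy, size = params
        # Lower target-class probability under occlusion => more salient region.
        probs = black_box_predict(occlude(image, int(cx), int(cy), int(size)))
        return float(probs[target_class])

    space = [Integer(0, w - 1, name="cx"),
             Integer(0, h - 1, name="cy"),
             Integer(8, min(h, w) // 2, name="size")]  # assumes image is at least ~32 px
    result = gp_minimize(objective, space, n_calls=n_calls, random_state=0)
    return result.x, result.fun  # best (cx, cy, size) and its occluded score
```

A saliency map can then be rendered by marking the returned region, or by aggregating the objective values of all evaluated perturbations.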
Related papers
- Bayesian Inverse Graphics for Few-Shot Concept Learning [3.475273727432576]
We present a Bayesian model of perception that learns using only minimal data.
We show how this representation can be used for downstream tasks such as few-shot classification and estimation.
arXiv Detail & Related papers (2024-09-12T18:30:41Z) - Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z) - Learning Gaussian Representation for Eye Fixation Prediction [54.88001757991433]
Existing eye fixation prediction methods perform the mapping from input images to the corresponding dense fixation maps generated from raw fixation points.
We introduce Gaussian Representation for eye fixation modeling.
We design our framework on top of lightweight backbones to achieve real-time fixation prediction.
arXiv Detail & Related papers (2024-03-21T20:28:22Z) - Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation [71.21346469382821]
We introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models.
CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods.
arXiv Detail & Related papers (2023-12-26T06:31:28Z) - A transport approach to sequential simulation-based inference [0.0]
We present a new transport-based approach to efficiently perform sequential Bayesian inference of static model parameters.
The strategy is based on the extraction of the conditional distribution from the joint distribution of parameters and data, via the estimation of structured (e.g., block triangular) transport maps.
This allows gradient-based characterizations of the posterior density via transport maps in a model-free, online phase.
arXiv Detail & Related papers (2023-08-26T18:53:48Z) - PAMI: partition input and aggregate outputs for model interpretation [69.42924964776766]
In this study, a simple yet effective visualization framework called PAMI is proposed based on the observation that deep learning models often aggregate features from local regions for model predictions.
The basic idea is to mask the majority of the input and use the corresponding model output as the relative contribution of the preserved input part to the original model prediction (a minimal sketch of this idea appears after this list).
Extensive experiments on multiple tasks confirm the proposed method performs better than existing visualization approaches in more precisely finding class-specific input regions.
arXiv Detail & Related papers (2023-02-07T08:48:34Z) - Bayesian Neural Network Inference via Implicit Models and the Posterior Predictive Distribution [0.8122270502556371]
We propose a novel approach to perform approximate Bayesian inference in complex models such as Bayesian neural networks.
The approach is more scalable to large data than Markov Chain Monte Carlo.
We see this being useful in applications such as surrogate and physics-based models.
arXiv Detail & Related papers (2022-09-06T02:43:19Z) - Model-Based Parameter Optimization for Ground Texture Based Localization Methods [16.242924916178286]
A promising approach to accurate positioning of robots is ground texture based localization.
We derive a prediction model for localization performance, which requires only a small collection of sample images of an application area.
arXiv Detail & Related papers (2021-09-03T14:29:36Z) - Transformer-based Map Matching Model with Limited Ground-Truth Data using Transfer-Learning Approach [6.510061176722248]
In many trajectory-based applications, it is necessary to map raw GPS trajectories onto road networks in digital maps.
In this paper, we consider the map-matching task from the data perspective, proposing a deep learning-based map-matching model.
We generate synthetic trajectory data to pre-train the Transformer model and then fine-tune the model with a limited number of ground-truth data.
arXiv Detail & Related papers (2021-08-01T11:51:11Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that this approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
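As referenced in the PAMI entry above, the following is a minimal sketch of the mask-and-aggregate idea: preserve one patch of the input at a time, query the black-box model, and aggregate the per-patch scores into a coarse saliency map. The patch grid, baseline fill value, and function names are illustrative assumptions, not the PAMI authors' implementation.

```python
# Sketch: patch-level saliency by masking most of the input and scoring the
# preserved patch with a black-box `predict` callable (probabilities out).
import numpy as np


def patch_saliency(image, predict, target_class, grid=(7, 7), fill=0.0):
    """Return a grid-shaped saliency map built from per-patch model scores."""
    h, w = image.shape[:2]
    gy, gx = grid
    ys = np.linspace(0, h, gy + 1, dtype=int)
    xs = np.linspace(0, w, gx + 1, dtype=int)
    saliency = np.zeros(grid)
    for i in range(gy):
        for j in range(gx):
            masked = np.full_like(image, fill)  # mask the majority of the input
            sl = (slice(ys[i], ys[i + 1]), slice(xs[j], xs[j + 1]))
            masked[sl] = image[sl]              # preserve one patch
            saliency[i, j] = predict(masked)[target_class]
    return saliency
```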
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.