Taming Discrete Integration via the Boon of Dimensionality
- URL: http://arxiv.org/abs/2010.10724v1
- Date: Wed, 21 Oct 2020 02:32:51 GMT
- Title: Taming Discrete Integration via the Boon of Dimensionality
- Authors: Jeffrey M. Dudek, Dror Fried, Kuldeep S. Meel
- Abstract summary: We propose an efficient reduction of discrete integration to model counting.
We perform detailed empirical analysis over benchmarks arising from neural network verification domains.
DeWeight is the first technique to compute estimates with provable guarantees for this class of benchmarks.
- Score: 36.55732373661026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrete integration is a fundamental problem in computer science that
concerns the computation of discrete sums over exponentially large sets.
Despite intense interest from researchers for over three decades, the design of
scalable techniques for computing estimates with rigorous guarantees for
discrete integration remains the holy grail. The key contribution of this work
addresses this scalability challenge via an efficient reduction of discrete
integration to model counting. The proposed reduction is achieved via a
significant increase in the dimensionality that, contrary to conventional
wisdom, leads to solving an instance of the relatively simpler problem of model
counting.
Building on the promising approach proposed by Chakraborty et al., our work
overcomes the key weakness of their approach: a restriction to dyadic weights.
We augment our proposed reduction, called DeWeight, with an efficient,
state-of-the-art approximate model counter and perform a detailed empirical analysis
over benchmarks arising from neural network verification domains, an emerging
application area of critical importance. DeWeight, to the best of our
knowledge, is the first technique to compute estimates with provable guarantees
for this class of benchmarks.
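To make the reduction concrete, here is a minimal Python sketch of the dyadic-weight encoding the Chakraborty et al. line of work relies on: a variable with weight k/2^m is exchanged for m fresh Boolean variables constrained so that an unweighted model count over the enlarged variable set recovers the weighted sum. Everything below (function names, the brute-force counter) is illustrative only; the paper couples its reduction with an approximate model counter (such as ApproxMC) rather than enumeration, and DeWeight's own encoding, which lifts the dyadic restriction, is specified in the paper.
```python
from fractions import Fraction
from itertools import product

def weighted_count(formula, weights, n):
    """Brute-force weighted model count over n Boolean variables.
    weights[i] is the weight of variable i being True (1 - weights[i] for False)."""
    total = Fraction(0)
    for bits in product([False, True], repeat=n):
        if formula(bits):
            w = Fraction(1)
            for i, b in enumerate(bits):
                w *= weights[i] if b else 1 - weights[i]
            total += w
    return total

def reduced_unweighted_count(formula, weights, n):
    """Dyadic reduction: a variable v with weight k / 2^m gets m fresh bits and
    the constraint v <-> (bits-as-integer < k); the augmented formula's plain
    model count, divided by 2^(total fresh bits), equals the weighted count.
    Here the fresh-bit assignments are tallied analytically per variable."""
    params = [(w.numerator, w.denominator.bit_length() - 1) for w in weights]
    total_aux = sum(m for _, m in params)
    count = 0
    for bits in product([False, True], repeat=n):
        if formula(bits):
            ways = 1
            for (k, m), b in zip(params, bits):
                ways *= k if b else (1 << m) - k  # consistent aux assignments
            count += ways
    return Fraction(count, 1 << total_aux)

# Tiny check: F = (x0 or x1) with w(x0) = 3/8, w(x1) = 1/2.
f = lambda bits: bits[0] or bits[1]
ws = [Fraction(3, 8), Fraction(1, 2)]
assert weighted_count(f, ws, 2) == reduced_unweighted_count(f, ws, 2)  # 11/16
```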
Related papers
- Decentralized Inference for Spatial Data Using Low-Rank Models [4.168323530566095]
This paper presents a decentralized framework tailored for parameter inference in spatial low-rank models.
A key obstacle arises from the spatial dependence among observations, which prevents the log-likelihood from being expressed as a summation.
Our approach employs a block descent method integrated with multi-consensus and dynamic consensus averaging for effective parameter optimization.
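As a rough illustration of the averaging primitive such decentralized schemes rest on, here is a schematic (not the paper's actual update) of one block-descent step followed by consensus mixing; the mixing matrix W, step size, and function name are hypothetical.
```python
import numpy as np

def block_descent_consensus_step(theta, local_grads, W, lr=0.01):
    """Schematic decentralized update: each node descends on its local
    objective, then averages parameters with neighbours via a doubly
    stochastic mixing matrix W. theta, local_grads: (n_nodes, dim)."""
    theta = theta - lr * local_grads  # local block-descent step per node
    return W @ theta                  # (dynamic) consensus averaging
```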
arXiv Detail & Related papers (2025-02-01T04:17:01Z)
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that can combine both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
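A schematic of the decision rule such a network could implement: after each block, a learned predictor estimates the accuracy still to be gained by computing further, and inference stops once that estimate no longer justifies the cost. The names and the threshold rule below are illustrative, not the paper's.
```python
def early_exit_inference(blocks, exit_heads, gain_predictors, x, cost=0.01):
    """Run blocks sequentially; exit through the current head as soon as the
    predicted accuracy improvement from continuing drops below the compute
    cost of one more block (hypothetical stopping rule)."""
    h = x
    for block, head, predict_gain in zip(blocks, exit_heads, gain_predictors):
        h = block(h)
        if predict_gain(h) < cost:
            return head(h)
    return exit_heads[-1](h)
```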
arXiv Detail & Related papers (2024-06-20T15:25:13Z)
- XPose: eXplainable Human Pose Estimation [21.738680136615127]
XPose is a framework that incorporates Explainable AI (XAI) principles into pose estimation.
Group Shapley Value (GSV) organizes keypoints into clusters based on their interdependencies.
GSV meticulously calculates Shapley values for intra-cluster keypoints, while for inter-cluster keypoints it opts for a more holistic group-level valuation.
arXiv Detail & Related papers (2024-03-19T02:29:34Z)
- Neighbor-Aware Calibration of Segmentation Networks with Penalty-Based Constraints [19.897181782914437]
We propose a principled and simple solution based on equality constraints on the logit values, which enables explicit control of both the enforced constraint and the weight of the penalty.
Our approach can be used to train a wide span of deep segmentation networks.
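A plausible PyTorch sketch of a penalty of this kind, assuming the equality constraint ties each pixel's logits to the mean logits of its 3x3 neighbourhood; the paper's exact constraint and penalty form may differ, and `weight` stands for the explicit penalty weight the summary mentions.
```python
import torch
import torch.nn.functional as F

def neighbour_logit_penalty(logits, weight=0.1):
    """Penalty pushing each pixel's logits toward its 3x3 neighbourhood mean
    (a softened equality constraint). logits: (B, C, H, W)."""
    b, c, h, w = logits.shape
    kernel = torch.ones(1, 1, 3, 3, device=logits.device) / 9.0
    neigh = F.conv2d(logits.reshape(b * c, 1, h, w), kernel, padding=1)
    return weight * (logits - neigh.reshape(b, c, h, w)).abs().mean()
```
In training, such a term would simply be added to the segmentation loss, e.g. `loss = ce_loss + neighbour_logit_penalty(logits)`.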
arXiv Detail & Related papers (2024-01-25T19:46:57Z)
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
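For orientation, here is a generic permutation-sampling Shapley estimator (not SimSHAP itself, whose linear-transformation form is given in the paper); `value` is any set function over feature subsets.
```python
import random

def shapley_permutation_estimate(value, n_features, n_samples=1000, seed=0):
    """Unbiased Monte Carlo Shapley values: average each feature's marginal
    contribution over random orderings of the features."""
    rng = random.Random(seed)
    phi = [0.0] * n_features
    for _ in range(n_samples):
        order = list(range(n_features))
        rng.shuffle(order)
        coalition, prev = set(), value(frozenset())
        for i in order:
            coalition.add(i)
            cur = value(frozenset(coalition))
            phi[i] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]
```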
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
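As a sketch of the function-estimation step only (the spike-and-slab priors and the EP-EM inference are beyond a few lines), here is plain RBF kernel ridge regression; the names and hyperparameters are ours, not the paper's.
```python
import numpy as np

def kernel_ridge_predict(X, y, X_query, lengthscale=1.0, lam=1e-3):
    """RBF kernel ridge regression: solve alpha = (K + lam I)^-1 y on the
    training data, then predict at X_query. X: (n, d), y: (n,), X_query: (m, d)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)
    alpha = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)
    return rbf(X_query, X) @ alpha
```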
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
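The core computation of a Deep Equilibrium layer is a fixed-point solve; a minimal sketch follows, with `f` standing in for the paper's learned optimization step (regularizer network included) and naive iteration standing in for an accelerated root-finder.
```python
import numpy as np

def deq_fixed_point(f, x, tol=1e-6, max_iter=500):
    """Iterate z <- f(z, x) until convergence; the result z* satisfies
    z* = f(z*, x), the defining equation of a Deep Equilibrium layer."""
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) <= tol * (1 + np.linalg.norm(z)):
            return z_next
        z = z_next
    return z
```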
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Trust your neighbours: Penalty-based constraints for model calibration [19.437451462590108]
We present a constrained optimization perspective of SVLS and demonstrate that it enforces an implicit constraint on soft class proportions of surrounding pixels.
We propose a principled and simple solution based on equality constraints on the logit values, which enables explicit control of both the enforced constraint and the weight of the penalty.
arXiv Detail & Related papers (2023-03-11T01:10:26Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
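To see why refinement helps, note that sigmoid is monotone, so on any interval [a, b] it is soundly bracketed by [sigmoid(a), sigmoid(b)]; splitting an interval at a spurious counter-example tightens the abstraction. The sketch below shows only this bounding step, not the paper's meta-algorithm.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_interval_bounds(lo, hi, split_points=()):
    """Piecewise bounds on sigmoid over [lo, hi]: each piece [a, b] yields the
    sound enclosure [sigmoid(a), sigmoid(b)] by monotonicity. Adding split
    points (e.g. at counter-examples) refines the abstraction."""
    pts = np.sort(np.concatenate(([lo, hi], np.atleast_1d(list(split_points)))))
    return [(a, b, sigmoid(a), sigmoid(b)) for a, b in zip(pts[:-1], pts[1:])]
```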
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Efficient Inference via Universal LSH Kernel [35.22983601434134]
We propose mathematically provable Representer Sketch, a concise set of count arrays that can approximate the inference procedure with simple hashing computations and aggregations.
Representer Sketch builds upon the popular Representer Theorem from kernel literature, hence the name.
We show that Representer Sketch achieves up to 114x reduction in storage requirement and 59x reduction in complexity without any drop in accuracy.
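For flavour, here is a toy count-array structure built on the same hash-and-aggregate pattern; this is a plain count-min sketch, not the paper's Representer Sketch construction or its guarantees.
```python
import numpy as np

class CountArrays:
    """Count-min-style structure: a few hashed count arrays; queries take the
    row-wise minimum, which upper-bounds the truth for non-negative weights."""
    def __init__(self, n_rows=4, n_cols=256, seed=0):
        rng = np.random.default_rng(seed)
        self.salts = [int(s) for s in rng.integers(1, 2**31, size=n_rows)]
        self.table = np.zeros((n_rows, n_cols))

    def add(self, key, weight):
        for r, salt in enumerate(self.salts):
            self.table[r, hash((salt, key)) % self.table.shape[1]] += weight

    def query(self, key):
        return min(self.table[r, hash((salt, key)) % self.table.shape[1]]
                   for r, salt in enumerate(self.salts))
```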
arXiv Detail & Related papers (2021-06-21T22:06:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.