Taming Discrete Integration via the Boon of Dimensionality
- URL: http://arxiv.org/abs/2010.10724v1
- Date: Wed, 21 Oct 2020 02:32:51 GMT
- Title: Taming Discrete Integration via the Boon of Dimensionality
- Authors: Jeffrey M. Dudek, Dror Fried, Kuldeep S. Meel
- Abstract summary: We propose an efficient reduction of discrete integration to model counting.
We perform detailed empirical analysis over benchmarks arising from neural network verification domains.
DeWeight is the first technique to compute estimates with provable guarantees for this class of benchmarks.
- Score: 36.55732373661026
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrete integration is a fundamental problem in computer science that
concerns the computation of discrete sums over exponentially large sets.
Despite intense interest from researchers for over three decades, the design of
scalable techniques for computing estimates with rigorous guarantees for
discrete integration remains the holy grail. The key contribution of this work
addresses this scalability challenge via an efficient reduction of discrete
integration to model counting. The proposed reduction is achieved via a
significant increase in the dimensionality that, contrary to conventional
wisdom, leads to solving an instance of the relatively simpler problem of model
counting.
Building on the promising approach proposed by Chakraborty et al., our work
overcomes the key weakness of their approach: a restriction to dyadic weights.
We augment our proposed reduction, called DeWeight, with a state-of-the-art
approximate model counter and perform a detailed empirical analysis over
benchmarks arising from neural network verification domains, an emerging
application area of critical importance. DeWeight, to the best of our
knowledge, is the first technique to compute estimates with provable guarantees
for this class of benchmarks.
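To make the reduction concrete: in the dyadic case handled by Chakraborty et al., a variable of weight k/2^m is simulated with m fresh Boolean variables, so that an unweighted model count of the augmented formula, rescaled by 2^m per weighted variable, equals the weighted sum. The sketch below is a minimal illustration of that idea, with brute-force enumeration standing in for a real model counter; the function names are ours, not the DeWeight implementation, and DeWeight's contribution is precisely to lift the dyadic restriction.

```python
from itertools import product
from math import prod

def weighted_count_direct(formula, n, weights):
    """Ground-truth weighted model count by brute force.
    formula: predicate over an n-tuple of Booleans.
    weights: {var_index: (k, m)} meaning w(var=True) = k / 2**m and
             w(var=False) = 1 - k / 2**m; unlisted variables weigh 1 either way.
    """
    total = 0.0
    for bits in product([False, True], repeat=n):
        if formula(bits):
            total += prod(k / 2**m if bits[v] else 1 - k / 2**m
                          for v, (k, m) in weights.items())
    return total

def weighted_count_by_reduction(formula, n, weights):
    """Reduce to UNWEIGHTED counting: for a variable x with weight k / 2**m,
    add m fresh variables a and the constraint "int(a) < k iff x". Each model
    of the original formula then splits into k (or 2**m - k) models of the
    augmented formula, so the weighted count is the raw count / prod(2**m)."""
    count = 0
    for bits in product([False, True], repeat=n):
        if formula(bits):
            # Number of auxiliary assignments consistent with this model.
            count += prod(k if bits[v] else 2**m - k
                          for v, (k, m) in weights.items())
    return count / prod(2**m for _, m in weights.values())

# Tiny check: formula (x0 OR x1) with w(x0 = True) = 3/8, w(x1 = True) = 1/2.
f = lambda b: b[0] or b[1]
w = {0: (3, 3), 1: (1, 1)}
assert abs(weighted_count_direct(f, 2, w)
           - weighted_count_by_reduction(f, 2, w)) < 1e-12  # both 11/16
```

The extra dimensions are exactly the auxiliary variables: the instance gets larger, but the problem becomes plain model counting, which mature counters handle well.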
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose a more general dynamic network, QuEE, that combines both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z)
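A minimal sketch of the early-exit decision described in the entry above, assuming each exit carries a lightweight predictor of the accuracy improvement obtainable by continuing; the blocks, heads, `predict_gain` callables, and threshold are hypothetical stand-ins, and the quantization half of QuEE is omitted.

```python
def dynamic_forward(x, blocks, exit_heads, gain_predictors, threshold=0.05):
    """Run blocks sequentially; after each one, a lightweight predictor
    estimates the accuracy improvement expected from further computation.
    If the predicted gain falls below `threshold`, return the current
    head's output (i.e., exit early instead of spending more compute)."""
    h = x
    for block, head, predict_gain in zip(blocks, exit_heads, gain_predictors):
        h = block(h)
        if predict_gain(h) < threshold:  # little left to gain: stop here
            return head(h)
    return exit_heads[-1](h)  # no early exit triggered
```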
- Attention is Naturally Sparse with Gaussian Distributed Input [8.602260591839318]
This study presents a rigorous theoretical analysis of the sparsity in attention scores within Large Language Models (LLMs).
Our main contribution lies in providing a detailed theoretical examination of how sparsity manifests in attention mechanisms, offering insights into the potential trade-offs between computational savings and model effectiveness.
arXiv Detail & Related papers (2024-04-03T12:37:34Z)
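A quick numpy illustration of the phenomenon analyzed above: with Gaussian queries and keys, a noticeable share of softmax attention weights falls below a small cutoff. The sequence length, head dimension, and epsilon here are arbitrary demo choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 64                      # sequence length, head dimension
Q = rng.standard_normal((n, d))     # Gaussian queries
K = rng.standard_normal((n, d))     # Gaussian keys

scores = Q @ K.T / np.sqrt(d)       # scaled dot-product attention logits
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)

# Fraction of attention weights that are effectively zero.
eps = 1e-3
print(f"share of weights below {eps}: {(attn < eps).mean():.2%}")
```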
- XPose: eXplainable Human Pose Estimation [21.738680136615127]
XPose is a framework that incorporates Explainable AI (XAI) principles into pose estimation.
Group Shapley Value (GSV) organizes keypoints into clusters based on their interdependencies.
GSV meticulously calculates Shapley values for intra-cluster keypoints, while for inter-cluster keypoints it opts for a more holistic group-level valuation.
arXiv Detail & Related papers (2024-03-19T02:29:34Z)
- Neighbor-Aware Calibration of Segmentation Networks with Penalty-Based Constraints [19.897181782914437]
We propose a principled and simple solution based on equality constraints on the logit values, which makes it possible to explicitly control both the enforced constraint and the weight of the penalty.
Our approach can be used to train a wide range of deep segmentation networks.
arXiv Detail & Related papers (2024-01-25T19:46:57Z)
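A generic sketch of a penalty-based relaxation of an equality constraint on logits, in the spirit of the entry above but not the paper's exact loss: `neighbor_logits` stands for whatever reference the constraint ties each logit to (here assumed to be a precomputed neighborhood mean), and the margin and weight are the two explicit knobs the summary mentions.

```python
import numpy as np

def softmax_ce(logits, labels):
    """Pixel-wise cross-entropy; logits: (N, C), labels: (N,)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def penalized_loss(logits, labels, neighbor_logits, margin=0.0, lam=0.1):
    """Cross-entropy plus a linear penalty relaxing the equality constraint
    logits == neighbor_logits: only deviations beyond `margin` are charged,
    and `lam` weighs the penalty, so both the enforced constraint and its
    strength are explicit, tunable knobs."""
    violation = np.abs(logits - neighbor_logits) - margin
    penalty = np.maximum(violation, 0.0).mean()
    return softmax_ce(logits, labels) + lam * penalty
```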
- Fast Shapley Value Estimation: A Unified Approach [71.92014859992263]
We propose a straightforward and efficient Shapley estimator, SimSHAP, by eliminating redundant techniques.
In our analysis of existing approaches, we observe that estimators can be unified as a linear transformation of randomly summed values from feature subsets.
Our experiments validate the effectiveness of our SimSHAP, which significantly accelerates the computation of accurate Shapley values.
arXiv Detail & Related papers (2023-11-02T06:09:24Z)
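For intuition about estimators built from randomly summed values over feature subsets, here is the classic permutation-sampling Shapley estimator, a standard baseline rather than SimSHAP itself; `value_fn` scores a feature subset, e.g., a model evaluated with the remaining features masked.

```python
import random

def shapley_permutation(value_fn, n_features, n_samples=1000, seed=0):
    """Unbiased Monte-Carlo Shapley estimate: in a random permutation,
    each feature is credited with the marginal gain of adding it to the
    set of features that precede it."""
    rng = random.Random(seed)
    phi = [0.0] * n_features
    for _ in range(n_samples):
        perm = list(range(n_features))
        rng.shuffle(perm)
        subset, prev = set(), value_fn(frozenset())
        for f in perm:
            subset.add(f)
            cur = value_fn(frozenset(subset))
            phi[f] += cur - prev
            prev = cur
    return [p / n_samples for p in phi]

# Toy check on an additive game v(S) = sum of weights in S, where the
# Shapley value of each feature equals its own weight.
wts = [0.5, 1.0, 2.0]
v = lambda S: sum(wts[i] for i in S)
print(shapley_permutation(v, 3, n_samples=200))  # approx [0.5, 1.0, 2.0]
```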
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
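The Deep Equilibrium framework mentioned above solves for a fixed point z* = f(z*, x) instead of unrolling a fixed number of layers; below is a minimal sketch using damped fixed-point iteration, where `f` abstracts the learned update (the paper's learnable regularizer would live inside it).

```python
import numpy as np

def deq_solve(f, x, z0, damping=0.5, tol=1e-6, max_iter=500):
    """Find z* with z* = f(z*, x) by damped fixed-point iteration; a DEQ
    treats this z* as the network's output and differentiates through the
    equilibrium (e.g., implicitly) rather than through the loop."""
    z = z0
    for _ in range(max_iter):
        z_next = (1 - damping) * z + damping * f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contraction standing in for the learned update: converges to a point
# blending the observation x with a squashed copy of z.
f = lambda z, x: 0.4 * np.tanh(z) + 0.6 * x
x = np.array([0.2, -1.0, 0.5])
print(deq_solve(f, x, np.zeros_like(x)))
```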
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations which leverages classical notions of counter-example-guided abstraction refinement.
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
- Efficient Inference via Universal LSH Kernel [35.22983601434134]
We propose Representer Sketch, a concise set of count arrays with provable guarantees that approximates the inference procedure with simple hashing computations and aggregations.
Representer Sketch builds upon the popular Representer Theorem from kernel literature, hence the name.
We show that Representer Sketch achieves up to 114x reduction in storage requirement and 59x reduction in complexity without any drop in accuracy.
arXiv Detail & Related papers (2021-06-21T22:06:32Z)
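As a generic illustration of the count-array pattern in the entry above (not the Representer Sketch construction itself), a count-min sketch shows the idea: constant-size arrays, hashed updates, and a min-aggregation at query time.

```python
import hashlib

class CountMinSketch:
    """d hash rows of width w; an update adds into one counter per row, and
    a query takes the minimum across rows, overestimating the true count by
    at most the collision noise."""
    def __init__(self, width=1024, depth=4):
        self.w, self.d = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.w

    def add(self, key, amount=1):
        for r in range(self.d):
            self.table[r][self._index(r, key)] += amount

    def query(self, key):
        return min(self.table[r][self._index(r, key)] for r in range(self.d))

cms = CountMinSketch()
for token in ["a", "b", "a", "c", "a"]:
    cms.add(token)
print(cms.query("a"))  # 3 (possibly more under hash collisions)
```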