Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic
Constraints via Message Passing
- URL: http://arxiv.org/abs/2003.00126v2
- Date: Wed, 19 Aug 2020 22:41:13 GMT
- Title: Scaling up Hybrid Probabilistic Inference with Logical and Arithmetic
Constraints via Message Passing
- Authors: Zhe Zeng, Paolo Morettin, Fanqi Yan, Antonio Vergari, Guy Van den
Broeck
- Abstract summary: Weighted model integration (WMI) makes it possible to express the complex dependencies of real-world problems.
Existing WMI solvers are not ready to scale to these problems.
We devise a scalable WMI solver based on message passing, MP-WMI.
- Score: 38.559697064390015
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Weighted model integration (WMI) is a very appealing framework for
probabilistic inference: it allows one to express the complex dependencies of
real-world problems, where variables are both continuous and discrete, via the
language of Satisfiability Modulo Theories (SMT), as well as to compute
probabilistic queries with complex logical and arithmetic constraints. Yet,
existing WMI solvers are not ready to scale to these problems: they either
entirely ignore the intrinsic dependency structure of the problem, or they are
limited to overly restrictive structures. To narrow this gap, we derive a
factorized formalism of WMI that enables us to devise a scalable WMI solver
based on message passing, MP-WMI. Namely, MP-WMI is the first WMI solver that
can: 1) perform exact inference on the full class of tree-structured WMI
problems; 2) compute all marginal densities in linear time; and 3) amortize
inference across queries. Experimental results show that our solver
dramatically outperforms existing WMI solvers on a large set of benchmarks.
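The listing gives no pseudocode; as a purely illustrative sketch (not the authors' MP-WMI implementation), the snippet below runs classical sum-product message passing on a small tree of discrete variables, computing all node marginals with one message per directed edge, i.e. in linear time in the number of edges. In actual WMI, the sums become symbolic integrals of piecewise polynomials over SMT-defined regions; the chain structure 0-1-2 and all potential values here are made-up toy numbers.

```python
import itertools
import numpy as np

# Toy tree-structured model over three binary variables, chain 0 - 1 - 2.
# MP-WMI passes messages over piecewise-polynomial densities under SMT
# constraints; here a discrete sum-product stands in for symbolic integration.
psi = [np.array([1.0, 2.0]),   # unary potential of x0
       np.array([1.5, 0.5]),   # unary potential of x1
       np.array([0.7, 1.3])]   # unary potential of x2
phi = {(0, 1): np.array([[2.0, 1.0], [1.0, 3.0]]),
       (1, 2): np.array([[1.0, 2.0], [2.0, 1.0]])}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
_cache = {}  # memoized messages: one per directed edge

def edge_pot(i, j):
    """Pairwise potential indexed as [x_i, x_j], for either edge direction."""
    return phi[(i, j)] if (i, j) in phi else phi[(j, i)].T

def message(src, dst):
    """m_{src->dst}(x_dst) = sum_{x_src} psi_src * phi * incoming messages."""
    if (src, dst) not in _cache:
        incoming = np.ones(2)
        for k in neighbors[src]:
            if k != dst:
                incoming = incoming * message(k, src)
        _cache[(src, dst)] = edge_pot(src, dst).T @ (psi[src] * incoming)
    return _cache[(src, dst)]

def marginal(i):
    p = psi[i] * np.prod([message(k, i) for k in neighbors[i]], axis=0)
    return p / p.sum()

def brute_marginal(i):
    # Exhaustive enumeration of all 2^3 assignments, for validation only.
    p = np.zeros(2)
    for x in itertools.product([0, 1], repeat=3):
        w = psi[0][x[0]] * psi[1][x[1]] * psi[2][x[2]]
        w *= phi[(0, 1)][x[0], x[1]] * phi[(1, 2)][x[1], x[2]]
        p[x[i]] += w
    return p / p.sum()

for i in range(3):
    assert np.allclose(marginal(i), brute_marginal(i))
print("message-passing marginals match brute-force enumeration")
```

Amortization shows up in the `_cache`: once the two-pass message schedule has run, every additional marginal (or, in MP-WMI, every additional query) reuses the stored messages instead of recomputing them.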
Related papers
- Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo [90.78001821963008]
A wide range of LM applications require generating text that conforms to syntactic or semantic constraints.
We develop an architecture for controlled LM generation based on sequential Monte Carlo (SMC).
Our system builds on the framework of Lew et al. (2023) and integrates with its language model probabilistic programming language.
arXiv Detail & Related papers (2025-04-17T17:49:40Z)
- Syzygy of Thoughts: Improving LLM CoT with the Minimal Free Resolution [59.39066657300045]
Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps.
We propose Syzygy of Thoughts (SoT)-a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths.
SoT captures deeper logical dependencies, enabling more robust and structured problem-solving.
arXiv Detail & Related papers (2025-04-13T13:35:41Z)
- Benchmarking Multi-modal Semantic Segmentation under Sensor Failures: Missing and Noisy Modality Robustness [61.87055159919641]
Multi-modal semantic segmentation (MMSS) addresses the limitations of single-modality data by integrating complementary information across modalities.
Despite notable progress, a significant gap persists between research and real-world deployment due to variability and uncertainty in multi-modal data quality.
We introduce a robustness benchmark that evaluates MMSS models under three scenarios: Entire-Missing Modality (EMM), Random-Missing Modality (RMM), and Noisy Modality (NM).
arXiv Detail & Related papers (2025-03-24T08:46:52Z)
- Boost, Disentangle, and Customize: A Robust System2-to-System1 Pipeline for Code Generation [58.799397354312596]
Large language models (LLMs) have demonstrated remarkable capabilities in various domains, particularly in system 1 tasks.
Research on System2-to-System1 methods has recently surged, exploring System 2 reasoning knowledge via inference-time computation.
In this paper, we focus on code generation, which is a representative System 2 task, and identify two primary challenges.
arXiv Detail & Related papers (2025-02-18T03:20:50Z)
- Multi-task Representation Learning for Mixed Integer Linear Programming [13.106799330951842]
This paper introduces the first multi-task learning framework for ML-guided MILP solving.
We demonstrate that our multi-task learning model performs similarly to specialized models within the same distribution.
It significantly outperforms them in generalization across problem sizes and tasks.
arXiv Detail & Related papers (2024-12-18T23:33:32Z)
- MR-Ben: A Meta-Reasoning Benchmark for Evaluating System-2 Thinking in LLMs [55.20845457594977]
Large language models (LLMs) have shown increasing capability in problem-solving and decision-making.
We present a process-based benchmark MR-Ben that demands a meta-reasoning skill.
Our meta-reasoning paradigm is especially suited for system-2 slow thinking.
arXiv Detail & Related papers (2024-06-20T03:50:23Z)
- A Deep Instance Generative Framework for MILP Solvers Under Limited Data Availability [66.37474135424637]
We propose G2MILP, the first deep generative framework for MILP instances.
G2MILP represents MILP instances as bipartite graphs, and applies a masked variational autoencoder to iteratively corrupt and replace parts of the original graphs to generate new ones.
We design a suite of benchmarks to evaluate the quality of the generated MILP instances.
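To make the bipartite encoding concrete, here is a minimal, hypothetical sketch (not the paper's exact featurization) of how a MILP instance min c·x s.t. Ax ≤ b can be represented as a variable-constraint bipartite graph: one node per variable, one node per constraint, and an edge carrying the coefficient A[i, j] whenever variable j appears in constraint i.

```python
import numpy as np

# A tiny illustrative MILP: min c @ x  s.t.  A @ x <= b, x integer.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0],
              [0.0, 3.0]])
b = np.array([4.0, 6.0])

# Bipartite-graph view: variable nodes, constraint nodes, weighted edges.
# Node feature dicts here are placeholders for a learned featurization.
var_nodes = [{"obj_coef": float(cj)} for cj in c]
con_nodes = [{"rhs": float(bi), "sense": "<="} for bi in b]
edges = [(i, j, float(A[i, j]))
         for i in range(A.shape[0])
         for j in range(A.shape[1])
         if A[i, j] != 0.0]

print(edges)  # [(0, 0, 1.0), (0, 1, 1.0), (1, 1, 3.0)]
```

A masked VAE in the G2MILP style would then corrupt part of this graph (e.g. mask one constraint node and its edges) and train a decoder to regenerate it, yielding new instances that stay close to the original distribution.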
arXiv Detail & Related papers (2023-10-04T13:34:34Z)
- Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
The multimodal entity linking (MEL) task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z)
- Enhancing SMT-based Weighted Model Integration by Structure Awareness [10.812681884889697]
Weighted Model Integration (WMI) emerged as a unifying formalism for probabilistic inference in hybrid domains.
We develop an algorithm that combines SMT-based enumeration, an efficient technique in formal verification, with an effective encoding of the problem structure.
arXiv Detail & Related papers (2023-02-13T08:55:12Z)
- A robust estimator of mutual information for deep learning interpretability [2.574652392763709]
We present GMM-MI, an algorithm that can be applied to both discrete and continuous settings.
We extensively validate GMM-MI on toy data for which the ground truth MI is known.
We then demonstrate the use of our MI estimator in the context of representation learning.
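The "toy data for which the ground truth MI is known" setting can be reproduced with a distribution whose MI has a closed form. The sketch below (an illustration, not GMM-MI itself) computes the analytic MI of a correlated bivariate Gaussian, −½ ln(1 − ρ²), and checks it against a Monte Carlo estimate that uses the true densities; a real estimator such as GMM-MI would have to recover this value from samples alone.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
# Ground-truth MI of a bivariate Gaussian with correlation rho.
true_mi = -0.5 * np.log(1 - rho**2)

# Monte Carlo estimate of E[log p(x,y) - log p(x) - log p(y)], using the
# known densities as an oracle; this is the quantity an MI estimator targets.
n = 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
log_joint = (-0.5 * (x**2 - 2 * rho * x * y + y**2) / (1 - rho**2)
             - np.log(2 * np.pi) - 0.5 * np.log(1 - rho**2))
log_margs = -0.5 * x**2 - 0.5 * y**2 - np.log(2 * np.pi)
mc_mi = np.mean(log_joint - log_margs)

print(f"analytic MI = {true_mi:.4f}, Monte Carlo estimate = {mc_mi:.4f}")
assert abs(mc_mi - true_mi) < 0.05
```

Any estimator validated this way can be scored by how closely it recovers `true_mi` from the `(x, y)` samples without access to the density formulas.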
arXiv Detail & Related papers (2022-10-31T18:00:02Z)
- SMT-based Weighted Model Integration with Structure Awareness [18.615397594541665]
We develop an algorithm that combines SMT-based enumeration, an efficient technique in formal verification, with an effective encoding of the problem structure.
This allows our algorithm to avoid generating redundant models, resulting in substantial computational savings.
arXiv Detail & Related papers (2022-06-28T09:46:17Z)
- FERMI: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information [17.57634911587209]
We show that ERMI is a strong fairness violation notion in the sense that it provides upper bound guarantees on existing notions of fairness violation.
We then propose the Fair Empirical Risk Minimization via ERMI regularization framework, called FERMI.
arXiv Detail & Related papers (2021-02-24T22:15:44Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently using the solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information [105.73798100327667]
We propose a novel Contrastive Log-ratio Upper Bound (CLUB) of mutual information.
We provide a theoretical analysis of the properties of CLUB and its variational approximation.
Based on this upper bound, we introduce a MI minimization training scheme and further accelerate it with a negative sampling strategy.
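A minimal sketch of the bound, assuming the standard CLUB form I_CLUB = E_{p(x,y)}[log q(y|x)] − E_{p(x)p(y)}[log q(y|x)]. For illustration, the learned variational network q of the paper is replaced by the true Gaussian conditional, under which the sample bound should sit above the analytic MI:

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n = 0.8, 100_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def log_q(y, x):
    # True conditional p(y|x) = N(rho * x, 1 - rho^2), standing in for the
    # learned variational approximation q(y|x) used in the paper.
    var = 1 - rho**2
    return -0.5 * (y - rho * x) ** 2 / var - 0.5 * np.log(2 * np.pi * var)

# CLUB: positive (paired) term minus negative (unpaired) term; the unpaired
# expectation is estimated by shuffling y to break the (x, y) pairing.
positive = log_q(y, x).mean()
negative = log_q(rng.permutation(y), x).mean()
club = positive - negative

true_mi = -0.5 * np.log(1 - rho**2)
print(f"CLUB = {club:.3f} >= true MI = {true_mi:.3f}")
assert club >= true_mi
```

The shuffling trick is also what makes the negative-sampling acceleration mentioned in the abstract natural: the unpaired term only needs samples from the product of marginals.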
arXiv Detail & Related papers (2020-06-22T05:36:16Z)
- Monte Carlo Anti-Differentiation for Approximate Weighted Model Integration [13.14502456511936]
We introduce Monte Carlo anti-differentiation (MCAD), which computes MC approximations of anti-derivatives.
Our experiments show that equipping existing WMI solvers with MCAD yields a fast yet reliable approximate inference scheme.
arXiv Detail & Related papers (2020-01-13T23:45:10Z)
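The MCAD entry above replaces exact symbolic integration with sampling. As a one-variable illustration of that idea (a toy sketch, not the paper's algorithm), the snippet below estimates a small weighted model integral, the weight x² over the SMT-style region 0 ≤ x ≤ 1 ∧ x ≥ 0.3, by Monte Carlo and compares it with the closed form.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-dimensional WMI problem: integrate the weight w(x) = x^2 over the
# region defined by the constraints 0 <= x <= 1 and x >= 0.3.
# Exact answer: integral of x^2 from 0.3 to 1 = (1 - 0.3**3) / 3.
exact = (1 - 0.3**3) / 3

# Monte Carlo: sample uniformly from an enclosing box (here [0, 1], volume 1)
# and average the weight multiplied by the constraint indicator.
n = 1_000_000
x = rng.uniform(0.0, 1.0, n)
estimate = np.mean((x >= 0.3) * x**2)

print(f"exact = {exact:.4f}, MC estimate = {estimate:.4f}")
assert abs(estimate - exact) < 1e-2
```

The same sample-and-average scheme extends to many variables and to regions defined by conjunctions of linear arithmetic constraints, which is what makes it a drop-in approximation inside existing WMI solvers.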
This list is automatically generated from the titles and abstracts of the papers on this site.