Accelerating combinatorial filter reduction through constraints
- URL: http://arxiv.org/abs/2011.03471v1
- Date: Fri, 6 Nov 2020 16:52:06 GMT
- Title: Accelerating combinatorial filter reduction through constraints
- Authors: Yulin Zhang, Hazhar Rahmani, Dylan A. Shell, Jason M. O'Kane
- Abstract summary: This paper proposes a new formalization needing only a polynomial number of constraints, and characterizes these constraints in three different forms.
We show that constraints in conjunctive normal form capture the problem most effectively, leading to a method that outperforms others.
To leverage this observation, we introduce just-in-time generation of such constraints, which yields improvements in efficiency and has the potential to minimize large filters.
- Score: 37.032079828955425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reduction of combinatorial filters involves compressing state representations
that robots use. Such optimization arises in automating the construction of
minimalist robots. But exact combinatorial filter reduction is an NP-complete
problem and all current techniques are either inexact or formalized with
exponentially many constraints. This paper proposes a new formalization needing
only a polynomial number of constraints, and characterizes these constraints in
three different forms: nonlinear, linear, and conjunctive normal form.
Empirical results show that constraints in conjunctive normal form capture the
problem most effectively, leading to a method that outperforms the others.
Further examination indicates that a substantial proportion of constraints
remain inactive during iterative filter reduction. To leverage this
observation, we introduce just-in-time generation of such constraints, which
yields improvements in efficiency and has the potential to minimize large
filters.
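As an illustrative sketch only (not the paper's actual formalization), the idea of capturing filter reduction with CNF constraints can be toy-modeled as coloring a conflict graph over filter states: each pair of states that must not be merged contributes a short clause forbidding a shared class, and the minimal filter size is the smallest number of classes for which the clause set is satisfiable. The instance, the variable encoding, and the brute-force check below are all invented for illustration; a real pipeline would hand the clauses to a SAT solver.

```python
from itertools import product

# Hypothetical toy instance: filter states and pairs of states that must
# NOT be merged (merging them would change the filter's outputs).
states = [0, 1, 2, 3]
conflicts = {(0, 1), (1, 2), (2, 3)}  # assumed conflict pairs, for illustration

def cnf_clauses(k):
    """CNF-style constraints for merging the states into k classes.
    A literal (i, c, True) reads: state i is assigned to class c."""
    clauses = []
    for i in states:
        # Each state gets at least one class: (x_i0 OR x_i1 OR ...)
        clauses.append([(i, c, True) for c in range(k)])
    for (i, j) in conflicts:
        for c in range(k):
            # Conflicting states never share a class: (NOT x_ic OR NOT x_jc)
            clauses.append([(i, c, False), (j, c, False)])
    return clauses

def satisfiable(clauses, k):
    """Brute-force satisfiability check standing in for a real SAT solver."""
    for assignment in product(range(k), repeat=len(states)):
        def holds(lit):
            i, c, positive = lit
            return (assignment[i] == c) == positive
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return assignment
    return None

# The smallest k whose CNF is satisfiable is the minimal filter size.
best_k, best_model = None, None
for k in range(1, len(states) + 1):
    model = satisfiable(cnf_clauses(k), k)
    if model is not None:
        best_k, best_model = k, model
        break
print(f"minimal number of classes: {best_k}")
```

Note that the clause count grows only polynomially in the number of states and classes, which is the property the abstract emphasizes over earlier exponential formulations.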
Related papers
- Sparse QUBO Formulation for Efficient Embedding via Network-Based Decomposition of Equality and Inequality Constraints [0.35331503620315147]
We propose a method to construct a significantly sparser logical QUBO model for equality and inequality constraints.
By adding auxiliary variables based on specific network structures, our approach decomposes the original constraint into smaller, more manageable constraints.
Experimental results on D-Wave's hardware show that our formulation leads to substantial reductions in the number of qubits required for embedding.
arXiv Detail & Related papers (2026-01-26T03:40:06Z) - Adaptive Sparsification for Linear Programming [3.735586259382096]
We introduce a generic framework for solving linear programs with many constraints ($n \gg d$) via adaptive sparsification.
We present a quantum version of Clarkson's algorithm that finds an exact solution to an LP using $\tilde{O}(\sqrt{n}\,d^3)$ row-queries to the constraint matrix.
Our framework also yields new state-of-the-art algorithms for mixed packing and covering problems when the packing constraints are "simple".
arXiv Detail & Related papers (2025-10-09T15:36:00Z) - Single-loop Algorithms for Stochastic Non-convex Optimization with Weakly-Convex Constraints [49.76332265680669]
This paper examines a crucial subset of problems where both the objective and constraint functions are weakly convex.
Existing methods often face limitations, including slow convergence rates or reliance on double-loop designs.
We introduce a novel single-loop penalty-based algorithm to overcome these challenges.
arXiv Detail & Related papers (2025-04-21T17:15:48Z) - Filtered Markovian Projection: Dimensionality Reduction in Filtering for Stochastic Reaction Networks [0.9599644507730105]
A typical challenge in practical problems modeled by stochastic reaction networks (SRNs) is that only a few state variables can be dynamically observed.
We propose to use a dimensionality reduction technique based on the Markovian projection (MP), initially introduced for forward problems.
The novel method employs a reduced-variance particle filter for estimating the jump intensities of the projected model and solves the filtering equations in a low-dimensional space.
arXiv Detail & Related papers (2025-02-11T19:45:40Z) - ReducedLUT: Table Decomposition with "Don't Care" Conditions [2.2061683015812026]
This paper introduces ReducedLUT, a novel method to reduce the footprint of the LUTs by injecting don't cares into the compression process.
We demonstrate a particular application to machine learning; by replacing unobserved patterns within the training data of neural network models with don't cares, we enable greater compression with minimal model accuracy degradation.
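The don't-care idea admits a toy illustration (this is not the ReducedLUT algorithm itself): when some input patterns of a lookup table are never observed, their entries may take any value, which can let two halves of the table fold into a single half. The table and folding rule below are invented for illustration.

```python
# Toy 3-input LUT with 8 entries; None marks a "don't care"
# (an input pattern that never occurs in the data).
lut = [0, 1, None, 1, 0, 1, 0, None]

def compatible(a, b):
    """Two entries can share a value if either is a don't care or they agree."""
    return a is None or b is None or a == b

def try_fold(table):
    """If the two halves agree entry-by-entry (up to don't cares),
    store only one merged half, shrinking the table by 2x."""
    n = len(table)
    lo, hi = table[: n // 2], table[n // 2 :]
    if all(compatible(x, y) for x, y in zip(lo, hi)):
        # Merge, preferring a concrete value over a don't care.
        return [x if x is not None else y for x, y in zip(lo, hi)]
    return table

folded = try_fold(lut)
print(folded)
```

Without the two don't cares the halves would conflict and no folding would be possible, which mirrors how injecting don't cares enables greater compression.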
arXiv Detail & Related papers (2024-12-24T18:11:01Z) - The Rank-Reduced Kalman Filter: Approximate Dynamical-Low-Rank Filtering
In High Dimensions [32.30527731746912]
We propose a novel approximate filtering and smoothing method which propagates low-rank approximations of the covariance matrices.
Our method reduces computational complexity from cubic (for the Kalman filter) to quadratic in the state-space size in the worst case.
arXiv Detail & Related papers (2023-06-13T13:50:31Z) - Symmetric Tensor Networks for Generative Modeling and Constrained
Combinatorial Optimization [72.41480594026815]
Constrained optimization problems abound in industry, from portfolio optimization to logistics.
One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space.
In this work, we encode arbitrary integer-valued equality constraints of the form $Ax=b$ directly into U(1) symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models.
arXiv Detail & Related papers (2022-11-16T18:59:54Z) - Computational Doob's h-transforms for Online Filtering of Discretely
Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z) - Reverse image filtering using total derivative approximation and
accelerated gradient descent [82.93345261434943]
We address a new problem of reversing the effect of an image filter, which can be linear or nonlinear.
The assumption is that the algorithm of the filter is unknown and the filter is available as a black box.
We formulate this inverse problem as minimizing a local patch-based cost function and use total derivative to approximate the gradient which is used in gradient descent to solve the problem.
arXiv Detail & Related papers (2021-12-08T05:16:11Z) - Semi-Sparsity for Smoothing Filters [1.1404527665142667]
We show a new semi-sparsity smoothing algorithm based on a novel sparsity-inducing framework.
We show many benefits to a series of signal/image processing and computer vision applications.
arXiv Detail & Related papers (2021-07-01T17:31:42Z) - Filtering Rules for Flow Time Minimization in a Parallel Machine
Scheduling Problem [0.0]
This paper studies the scheduling of jobs of different families on parallel machines with qualification constraints.
The goal is to minimize both the flow time and the number of disqualifications.
This paper uses an algorithm that minimizes the flow time for a single-machine relaxation in which disqualifications are not considered.
arXiv Detail & Related papers (2020-11-20T10:00:14Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer called the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and enables fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
arXiv Detail & Related papers (2020-07-07T21:26:35Z)
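The idea of processing only a subset of the constraints at each iteration can be sketched as follows (a hedged toy, not the authors' method or its guarantees): a Frank-Wolfe-style loop over the probability simplex in which each step penalizes only a random sample of many linear constraints. The instance, penalty weight, and step schedule are all assumptions made for illustration.

```python
import random

random.seed(0)
d, n = 3, 50  # dimension, number of constraints (n >> d in spirit)
# Hypothetical instance: random halfspace constraints a_i . x <= b_i
A = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]
b = [0.5] * n
target = [1.0, 0.0, 0.0]  # minimize ||x - target||^2 over the simplex

def grad(x, batch):
    """Gradient of the objective plus a penalty on a sampled constraint subset."""
    g = [2 * (x[k] - target[k]) for k in range(d)]
    rho = 10.0  # assumed penalty weight
    for i in batch:
        viol = sum(A[i][k] * x[k] for k in range(d)) - b[i]
        if viol > 0:  # only violated sampled constraints contribute
            for k in range(d):
                g[k] += rho * A[i][k]
    return g

x = [1.0 / d] * d  # start at the simplex center
for t in range(200):
    batch = random.sample(range(n), 5)  # process only a subset per iteration
    g = grad(x, batch)
    # Linear minimization over the simplex picks the vertex with smallest gradient.
    s = [0.0] * d
    s[min(range(d), key=lambda k: g[k])] = 1.0
    gamma = 2.0 / (t + 2)  # standard Frank-Wolfe step schedule
    x = [(1 - gamma) * x[k] + gamma * s[k] for k in range(d)]
print(x)
```

Because every iterate is a convex combination of simplex vertices, feasibility with respect to the simplex is maintained for free, while the sampled penalty handles the many side constraints stochastically.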
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.