Low Complexity Sequential Search with Size-Dependent Measurement Noise
- URL: http://arxiv.org/abs/2005.07814v2
- Date: Tue, 1 Sep 2020 17:55:40 GMT
- Title: Low Complexity Sequential Search with Size-Dependent Measurement Noise
- Authors: Sung-En Chiu, Tara Javidi
- Abstract summary: This paper considers a target localization problem where at any given time an agent can choose a region to query for the presence of the target in that region.
Motivated by practical applications such as initial beam alignment in array processing, heavy hitter detection in networking, and visual search in robotics, we consider practically important complexity constraints/metrics.
Two novel search strategies, $dyaPM$ and $hiePM$, are proposed.
- Score: 19.201555109949677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers a target localization problem where at any given time an
agent can choose a region to query for the presence of the target in that
region. The measurement noise is assumed to be increasing with the size of the
query region the agent chooses. Motivated by practical applications such as
initial beam alignment in array processing, heavy hitter detection in
networking, and visual search in robotics, we consider practically important
complexity constraints/metrics: \textit{time complexity}, \textit{computational
and memory complexity}, and the complexity of possible query sets in terms of
geometry and cardinality.
Two novel search strategies, $dyaPM$ and $hiePM$, are proposed. Pertinent to
the practicality of our solutions, $dyaPM$ and $hiePM$ have a connected query
geometry (i.e. query set is always a connected set) implemented with low
computational and memory complexity. Additionally, $hiePM$ has a hierarchical
structure and, hence, a further reduction in the cardinality of possible query
sets, making $hiePM$ practically suitable for applications such as beamforming
in array processing where memory limitations favor a smaller codebook size.
Through a unified analysis with Extrinsic Jensen Shannon (EJS) Divergence,
$dyaPM$ is shown to be asymptotically optimal in search time complexity
(asymptotic in both resolution (rate) and error (reliability)). On the other
hand, $hiePM$ is shown to be near-optimal in rate. In addition, both $hiePM$
and $dyaPM$ are shown to outperform prior work in the non-asymptotic regime.
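To make the setting concrete, the sketch below simulates the core mechanics the abstract describes: noisy region queries whose error probability grows with region size, a Bayesian posterior over target locations, and queries restricted to dyadic (hierarchical, connected) intervals in the spirit of $hiePM$'s small codebook. All parameter values (`p_base`, `noise_growth`) and the exact selection rule are illustrative assumptions, not the paper's algorithm.

```python
import random

def noisy_query(target, region, p_base=0.1, noise_growth=0.02):
    """Answer 'is the target in `region`?' with a flip probability that
    grows with the region size (size-dependent measurement noise).
    The noise model here is an illustrative assumption."""
    truth = target in region
    flip_p = min(0.45, p_base + noise_growth * len(region))
    return truth if random.random() > flip_p else not truth

def hierarchical_search(n, target, num_queries=300, seed=0):
    """Simplified sketch of hierarchical posterior-matching search:
    maintain a posterior over n locations, and at each step query the
    dyadic interval whose posterior mass is closest to 1/2 (echoing
    hiePM's restriction to a small set of connected query regions)."""
    random.seed(seed)
    posterior = [1.0 / n] * n
    for _ in range(num_queries):
        # Scan all dyadic intervals; pick the one with mass nearest 1/2.
        best, best_gap = None, float("inf")
        size = n
        while size >= 1:
            for start in range(0, n, size):
                mass = sum(posterior[start:start + size])
                gap = abs(mass - 0.5)
                if gap < best_gap:
                    best, best_gap = range(start, start + size), gap
            size //= 2
        region = best
        ans = noisy_query(target, region)
        flip_p = min(0.45, 0.1 + 0.02 * len(region))
        # Bayes update given the (possibly flipped) answer.
        for i in range(n):
            agrees = (i in region) == ans
            posterior[i] *= (1 - flip_p) if agrees else flip_p
        total = sum(posterior)
        posterior = [p / total for p in posterior]
    # Return the maximum a posteriori location estimate.
    return max(range(n), key=posterior.__getitem__)
```

Note the trade-off the abstract highlights: restricting queries to dyadic intervals shrinks the codebook (here, $O(n)$ intervals instead of exponentially many subsets) at a modest cost in how closely the selected region's mass can match $1/2$.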
Related papers
- Enhanced Deterministic Approximation Algorithm for Non-monotone Submodular Maximization under Knapsack Constraint with Linear Query Complexity [0.0]
We improve the approximation factor of the fastest deterministic algorithm from $6+\epsilon$ to $5+\epsilon$.
Our technique is based on optimizing the performance of two components: the threshold greedy subroutine and the building of two disjoint sets as candidate solutions.
arXiv Detail & Related papers (2024-05-20T02:24:58Z) - Robust Approximation Algorithms for Non-monotone $k$-Submodular
Maximization under a Knapsack Constraint [0.0]
Two deterministic approximation algorithms are presented for the problem of non-monotone $k$-submodular maximization under a knapsack constraint.
Our algorithms provide constant approximation ratios within only $O(nk)$ query complexity for the non-monotone objective.
arXiv Detail & Related papers (2023-09-21T12:42:52Z) - Linear Query Approximation Algorithms for Non-monotone Submodular
Maximization under Knapsack Constraint [16.02833173359407]
This work introduces two constant-factor approximation algorithms with linear query complexity for non-monotone submodular maximization over a ground set of size $n$ subject to a knapsack constraint.
$\mathsf{DLA}$ is a deterministic algorithm that provides an approximation factor of $6+\epsilon$, while $\mathsf{RLA}$ is a randomized algorithm with an approximation factor of $4+\epsilon$.
arXiv Detail & Related papers (2023-05-17T15:27:33Z) - Optimal Horizon-Free Reward-Free Exploration for Linear Mixture MDPs [60.40452803295326]
We propose a new reward-free algorithm for learning linear Markov decision processes (MDPs)
At the core of our algorithm is uncertainty-weighted value-targeted regression with exploration-driven pseudo-reward.
We show that our algorithm only needs to explore $\tilde{O}(d^2\varepsilon^{-2})$ episodes to find an $\varepsilon$-optimal policy.
arXiv Detail & Related papers (2023-03-17T17:53:28Z) - PRECISION: Decentralized Constrained Min-Max Learning with Low
Communication and Sample Complexities [25.153506493249854]
We present an adaptive multi-agent learning technique for min-max optimization problems.
We also propose an algorithm called PRECISION that enjoys a reduction in the number of iterations.
arXiv Detail & Related papers (2023-03-05T00:26:10Z) - The Hypervolume Indicator Hessian Matrix: Analytical Expression,
Computational Time Complexity, and Sparsity [4.523133864190258]
This paper establishes the analytical expression of the Hessian matrix of the mapping from a (fixed size) collection of $n$ points in the $d$-dimensional decision space to the scalar hypervolume indicator value.
The Hessian matrix plays a crucial role in second-order methods, such as the Newton-Raphson optimization method, and it can be used for the verification of local optimal sets.
arXiv Detail & Related papers (2022-11-08T11:24:18Z) - Under-bagging Nearest Neighbors for Imbalanced Classification [63.026765294759876]
We propose an ensemble learning algorithm called \textit{under-bagging $k$-NN} for imbalanced classification problems.
arXiv Detail & Related papers (2021-09-01T14:10:38Z) - Fuzzy Clustering with Similarity Queries [56.96625809888241]
The fuzzy or soft objective is a popular generalization of the well-known $k$-means problem.
We show that by making few queries, the problem becomes easier to solve.
arXiv Detail & Related papers (2021-06-04T02:32:26Z) - Linear Bandit Algorithms with Sublinear Time Complexity [67.21046514005029]
We propose to accelerate existing linear bandit algorithms to achieve per-step time complexity sublinear in the number of arms $K$.
We show that our proposed algorithms can achieve $O(K^{1-\alpha(T)})$ per-step complexity for some $\alpha(T) > 0$ and $\widetilde{O}(\sqrt{T})$ regret, where $T$ is the time horizon.
arXiv Detail & Related papers (2021-03-03T22:42:15Z) - Average-case Complexity of Teaching Convex Polytopes via Halfspace
Queries [55.28642461328172]
We show that the average-case teaching complexity is $\Theta(d)$, which is in sharp contrast to the worst-case teaching complexity of $\Theta(n)$.
Our insights allow us to establish a tight bound on the average-case complexity for $\phi$-separable dichotomies.
arXiv Detail & Related papers (2020-06-25T19:59:24Z) - Computationally efficient sparse clustering [67.95910835079825]
We provide a finite sample analysis of a new clustering algorithm based on PCA.
We show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \to \infty$.
arXiv Detail & Related papers (2020-05-21T17:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.