A Parallel Implementation of Computing Mean Average Precision
- URL: http://arxiv.org/abs/2206.09504v1
- Date: Sun, 19 Jun 2022 23:23:52 GMT
- Title: A Parallel Implementation of Computing Mean Average Precision
- Authors: Beinan Wang
- Abstract summary: Mean Average Precision (mAP) has been widely used for evaluating the quality of object detectors.
Current implementations can only count true positives (TP's) and false positives (FP's) for one class at a time.
We propose a parallelized alternative that can process mini-batches of detected bounding boxes.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mean Average Precision (mAP) has been widely used for evaluating the quality
of object detectors, but an efficient implementation is still absent. Current
implementations can only count true positives (TP's) and false positives (FP's)
for one class at a time by looping through every detection of that class
sequentially. Not only are these approaches inefficient, but they are also
inconvenient for reporting validation mAP during training. We propose a
parallelized alternative that can process mini-batches of detected bounding
boxes (DTBB's) and ground truth bounding boxes (GTBB's) as inference goes such
that mAP can be instantly calculated after inference is finished. Loops and
control statements in sequential implementations are replaced with extensive
uses of broadcasting, masking, and indexing. All operators involved are
supported by popular machine learning frameworks such as PyTorch and
TensorFlow. As a result, our implementation is much faster and can easily fit
into typical training routines. A PyTorch version of our implementation is
available at https://github.com/bwangca/fast-map.
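The core idea — replacing the per-detection loop with cumulative sums over sorted, masked tensors — can be illustrated with a minimal sketch. This is not the authors' code (the official PyTorch version is in the linked repository); the function name and the assumption that detections are already matched to ground truth (a per-detection TP flag) are simplifications for illustration.

```python
import torch

def average_precision(scores: torch.Tensor, tp: torch.Tensor, n_gt: int) -> torch.Tensor:
    """AP for one class, computed without loops over detections.

    scores: (N,) detection confidences
    tp:     (N,) 1.0 if the detection is a true positive, else 0.0
    n_gt:   number of ground-truth boxes for this class
    """
    order = torch.argsort(scores, descending=True)
    tp = tp[order]
    tp_cum = torch.cumsum(tp, dim=0)        # TP count at each rank
    fp_cum = torch.cumsum(1.0 - tp, dim=0)  # FP count at each rank
    precision = tp_cum / (tp_cum + fp_cum)
    recall = tp_cum / max(n_gt, 1)
    # Make precision monotonically non-increasing (all-point interpolation),
    # again without a loop: a reversed running maximum.
    precision = torch.flip(torch.cummax(torch.flip(precision, [0]), 0).values, [0])
    # Integrate precision over recall.
    recall = torch.cat([torch.zeros(1), recall])
    return torch.sum((recall[1:] - recall[:-1]) * precision)
```

Every step above is a tensor operation (`argsort`, `cumsum`, `cummax`, indexing), so the same computation runs unchanged on GPU and composes with batched inference, which is the property the paper exploits.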
Related papers
- Prism: Efficient Test-Time Scaling via Hierarchical Search and Self-Verification for Discrete Diffusion Language Models [96.0074341403456]
Inference-time compute has re-emerged as a practical way to improve LLM reasoning.
Most test-time scaling (TTS) algorithms rely on autoregressive decoding.
We propose Prism, an efficient TTS framework for dLLMs.
arXiv Detail & Related papers (2026-02-02T09:14:51Z)
- Investigating task-specific prompts and sparse autoencoders for activation monitoring [0.0]
Internal activations of language models encode additional information that could be useful for this.
Recent work has proposed several approaches which may improve on naive linear probing.
We develop and test novel refinements of these methods and compare them against each other.
arXiv Detail & Related papers (2025-04-28T21:28:17Z)
- Loop unrolling: formal definition and application to testing [33.432652829284244]
Testing processes usually aim at high coverage, but loops severely limit coverage ambitions since the number of iterations is generally not predictable.
This article provides a formal definition and a set of formal properties of unrolling.
Using this definition as the conceptual basis, we have applied an unrolling strategy to an existing automated testing framework.
arXiv Detail & Related papers (2025-02-21T15:36:21Z)
- BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition [31.096125530322933]
In real-world scenarios like traffic and energy, massive time-series data with missing values and noises are widely observed, even sampled irregularly.
While many imputation methods have been proposed, most of them work with a local horizon.
Almost all methods assume the observations are sampled at regular time stamps, and fail to handle complex irregular sampled time series.
arXiv Detail & Related papers (2023-08-28T21:17:12Z)
- (Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy [8.010528849585937]
We derive an (almost) guaranteed upper bound on the error of deep neural networks under distribution shift using unlabeled test data.
In particular, our bound requires a simple, intuitive condition which is well justified by prior empirical works.
We expect this loss can serve as a drop-in replacement for future methods which require maximizing multiclass disagreement.
arXiv Detail & Related papers (2023-06-01T03:22:15Z)
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
- Unified Functional Hashing in Automatic Machine Learning [58.77232199682271]
We show that large efficiency gains can be obtained by employing a fast unified functional hash.
Our hash is "functional" in that it identifies equivalent candidates even if they were represented or coded differently.
We show dramatic improvements on multiple AutoML domains, including neural architecture search and algorithm discovery.
arXiv Detail & Related papers (2023-02-10T18:50:37Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- Intersection of Parallels as an Early Stopping Criterion [64.8387564654474]
We propose a method to spot an early stopping point in the training iterations without the need for a validation set.
For a wide range of learning rates, our method, called Cosine-Distance Criterion (CDC), leads to better generalization on average than all the methods that we compare against.
arXiv Detail & Related papers (2022-08-19T19:42:41Z)
- General Cutting Planes for Bound-Propagation-Based Neural Network Verification [144.7290035694459]
We generalize the bound propagation procedure to allow the addition of arbitrary cutting plane constraints.
We find that MIP solvers can generate high-quality cutting planes for strengthening bound-propagation-based verifiers.
Our method is the first verifier that can completely solve the oval20 benchmark and verify twice as many instances on the oval21 benchmark.
arXiv Detail & Related papers (2022-08-11T10:31:28Z)
- Formalizing Preferences Over Runtime Distributions [25.899669128438322]
We use a utility-theoretic approach to characterize the scoring functions that describe preferences over algorithms.
We show how to leverage a maximum-entropy approach for modeling underspecified captime distributions.
arXiv Detail & Related papers (2022-05-25T19:43:48Z)
- Efficient algorithms for implementing incremental proximal-point methods [0.3263412255491401]
In machine learning, model training algorithms observe a small portion of the training set in each computational step.
Several streams of research attempt to exploit more information about the cost functions than just their gradients via the well-known proximal operators.
We devise a novel algorithmic framework, which exploits convex duality theory to achieve both algorithmic efficiency and software modularity of proximal operators.
arXiv Detail & Related papers (2022-05-03T12:43:26Z)
- Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z)
- Convolutional Sparse Coding Fast Approximation with Application to Seismic Reflectivity Estimation [9.005280130480308]
We propose an accelerated version of the classic iterative thresholding algorithm that produces a good approximation of the convolutional sparse code within 2-5 iterations.
The performance of the proposed solution is demonstrated via the seismic inversion problem in both synthetic and real data scenarios.
arXiv Detail & Related papers (2021-06-29T12:19:07Z)
- Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Complete and Incomplete Neural Network Verification [151.62491805851107]
We develop $\beta$-CROWN, a bound propagation based verifier that can fully encode per-neuron splits.
$\beta$-CROWN is close to three orders of magnitude faster than LP-based BaB methods for robustness verification.
By terminating BaB early, our method can also be used for incomplete verification.
arXiv Detail & Related papers (2021-03-11T11:56:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.