Random Boxes Are Open-world Object Detectors
- URL: http://arxiv.org/abs/2307.08249v1
- Date: Mon, 17 Jul 2023 05:08:32 GMT
- Title: Random Boxes Are Open-world Object Detectors
- Authors: Yanghao Wang, Zhongqi Yue, Xian-Sheng Hua, Hanwang Zhang
- Abstract summary: We show that classifiers trained with random region proposals achieve state-of-the-art Open-world Object Detection (OWOD).
We propose RandBox, a Fast R-CNN based architecture trained on random proposals at each training iteration.
RandBox significantly outperforms the previous state-of-the-art in all metrics.
- Score: 71.86454597677387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We show that classifiers trained with random region proposals achieve
state-of-the-art Open-world Object Detection (OWOD): they can not only maintain
the accuracy of the known objects (w/ training labels), but also considerably
improve the recall of unknown ones (w/o training labels). Specifically, we
propose RandBox, a Fast R-CNN based architecture trained on random proposals at
each training iteration, surpassing existing Faster R-CNN and Transformer based
OWOD. Its effectiveness stems from the following two benefits introduced by
randomness. First, as the randomization is independent of the distribution of
the limited known objects, the random proposals become the instrumental
variable that prevents the training from being confounded by the known objects.
Second, the unbiased training encourages more proposal explorations by using
our proposed matching score that does not penalize the random proposals whose
prediction scores do not match the known objects. On two benchmarks:
Pascal-VOC/MS-COCO and LVIS, RandBox significantly outperforms the previous
state-of-the-art in all metrics. We also detail the ablations on randomization
and loss designs. Codes are available at https://github.com/scuwyh2000/RandBox.
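As a concrete illustration of the core idea, the sketch below samples a fresh set of uniformly random box proposals at each training iteration. The function name and the center/size parametrization are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch

def sample_random_boxes(num_boxes: int, img_w: int, img_h: int) -> torch.Tensor:
    """Draw boxes uniformly at random; returns (num_boxes, 4) as (x1, y1, x2, y2)."""
    cx = torch.rand(num_boxes) * img_w   # uniform centers
    cy = torch.rand(num_boxes) * img_h
    w = torch.rand(num_boxes) * img_w    # uniform widths/heights
    h = torch.rand(num_boxes) * img_h
    x1 = (cx - w / 2).clamp(0, img_w)
    y1 = (cy - h / 2).clamp(0, img_h)
    x2 = (cx + w / 2).clamp(0, img_w)
    y2 = (cy + h / 2).clamp(0, img_h)
    return torch.stack([x1, y1, x2, y2], dim=1)

# A fresh set is drawn every iteration (sizes here are illustrative).
proposals = sample_random_boxes(500, img_w=800, img_h=600)
print(proposals.shape)  # torch.Size([500, 4])
```

Because the boxes are redrawn independently of the annotations at every iteration, the proposal distribution cannot adapt to the limited known objects, which is the independence property the abstract credits for avoiding confounding.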
Related papers
- Rolling the dice for better deep learning performance: A study of randomness techniques in deep neural networks [4.643954670642798]
This paper investigates how various randomization techniques are used in Deep Neural Networks (DNNs).
It categorizes the techniques into four types: adding noise to the loss function, masking random gradient updates, data augmentation, and weight generalization.
The complete implementation and dataset are available on GitHub.
arXiv Detail & Related papers (2024-04-05T10:02:32Z)
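As a toy illustration of two of the four categories surveyed above, the hedged sketch below injects noise into the loss computation and randomly masks gradient updates; it is a generic construction, not code from the paper.

```python
import torch

# Toy model and batch; all names are illustrative.
model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

opt.zero_grad()
logits = model(x) + 0.1 * torch.randn(32, 2)  # (1) noise injected into the loss computation
loss = torch.nn.functional.cross_entropy(logits, y)
loss.backward()
for p in model.parameters():
    # (2) randomly mask gradient updates (keep each entry with probability 0.9)
    p.grad.mul_(torch.bernoulli(torch.full_like(p.grad, 0.9)))
opt.step()
```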
- MomentDiff: Generative Video Moment Retrieval from Random to Real [71.40038773943638]
We provide a generative diffusion-based framework called MomentDiff.
MomentDiff simulates a typical human retrieval process from random browsing to gradual localization.
We show that MomentDiff consistently outperforms state-of-the-art methods on three public benchmarks.
arXiv Detail & Related papers (2023-07-06T09:12:13Z)
- Machine Learning needs Better Randomness Standards: Randomised Smoothing and PRNG-based attacks [14.496582479888765]
We consider whether attackers can compromise a machine learning system using only the randomness on which such systems commonly rely.
We demonstrate an entirely novel attack, in which an attacker backdoors the supplied randomness to falsely certify either an overestimate or an underestimate of robustness by up to 81 times.
We advocate updating the NIST guidelines on random number testing to make them more appropriate for safety-critical and security-critical machine-learning applications.
arXiv Detail & Related papers (2023-06-24T19:50:08Z)
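To make the threat above concrete, here is a toy construction of my own (not the paper's actual attack): a certification-style routine that trusts an externally supplied noise source can be fooled by a backdoored sampler that secretly returns near-zero noise.

```python
import numpy as np

def honest_noise(shape, sigma=0.5):
    return np.random.normal(0.0, sigma, shape)

def backdoored_noise(shape, sigma=0.5):
    return np.random.normal(0.0, sigma * 1e-3, shape)  # secretly near-zero noise

def noisy_agreement(f, x, noise_fn, n=1000):
    """Fraction of noisy copies on which f keeps its clean prediction."""
    base = f(x)
    return sum(f(x + noise_fn(x.shape)) == base for _ in range(n)) / n

f = lambda x: int(x.sum() > 0)      # stand-in classifier
x = np.full(10, 0.05)               # point close to the decision boundary
print(noisy_agreement(f, x, honest_noise))      # modest agreement -> small certificate
print(noisy_agreement(f, x, backdoored_noise))  # ~1.0 -> falsely large certificate
```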
- Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets while handling two shifts: domain shift and category shift.
The main challenge is to correctly distinguish unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training a target-adapted classifier for the known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z)
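The single-threshold rejection step criticized in the summary above can be sketched in a few lines; the function name and threshold value below are illustrative assumptions.

```python
import numpy as np

def reject_unknown(probs: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """probs: (N, C) known-class probabilities; returns class id, or -1 for unknown."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    return np.where(conf >= tau, pred, -1)

probs = np.array([[0.9, 0.1],    # confidently class 0
                  [0.55, 0.45],  # ambiguous -> unknown under tau=0.6
                  [0.4, 0.6]])   # class 1
print(reject_unknown(probs, tau=0.6))  # [ 0 -1  1]
```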
- Dual Lottery Ticket Hypothesis [71.95937879869334]
The Lottery Ticket Hypothesis (LTH) provides a novel view for investigating sparse network training and how sparse networks maintain capacity.
In this work, we regard the winning ticket from LTH as a subnetwork in a trainable condition and take its performance as our benchmark.
We propose a simple sparse network training strategy, Random Sparse Network Transformation (RST), to substantiate our Dual Lottery Ticket Hypothesis (DLTH).
arXiv Detail & Related papers (2022-03-08T18:06:26Z)
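A rough sketch of the random-mask half of such a strategy follows: train only a randomly selected sparse subnetwork. RST's gradual regularization schedule is omitted and all names are illustrative.

```python
import torch

layer = torch.nn.Linear(100, 10)
mask = (torch.rand_like(layer.weight) < 0.2).float()  # random ~20%-density mask
with torch.no_grad():
    layer.weight.mul_(mask)                           # prune to the random subnetwork

opt = torch.optim.SGD(layer.parameters(), lr=0.1)
x, y = torch.randn(64, 100), torch.randint(0, 10, (64,))
for _ in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(layer(x), y)
    loss.backward()
    layer.weight.grad.mul_(mask)  # only the randomly chosen weights are updated
    opt.step()
```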
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the splitting-noise (SSN) certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
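For contrast, the sketch below implements the generic uniform additive smoothing baseline that DSSN improves on, i.e. the standard randomized construction rather than the paper's deterministic method; the classifier and constants are stand-ins.

```python
import numpy as np

def smoothed_predict(f, x, lam=0.5, n=1000):
    """Majority vote of base classifier f over uniformly perturbed copies of x."""
    votes = [f(x + np.random.uniform(-lam, lam, x.shape)) for _ in range(n)]
    return np.bincount(votes).argmax()

f = lambda x: int(x.mean() > 0)  # stand-in base classifier
x = np.full(16, 0.1)
print(smoothed_predict(f, x))    # usually 1; the vote margin drives the certificate
```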
- On the robustness of randomized classifiers to adversarial examples [11.359085303200981]
We introduce a new notion of robustness for randomized classifiers, enforcing local Lipschitzness using probability metrics.
We show that our results are applicable to a wide range of machine learning models under mild hypotheses.
All the robust models we trained can simultaneously achieve state-of-the-art accuracy.
arXiv Detail & Related papers (2021-02-22T10:16:58Z)
- Probabilistic Anchor Assignment with IoU Prediction for Object Detection [9.703212439661097]
In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can significantly affect a model's performance.
We propose a novel anchor assignment strategy that adaptively separates anchors into positive and negative samples for a ground truth bounding box according to the model's learning status.
arXiv Detail & Related papers (2020-07-16T04:26:57Z)
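One common way to realize such adaptive separation, sketched below on assumed synthetic scores, is to fit a two-component Gaussian mixture to per-anchor scores and treat the high-score component as positive; the details here are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic per-anchor scores for one ground-truth box (assumed data).
scores = np.concatenate([np.random.normal(0.2, 0.05, 80),   # background-like anchors
                         np.random.normal(0.8, 0.05, 20)])  # well-aligned anchors
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
hi = gmm.means_.ravel().argmax()                 # component with the higher mean
positive = gmm.predict(scores.reshape(-1, 1)) == hi
print(int(positive.sum()), "anchors assigned as positive")
```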
- RAIN: A Simple Approach for Robust and Accurate Image Classification Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
arXiv Detail & Related papers (2020-04-24T02:03:56Z)
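Input randomization of this kind can be sketched generically, e.g. a random downscale followed by random padding back to the original size; RAIN's exact modules may differ, so treat this as an illustrative stand-in.

```python
import torch
import torch.nn.functional as F

def randomize_input(x: torch.Tensor, max_shrink: int = 8) -> torch.Tensor:
    """x: (N, C, H, W) with H == W. Random downscale, then random zero-padding back."""
    n, c, h, w = x.shape
    s = int(torch.randint(h - max_shrink, h + 1, (1,)))      # random target size
    x = F.interpolate(x, size=(s, s), mode="bilinear", align_corners=False)
    left = int(torch.randint(0, h - s + 1, (1,)))
    top = int(torch.randint(0, h - s + 1, (1,)))
    return F.pad(x, (left, h - s - left, top, h - s - top))  # pad back to (H, W)

img = torch.rand(1, 3, 224, 224)
print(randomize_input(img).shape)  # torch.Size([1, 3, 224, 224])
```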