Advanced Efficient Strategy for Detection of Dark Objects Based on
Spiking Network with Multi-Box Detection
- URL: http://arxiv.org/abs/2310.06370v1
- Date: Tue, 10 Oct 2023 07:20:37 GMT
- Title: Advanced Efficient Strategy for Detection of Dark Objects Based on
Spiking Network with Multi-Box Detection
- Authors: Munawar Ali, Baoqun Yin, Hazrat Bilal, Aakash Kumar, Ali Muhammad,
Avinash Rohra
- Abstract summary: The study proposes a combination of spiking and normal convolution layers as an energy-efficient and reliable object detector model.
With state-of-the-art Python libraries, the spiking layers can be trained efficiently.
The proposed spike convolutional object detector (SCOD) has been evaluated on the VOC and Ex-Dark datasets.
- Score: 2.9659663708260777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several deep learning algorithms have shown impressive performance on existing object detection tasks, but recognizing objects in dark conditions remains the biggest challenge. Moreover, existing techniques either fail to detect such objects or recognize them slowly, resulting in significant performance losses. An improved and accurate detection approach is therefore required to address this difficulty. This study proposes a combination of spiking and normal convolution layers as an energy-efficient and reliable object detector model. The proposed model is split into two parts. The first part is a feature extractor built on a pre-trained VGG16, and the second part combines spiking and normal convolutional layers to detect bounding boxes in images. A pre-trained model is used to classify the detected objects. With state-of-the-art Python libraries, the spiking layers can be trained efficiently. The proposed spike convolutional object detector (SCOD) has been evaluated on the VOC and Ex-Dark datasets. SCOD reaches 66.01% mAP on the 20 object classes of VOC-12 and 41.25% mAP on the 12 classes of Ex-Dark. SCOD requires 14 GFLOPs for its forward-pass computation. Experimental results indicate superior mAP on the VOC dataset compared to Tiny YOLO, Spike YOLO, YOLO-LITE, Tinier YOLO, and Center of loc+Xception.
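The abstract describes the architecture only at a high level (a pre-trained VGG16 feature extractor followed by a mix of spiking and normal convolutional layers that predict multi-box outputs), without naming the exact layer configuration or spiking library. A minimal, hypothetical PyTorch sketch of that two-part structure is below; the layer widths, the integrate-and-fire threshold, and the number of prior boxes are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical SCOD-style sketch: VGG16 backbone + mixed spiking/normal conv
# detection head with SSD-style multi-box outputs. All sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import vgg16


class SpikingConv(nn.Module):
    """Conv layer followed by a simple integrate-and-fire step; a sigmoid
    surrogate gradient keeps the layer trainable end-to-end."""

    def __init__(self, in_ch, out_ch, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.threshold = threshold

    def forward(self, x):
        membrane = self.conv(x)
        spikes = (membrane >= self.threshold).float()
        # Straight-through estimator: binary spikes forward, smooth gradient backward.
        surrogate = torch.sigmoid(membrane - self.threshold)
        return spikes + (surrogate - surrogate.detach())


class SCODSketch(nn.Module):
    def __init__(self, num_classes=20, priors_per_cell=6):
        super().__init__()
        # Part 1: pre-trained VGG16 convolutional backbone (downloads ImageNet weights).
        self.backbone = vgg16(weights="IMAGENET1K_V1").features
        # Part 2: mix of spiking and normal conv layers for the detection head.
        self.head = nn.Sequential(
            SpikingConv(512, 256),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            SpikingConv(256, 256),
        )
        # Per prior box: 4 box offsets + per-class scores (multi-box output).
        self.predictor = nn.Conv2d(
            256, priors_per_cell * (4 + num_classes), kernel_size=3, padding=1
        )

    def forward(self, x):
        return self.predictor(self.head(self.backbone(x)))


if __name__ == "__main__":
    model = SCODSketch()
    out = model(torch.randn(1, 3, 300, 300))
    print(out.shape)  # torch.Size([1, 144, 9, 9]) for a 300x300 input
```

The 20-class default mirrors the VOC-12 setting mentioned in the abstract; decoding the raw multi-box tensor into boxes (prior matching, NMS) is omitted for brevity.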
Related papers
- SOOD++: Leveraging Unlabeled Data to Boost Oriented Object Detection [59.868772767818975]
We propose a simple yet effective Semi-supervised Oriented Object Detection method termed SOOD++.
Specifically, we observe that objects in aerial images usually have arbitrary orientations, small scales, and a tendency to aggregate.
Extensive experiments conducted on various multi-oriented object datasets under various labeled settings demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-07-01T07:03:51Z) - SARDet-100K: Towards Open-Source Benchmark and ToolKit for Large-Scale SAR Object Detection [79.23689506129733]
We establish a new benchmark dataset and an open-source method for large-scale SAR object detection.
Our dataset, SARDet-100K, is the result of intensively surveying, collecting, and standardizing 10 existing SAR detection datasets.
To the best of our knowledge, SARDet-100K is the first COCO-level large-scale multi-class SAR object detection dataset ever created.
arXiv Detail & Related papers (2024-03-11T09:20:40Z) - Improved Region Proposal Network for Enhanced Few-Shot Object Detection [23.871860648919593]
Few-shot object detection (FSOD) methods have emerged as a solution to the limitations of classic object detection approaches.
We develop a semi-supervised algorithm to detect and then utilize unlabeled novel objects as positive samples during the FSOD training stage.
Our improved hierarchical sampling strategy for the region proposal network (RPN) also improves the detection model's perception of large objects.
arXiv Detail & Related papers (2023-08-15T02:35:59Z) - PE-YOLO: Pyramid Enhancement Network for Dark Object Detection [9.949687351946038]
We propose a pyramid enhancement network (PENet) and combine it with YOLOv3 to build a dark object detection framework named PE-YOLO.
PE-YOLO adopts an end-to-end joint training approach and only uses normal detection loss to simplify the training process.
Results: PE-YOLO achieves 78.0% mAP at 53.6 FPS and can adapt to object detection under different low-light conditions.
arXiv Detail & Related papers (2023-07-20T15:25:55Z) - USD: Unknown Sensitive Detector Empowered by Decoupled Objectness and
Segment Anything Model [14.080744645704751]
Open World Object Detection (OWOD) is a novel and challenging computer vision task.
We propose a simple yet effective learning strategy, namely Decoupled Objectness Learning (DOL), which divides the learning of these two boundaries into decoder layers.
We also introduce an Auxiliary Supervision Framework (ASF) that uses pseudo-labeling and soft-weighting strategies to alleviate the negative impact of noise.
arXiv Detail & Related papers (2023-06-04T06:42:09Z) - Long Range Object-Level Monocular Depth Estimation for UAVs [0.0]
We propose several novel extensions to state-of-the-art methods for monocular object detection from images at long range.
Firstly, we propose Sigmoid and ReLU-like encodings when modeling depth estimation as a regression task.
Secondly, we frame the depth estimation as a classification problem and introduce a Soft-Argmax function in the calculation of the training loss (a Soft-Argmax sketch appears after this list).
arXiv Detail & Related papers (2023-02-17T15:26:04Z) - EAutoDet: Efficient Architecture Search for Object Detection [110.99532343155073]
The EAutoDet framework can discover practical backbone and FPN architectures for object detection in 1.4 GPU-days.
We propose a kernel-reusing technique that shares the weights of candidate operations on one edge and consolidates them into one convolution (a weight-sharing sketch appears after this list).
In particular, the discovered architectures surpass state-of-the-art object detection NAS methods and achieve 40.1 mAP with 120 FPS and 49.2 mAP with 41.3 FPS on COCO test-dev set.
arXiv Detail & Related papers (2022-03-21T05:56:12Z) - Benchmarking Deep Models for Salient Object Detection [67.07247772280212]
We construct a general SALient Object Detection (SALOD) benchmark to conduct a comprehensive comparison among several representative SOD methods.
In the above experiments, we find that existing loss functions are usually specialized for some metrics but report inferior results on the others.
We propose a novel Edge-Aware (EA) loss that promotes deep networks to learn more discriminative features by integrating both pixel- and image-level supervision signals.
arXiv Detail & Related papers (2022-02-07T03:43:16Z) - Seeing BDD100K in dark: Single-Stage Night-time Object Detection via
Continual Fourier Contrastive Learning [3.4012007729454816]
Night-time object detection has been studied only sparsely, and the limited available papers use non-uniform evaluation protocols.
In this paper, we bridge these three gaps:
the lack of a uniform evaluation protocol (using a single-stage detector, for its efficacy and efficiency);
a choice of dataset for benchmarking night-time object detection; and
a novel method to address the limitations of current alternatives.
arXiv Detail & Related papers (2021-12-06T09:28:45Z) - EDN: Salient Object Detection via Extremely-Downsampled Network [66.38046176176017]
We introduce an Extremely-Downsampled Network (EDN), which employs an extreme downsampling technique to effectively learn a global view of the whole image.
Experiments demonstrate that EDN achieves state-of-the-art performance at real-time speed.
arXiv Detail & Related papers (2020-12-24T04:23:48Z) - One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
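The depth-as-classification formulation in the UAV depth estimation entry above recovers a continuous value from per-bin scores through a Soft-Argmax. A minimal sketch under assumed settings follows; the number of bins, the depth range, and the temperature are illustrative and not taken from that paper.

```python
# Minimal Soft-Argmax for depth-as-classification (bins/temperature are assumptions).
import torch


def soft_argmax_depth(logits, depth_bins, temperature=1.0):
    """Turn per-bin logits into a continuous, differentiable depth estimate.

    logits:     (batch, num_bins) raw scores from the network
    depth_bins: (num_bins,) centre depth of each bin
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    return (probs * depth_bins).sum(dim=-1)


if __name__ == "__main__":
    bins = torch.linspace(1.0, 200.0, steps=100)   # hypothetical 1-200 m range
    logits = torch.randn(4, 100)                   # dummy network outputs
    print(soft_argmax_depth(logits, bins).shape)   # torch.Size([4])
```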
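The kernel-reusing idea in the EAutoDet entry above shares one weight tensor between candidate convolutions on a supernet edge and merges them into a single convolution. The sketch below is only a hypothetical illustration of that weight-sharing pattern; the candidate kernel sizes and the architecture-weight parameterization are assumptions, not EAutoDet's exact formulation.

```python
# Hypothetical weight-sharing edge: 3x3 and 5x5 candidates reuse one 5x5 tensor
# and are consolidated into a single convolution via softmax-mixed kernels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedKernelEdge(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 5, 5) * 0.01)
        self.alpha = nn.Parameter(torch.zeros(2))  # architecture weights for {3x3, 5x5}

    def forward(self, x):
        a = torch.softmax(self.alpha, dim=0)
        # The 3x3 candidate is the centre crop of the shared 5x5 weight,
        # zero-padded back to 5x5 so both candidates merge into one kernel.
        k3 = F.pad(self.weight[:, :, 1:4, 1:4], (1, 1, 1, 1))
        merged = a[0] * k3 + a[1] * self.weight
        return F.conv2d(x, merged, padding=2)


if __name__ == "__main__":
    edge = SharedKernelEdge(16, 32)
    print(edge(torch.randn(1, 16, 28, 28)).shape)  # torch.Size([1, 32, 28, 28])
```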