Multi-Point Positional Insertion Tuning for Small Object Detection
- URL: http://arxiv.org/abs/2412.18090v1
- Date: Tue, 24 Dec 2024 02:04:47 GMT
- Title: Multi-Point Positional Insertion Tuning for Small Object Detection
- Authors: Kanoko Goto, Takumi Karasawa, Takumi Hirose, Rei Kawakami, Nakamasa Inoue
- Abstract summary: Small object detection aims to localize and classify small objects within images.
Finetuning pretrained object detection models is computationally expensive and memory intensive.
This paper introduces multi-point positional insertion (MPI) tuning, a parameter-efficient finetuning (PEFT) method for small object detection.
- Score: 10.852047082856487
- License:
- Abstract: Small object detection aims to localize and classify small objects within images. With recent advances in large-scale vision-language pretraining, finetuning pretrained object detection models has emerged as a promising approach. However, finetuning large models is computationally expensive and memory intensive. To address this issue, this paper introduces multi-point positional insertion (MPI) tuning, a parameter-efficient finetuning (PEFT) method for small object detection. Specifically, MPI incorporates multiple positional embeddings into a frozen pretrained model, enabling the efficient detection of small objects by providing precise positional information to latent features. Through experiments, we demonstrated the effectiveness of the proposed method on the SODA-D dataset. MPI performed comparably to conventional PEFT methods, including CoOp and VPT, while significantly reducing the number of parameters that need to be tuned.
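The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of the general idea: a frozen stack of pretrained encoder layers with a learnable positional embedding added to the latent features before each layer, so that only the inserted embeddings are trained. The class and parameter names (MultiPointPositionalInsertion, num_tokens, d_model) and the insertion points are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiPointPositionalInsertion(nn.Module):
    """Toy sketch: add a learnable positional embedding to the latent features
    at several points of a frozen, pretrained encoder stack. Only the inserted
    embeddings receive gradients; the pretrained weights stay frozen."""

    def __init__(self, frozen_layers: nn.ModuleList, d_model: int = 256, num_tokens: int = 100):
        super().__init__()
        self.layers = frozen_layers
        for p in self.layers.parameters():   # freeze the pretrained detector weights
            p.requires_grad = False
        # one learnable positional embedding per insertion point (shape is an assumption)
        self.pos_embeds = nn.ParameterList(
            [nn.Parameter(torch.zeros(1, num_tokens, d_model)) for _ in range(len(frozen_layers))]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, d_model) latent features from a frozen backbone
        for layer, pos in zip(self.layers, self.pos_embeds):
            x = layer(x + pos)               # inject positional information, then run the frozen layer
        return x

# Usage with stand-in transformer encoder layers; only mpi.pos_embeds are trainable.
layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True) for _ in range(3)]
)
mpi = MultiPointPositionalInsertion(layers, d_model=256, num_tokens=100)
features = mpi(torch.randn(2, 100, 256))
```

In this sketch the trainable parameters are just the embeddings (3 x 100 x 256 values here), which is the kind of reduction relative to full finetuning that PEFT methods such as MPI, CoOp, and VPT aim for.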
Related papers
- Efficient Oriented Object Detection with Enhanced Small Object Recognition in Aerial Images [2.9138705529771123]
We present a novel enhancement to the YOLOv8 model, tailored for oriented object detection tasks.
Our model features a wavelet transform-based C2f module for capturing associative features and an Adaptive Scale Feature Pyramid (ASFP) module that leverages P2 layer details.
Our approach provides a more efficient architectural design than DecoupleNet, which has 23.3M parameters, all while maintaining detection accuracy.
arXiv Detail & Related papers (2024-12-17T05:45:48Z)
- Oriented Tiny Object Detection: A Dataset, Benchmark, and Dynamic Unbiased Learning [51.170479006249195]
We introduce a new dataset, a benchmark, and a dynamic coarse-to-fine learning scheme in this study.
Our proposed dataset, AI-TOD-R, features the smallest object sizes among all oriented object detection datasets.
We present a benchmark spanning a broad range of detection paradigms, including both fully-supervised and label-efficient approaches.
arXiv Detail & Related papers (2024-12-16T09:14:32Z)
- Boost UAV-based Object Detection via Scale-Invariant Feature Disentanglement and Adversarial Learning [18.11107031800982]
We propose to improve single-stage inference accuracy through learning scale-invariant features.
Our approach can effectively improve model accuracy and achieve state-of-the-art (SoTA) performance on two datasets.
arXiv Detail & Related papers (2024-05-24T11:40:22Z)
- Visible and Clear: Finding Tiny Objects in Difference Map [50.54061010335082]
We introduce a self-reconstruction mechanism in the detection model and observe a strong correlation between it and tiny objects.
Specifically, we insert a reconstruction head within the neck of a detector and construct a difference map between the reconstructed image and the input, which shows high sensitivity to tiny objects.
We further develop a Difference Map Guided Feature Enhancement (DGFE) module to make the representation of tiny features clearer; a toy sketch of the difference-map idea follows this entry.
arXiv Detail & Related papers (2024-05-18T12:22:26Z)
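The summary above gives only the outline of the difference-map mechanism, so here is a toy PyTorch sketch of one way such a map could be formed and used to re-weight detector features. The reconstruction head, the gating rule, and all names (DifferenceMapSketch, recon_head) are assumptions for illustration, not the paper's reconstruction head or DGFE module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferenceMapSketch(nn.Module):
    """Toy sketch: reconstruct the input image from neck features, take the
    per-pixel difference against the original input, and use that map to
    gate the features (regions that reconstruct poorly, often containing
    tiny objects, get emphasized)."""

    def __init__(self, feat_channels: int = 256):
        super().__init__()
        self.recon_head = nn.Conv2d(feat_channels, 3, kernel_size=1)  # hypothetical lightweight head

    def forward(self, image: torch.Tensor, neck_feat: torch.Tensor):
        # image: (B, 3, H, W); neck_feat: (B, C, H/8, W/8), for example
        recon = F.interpolate(self.recon_head(neck_feat), size=image.shape[-2:],
                              mode="bilinear", align_corners=False)
        diff_map = (image - recon).abs().mean(dim=1, keepdim=True)    # (B, 1, H, W)
        gate = F.interpolate(diff_map, size=neck_feat.shape[-2:],
                             mode="bilinear", align_corners=False)
        enhanced = neck_feat * (1.0 + torch.sigmoid(gate))            # emphasize high-difference regions
        return enhanced, diff_map

# Usage: enhanced, dmap = DifferenceMapSketch()(torch.randn(1, 3, 256, 256), torch.randn(1, 256, 32, 32))
```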
- Small Object Detection by DETR via Information Augmentation and Adaptive Feature Fusion [4.9860018132769985]
The RT-DETR model performs well in real-time object detection but achieves poor accuracy on small objects.
We propose an adaptive feature fusion algorithm that assigns learnable parameters to each feature map from different levels.
This enhances the model's ability to capture object features at different scales, thereby improving the accuracy of detecting small objects; a generic sketch of such weighted fusion follows this entry.
arXiv Detail & Related papers (2024-01-16T00:01:23Z)
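The summary does not give the fusion rule itself, so the snippet below is a generic sketch of assigning learnable, normalized weights to feature maps from different pyramid levels (in the spirit of weighted feature fusion), rather than the paper's algorithm. The class name and the assumption that all levels share a channel count are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableWeightedFusion(nn.Module):
    """Toy sketch: each feature level gets a learnable scalar weight; the
    weights are normalized and the resized levels are summed."""

    def __init__(self, num_levels: int = 3):
        super().__init__()
        self.level_weights = nn.Parameter(torch.ones(num_levels))

    def forward(self, feats):
        # feats: list of (B, C, H_i, W_i) maps with a shared channel count C
        target_size = feats[0].shape[-2:]              # fuse at the finest resolution
        weights = torch.softmax(self.level_weights, dim=0)
        fused = torch.zeros_like(feats[0])
        for w, f in zip(weights, feats):
            if f.shape[-2:] != target_size:
                f = F.interpolate(f, size=target_size, mode="nearest")
            fused = fused + w * f
        return fused

# Usage with three pyramid levels of decreasing resolution:
fusion = LearnableWeightedFusion(num_levels=3)
out = fusion([torch.randn(1, 256, 64, 64), torch.randn(1, 256, 32, 32), torch.randn(1, 256, 16, 16)])
```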
- Dynamic Tiling: A Model-Agnostic, Adaptive, Scalable, and Inference-Data-Centric Approach for Efficient and Accurate Small Object Detection [3.8332251841430423]
Dynamic Tiling is a model-agnostic, adaptive, and scalable approach for small object detection.
Our method effectively resolves fragmented objects, improves detection accuracy, and minimizes computational overhead.
Overall, Dynamic Tiling outperforms existing model-agnostic uniform cropping methods.
arXiv Detail & Related papers (2023-09-20T05:25:12Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Towards Efficient Use of Multi-Scale Features in Transformer-Based Object Detectors [49.83396285177385]
Multi-scale features have been proven highly effective for object detection but often come with huge and even prohibitive extra computation costs.
We propose Iterative Multi-scale Feature Aggregation (IMFA) -- a generic paradigm that enables efficient use of multi-scale features in Transformer-based object detectors.
arXiv Detail & Related papers (2022-08-24T08:09:25Z)
- Dynamic Proposals for Efficient Object Detection [48.66093789652899]
We propose a simple yet effective method which is adaptive to different computational resources by generating dynamic proposals for object detection.
Our method achieves significant speed-up across a wide range of detection models including two-stage and query-based models.
arXiv Detail & Related papers (2022-07-12T01:32:50Z)
- Plug-and-Play Few-shot Object Detection with Meta Strategy and Explicit Localization Inference [78.41932738265345]
This paper proposes a plug-and-play detector that can accurately detect objects of novel categories without a fine-tuning process.
We introduce two explicit inferences into the localization process to reduce its dependence on annotated data.
It shows a significant lead in efficiency, precision, and recall under varied evaluation protocols.
arXiv Detail & Related papers (2021-10-26T03:09:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.