Enhancing Object Detection Performance for Small Objects through
Synthetic Data Generation and Proportional Class-Balancing Technique: A
Comparative Study in Industrial Scenarios
- URL: http://arxiv.org/abs/2401.12729v2
- Date: Mon, 29 Jan 2024 13:18:18 GMT
- Title: Enhancing Object Detection Performance for Small Objects through
Synthetic Data Generation and Proportional Class-Balancing Technique: A
Comparative Study in Industrial Scenarios
- Authors: Jibinraj Antony and Vinit Hegiste and Ali Nazeri and Hooman Tavakoli
and Snehal Walunj and Christiane Plociennik and Martin Ruskowski
- Abstract summary: Object Detection (OD) has proven to be a significant computer vision method in extracting localized class information.
Many of the state-of-the-art (SOTA) OD models perform well on medium and large-sized objects but underperform on small objects.
This study presents a novel approach that injects additional data points to improve the performance of the OD models.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Object Detection (OD) has proven to be a significant computer vision
method for extracting localized class information and has multiple applications
in industry. Although many state-of-the-art (SOTA) OD models perform well on
medium and large-sized objects, they tend to underperform on small objects. In
most industrial use cases, collecting and annotating data for small objects is
difficult, time-consuming, and prone to human error. Additionally, such datasets
are likely to be unbalanced and often result in inefficient model convergence.
To tackle this challenge, this study presents a novel approach that injects
additional data points to improve the performance of OD models. Using synthetic
data generation, the difficulties in collecting and annotating small-object data
points can be minimized, and a dataset with a balanced class distribution can be
created. This paper discusses the effects of a simple proportional
class-balancing technique that enables better anchor matching in OD models. A
comparison was carried out of the performance of the SOTA OD models YOLOv5,
YOLOv7, and SSD on combinations of real and synthetic datasets within an
industrial use case.
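
As a rough illustration of the proportional class-balancing idea described in the abstract, the sketch below computes how many synthetic samples to inject per class so that every class is topped up toward the size of the most frequent one. The function name, class labels, and counts are hypothetical placeholders, not code or values from the paper.

```python
from collections import Counter

def proportional_balance_plan(class_counts, target_ratio=1.0):
    """Return the number of synthetic samples to inject per class so that
    each class reaches roughly target_ratio * (count of the largest class).

    Illustrative sketch only; the paper's exact balancing procedure may differ.
    """
    max_count = max(class_counts.values())
    target = int(target_ratio * max_count)
    return {cls: max(0, target - n) for cls, n in class_counts.items()}

# Hypothetical industrial dataset where the small-object class ("screw")
# is heavily under-represented compared to larger parts.
real_counts = Counter({"housing": 900, "cable": 650, "screw": 120})
print(proportional_balance_plan(real_counts))
# -> {'housing': 0, 'cable': 250, 'screw': 780}
```

The injected samples would then come from the synthetic data generation pipeline (e.g., rendered or composited images of the small objects with automatically derived bounding-box annotations), so that the combined real-plus-synthetic dataset presents a balanced class distribution to the detector.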
Related papers
- Towards Robust Universal Information Extraction: Benchmark, Evaluation, and Solution [66.11004226578771]
Existing robust benchmark datasets have two key limitations.
They generate only a limited range of perturbations for a single Information Extraction (IE) task.
Considering the powerful generation capabilities of Large Language Models (LLMs), we introduce a new benchmark dataset for Robust UIE, called RUIE-Bench.
We show that training with only 15% of the data leads to an average 7.5% relative performance improvement across three IE tasks.
arXiv Detail & Related papers (2025-03-05T05:39:29Z) - FRIDA to the Rescue! Analyzing Synthetic Data Effectiveness in Object-Based Common Sense Reasoning for Disaster Response [19.744969357182665]
We introduce a dataset and pipeline to create Field Reasoning and Instruction Decoding Agent (FRIDA) models. In our pipeline, domain experts and linguists combine their knowledge to make high-quality few-shot prompts. We fine-tune several small instruction-tuned models and find that ablated FRIDA models trained only on objects' physical state and function data ...
arXiv Detail & Related papers (2025-02-25T18:51:06Z) - Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z) - Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models [89.88010750772413]
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
arXiv Detail & Related papers (2024-06-18T08:38:59Z) - Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large
Language Models by Extrapolating Errors from Small Models [69.76066070227452]
Data Synthesis is a promising way to train a small model with very little labeled data.
We propose Synthesis Step by Step (S3), a data synthesis framework that shrinks this distribution gap.
Our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data.
arXiv Detail & Related papers (2023-10-20T17:14:25Z) - Synthetic Data Generation with Large Language Models for Text
Classification: Potential and Limitations [21.583825474908334]
We study how the performance of models trained on synthetic data may vary with the subjectivity of classification.
Our results indicate that subjectivity, at both the task level and instance level, is negatively associated with the performance of the model trained on synthetic data.
arXiv Detail & Related papers (2023-10-11T19:51:13Z) - Building Manufacturing Deep Learning Models with Minimal and Imbalanced
Training Data Using Domain Adaptation and Data Augmentation [15.333573151694576]
We propose a novel domain adaptation (DA) approach to address the problem of labeled training data scarcity for a target learning task.
Our approach works for scenarios where the source dataset and the dataset available for the target learning task have the same or different feature spaces.
We evaluate our combined approach using image data for wafer defect prediction.
arXiv Detail & Related papers (2023-05-31T21:45:34Z) - Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z) - Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z) - Effective Few-Shot Named Entity Linking by Meta-Learning [34.70028855572534]
We propose a novel weak supervision strategy to generate non-trivial synthetic entity-mention pairs.
We also design a meta-learning mechanism to assign different weights to each synthetic entity-mention pair automatically.
Experiments on real-world datasets show that the proposed method can substantially improve the state-of-the-art few-shot entity linking model.
arXiv Detail & Related papers (2022-07-12T03:23:02Z) - Contrastive Model Inversion for Data-Free Knowledge Distillation [60.08025054715192]
We propose Contrastive Model Inversion, where the data diversity is explicitly modeled as an optimizable objective.
Our main observation is that, under the constraint of the same amount of data, higher data diversity usually indicates stronger instance discrimination.
Experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CMI achieves significantly superior performance when the generated data are used for knowledge distillation.
arXiv Detail & Related papers (2021-05-18T15:13:00Z)