Few-Shot Object Detection in Real Life: Case Study on Auto-Harvest
- URL: http://arxiv.org/abs/2011.02719v1
- Date: Thu, 5 Nov 2020 09:24:33 GMT
- Title: Few-Shot Object Detection in Real Life: Case Study on Auto-Harvest
- Authors: Kevin Riou, Jingwen Zhu, Suiyi Ling, Mathis Piquet, Vincent Truffault,
Patrick Le Callet
- Abstract summary: Confinement during COVID-19 has seriously affected agriculture all over the world.
Within an auto-harvest system, a robust few-shot object detection model is one of the bottlenecks.
We present a novel cucumber dataset and propose two data augmentation strategies that help bridge the context gap.
- Score: 26.119977579845614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Confinement during COVID-19 has seriously affected agriculture all
over the world. As an efficient solution, mechanical harvesting (auto-harvest)
based on object detection and robotic harvesters has become an urgent need.
Within an auto-harvest system, a robust few-shot object detection model is one
of the bottlenecks, since the system must handle new vegetable/fruit categories
and collecting large-scale annotated datasets for every novel category is
expensive. Many few-shot object detection models have been developed by the
community, yet whether they can be employed directly in real-life agricultural
applications remains questionable, as there is a context gap between the
commonly used training datasets and the images collected in real-life
agricultural scenarios. To this end, we present a novel cucumber dataset and
propose two data augmentation strategies that help bridge this context gap.
Experimental results show that 1) the state-of-the-art few-shot object
detection model performs poorly on the novel 'cucumber' category, and 2) the
proposed augmentation strategies outperform the commonly used ones.
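The abstract does not spell out the two augmentation strategies, so the following is only a minimal illustrative sketch of one common way to narrow a context gap: pasting annotated object crops onto in-context (e.g., greenhouse) background images so a few-shot detector sees the novel category in realistic surroundings. The file paths, scale range, and the paste-based strategy itself are assumptions for illustration, not the authors' method.

```python
import random
from PIL import Image

def paste_augment(crop_path, background_path, scale_range=(0.5, 1.5)):
    """Paste one annotated object crop onto an in-context background and
    return the augmented image together with the resulting bounding box."""
    crop = Image.open(crop_path).convert("RGBA")
    background = Image.open(background_path).convert("RGBA")

    # Randomly rescale the crop so the pasted object varies in size.
    scale = random.uniform(*scale_range)
    new_size = (max(int(crop.width * scale), 1), max(int(crop.height * scale), 1))
    crop = crop.resize(new_size)

    # Pick a paste location that keeps the object inside the frame.
    x = random.randint(0, max(background.width - crop.width, 0))
    y = random.randint(0, max(background.height - crop.height, 0))

    background.paste(crop, (x, y), mask=crop)  # use the crop's alpha as the paste mask
    box = (x, y, x + crop.width, y + crop.height)  # (x1, y1, x2, y2) label
    return background.convert("RGB"), box
```

A handful of such composites per novel category could then be mixed into the few-shot fine-tuning set alongside the original annotated images.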
Related papers
- Visual Context-Aware Person Fall Detection [52.49277799455569]
We present a segmentation pipeline to semi-automatically separate individuals and objects in images.
Background objects such as beds, chairs, or wheelchairs can challenge fall detection systems, leading to false positive alarms.
We demonstrate that object-specific contextual transformations during training effectively mitigate this challenge.
arXiv Detail & Related papers (2024-04-11T19:06:36Z)
- Transfer learning with generative models for object detection on limited datasets [1.4999444543328293]
In some fields, such as marine biology, it is necessary to have correctly labeled bounding boxes around each object.
We propose a transfer learning framework that is valid for a generic scenario.
Our results pave the way for new generative AI-based protocols for machine learning applications in various domains.
arXiv Detail & Related papers (2024-02-09T21:17:31Z)
- CrowdSim2: an Open Synthetic Benchmark for Object Detectors [0.7223361655030193]
This paper presents and publicly releases CrowdSim2, a new synthetic collection of images suitable for people and vehicle detection.
It consists of thousands of images gathered from various synthetic scenarios resembling the real world, where we varied some factors of interest.
We exploited this new benchmark as a testing ground for some state-of-the-art detectors, showing that our simulated scenarios can be a valuable tool for measuring their performance in a controlled environment.
arXiv Detail & Related papers (2023-04-11T09:35:57Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, in which there is abundant training data for common (base) objects but only a few examples for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
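The entry above mentions an incremental fine-tuning method for extending a trained detector to rare classes but gives no details. Below is a hedged, generic sketch of how such fine-tuning is often done: freeze the backbone and retrain only the prediction head on a small, class-balanced set that mixes base and novel examples. The `detector.head` attribute, `few_shot_loader`, and `detection_loss` are placeholders assumed for illustration, not the paper's implementation.

```python
import torch

def incremental_finetune(detector, few_shot_loader, detection_loss,
                         epochs=10, lr=1e-3, device="cuda"):
    """Freeze the backbone of a trained detector and fine-tune only its
    prediction head on a small, class-balanced base + novel training set."""
    detector.to(device)
    for p in detector.parameters():
        p.requires_grad = False          # freeze everything ...
    for p in detector.head.parameters():
        p.requires_grad = True           # ... except the prediction head

    optimizer = torch.optim.SGD(detector.head.parameters(), lr=lr, momentum=0.9)
    detector.train()
    for _ in range(epochs):
        for images, targets in few_shot_loader:
            predictions = detector(images.to(device))
            loss = detection_loss(predictions, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return detector
```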
- Neural-Sim: Learning to Generate Training Data with NeRF [31.81496344354997]
We present the first fully differentiable synthetic data pipeline that uses Neural Radiance Fields (NeRFs) in a closed loop with a target application's loss function.
Our approach generates data on-demand, with no human labor, to maximize accuracy for a target task.
arXiv Detail & Related papers (2022-07-22T22:48:33Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- Dataset and Performance Comparison of Deep Learning Architectures for Plum Detection and Robotic Harvesting [3.7692411550925673]
Two new datasets are gathered during day and night operation of an actual robotic plum harvesting system.
A range of current generation deep learning object detectors are benchmarked against these.
Two methods for fusing depth and image information are tested for their impact on detector performance.
arXiv Detail & Related papers (2021-05-09T04:18:58Z)
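The plum-detection entry above reports testing two ways of fusing depth with image information but does not describe them. The sketch below shows one standard option, early fusion, where depth is stacked as a fourth input channel and the detector's first convolution is widened to accept it; the layer and variable names are assumptions for illustration, not the paper's method.

```python
import torch
import torch.nn as nn

def widen_first_conv(conv: nn.Conv2d, extra_channels: int = 1) -> nn.Conv2d:
    """Return a copy of `conv` that also accepts `extra_channels` extra input
    channels (e.g. depth), keeping the pretrained RGB weights and
    zero-initialising the new ones."""
    new_conv = nn.Conv2d(conv.in_channels + extra_channels, conv.out_channels,
                         kernel_size=conv.kernel_size, stride=conv.stride,
                         padding=conv.padding, dilation=conv.dilation,
                         bias=conv.bias is not None)
    with torch.no_grad():
        new_conv.weight[:, :conv.in_channels] = conv.weight  # copy RGB filters
        new_conv.weight[:, conv.in_channels:] = 0.0          # zero-init depth filters
        if conv.bias is not None:
            new_conv.bias.copy_(conv.bias)
    return new_conv

# Early fusion at the input: stack depth as a fourth channel.
# rgb: (N, 3, H, W), depth: (N, 1, H, W)  ->  fused: (N, 4, H, W)
# fused = torch.cat([rgb, depth], dim=1)
```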
- Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z)
- Closing the Generalization Gap in One-Shot Object Detection [92.82028853413516]
We show that the key to strong few-shot detection models may not lie in sophisticated metric learning approaches, but instead in scaling the number of categories.
Future data annotation efforts should therefore focus on wider datasets and annotate a larger number of categories.
arXiv Detail & Related papers (2020-11-09T09:31:17Z)
- Any-Shot Object Detection [81.88153407655334]
'Any-shot detection' is the setting in which totally unseen and few-shot categories can co-occur simultaneously during inference.
We propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes.
Our framework can also be used solely for zero-shot or few-shot detection tasks.
arXiv Detail & Related papers (2020-03-16T03:43:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.