Shuffler: A Large Scale Data Management Tool for ML in Computer Vision
- URL: http://arxiv.org/abs/2104.05125v1
- Date: Sun, 11 Apr 2021 22:27:28 GMT
- Title: Shuffler: A Large Scale Data Management Tool for ML in Computer Vision
- Authors: Evgeny Toropov, Paola A. Buitrago, Jose M. F. Moura
- Abstract summary: We present Shuffler, an open source tool that makes it easy to manage large computer vision datasets.
Shuffler defines over 40 data handling operations with annotations that are commonly useful in supervised learning applied to computer vision.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Datasets in the computer vision academic research community are primarily
static. Once a dataset is accepted as a benchmark for a computer vision task,
researchers working on this task will not alter it in order to make their
results reproducible. At the same time, when exploring new tasks and new
applications, datasets tend to be an ever changing entity. A practitioner may
combine existing public datasets, filter images or objects in them, change
annotations or add new ones to fit a task at hand, visualize sample images, or
perhaps output statistics in the form of text or plots. In fact, datasets
change as practitioners experiment with data as much as with algorithms, trying
to make the most out of machine learning models. Given that ML and deep
learning call for large volumes of data to produce satisfactory results, it is
no surprise that the resulting data and software management associated with
dealing with live datasets can be quite complex. As far as we know, there is no
flexible, publicly available instrument to facilitate manipulating image data
and their annotations throughout a ML pipeline. In this work, we present
Shuffler, an open source tool that makes it easy to manage large computer
vision datasets. It stores annotations in a relational, human-readable
database. Shuffler defines over 40 data handling operations with annotations
that are commonly useful in supervised learning applied to computer vision and
supports some of the most well-known computer vision datasets. Finally, it is
easily extensible, making the addition of new operations and datasets a task
that is fast and easy to accomplish.
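To make the idea of a relational, human-readable annotation store concrete, the sketch below builds a toy SQLite database with images and objects tables and runs a filter and a statistics query in plain SQL. The schema, table names, and operations are illustrative assumptions based on the abstract, not Shuffler's actual schema or API.
```python
import sqlite3

# Toy, self-contained sketch of the kind of relational annotation store
# the abstract describes. The schema below is an assumption for
# illustration and is NOT Shuffler's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE images (imagefile TEXT PRIMARY KEY, width INT, height INT);
    CREATE TABLE objects (objectid INTEGER PRIMARY KEY,
                          imagefile TEXT REFERENCES images(imagefile),
                          name TEXT, x1 INT, y1 INT, width INT, height INT);
""")
conn.executemany("INSERT INTO images VALUES (?, ?, ?)",
                 [("img/000.jpg", 1280, 720), ("img/001.jpg", 1280, 720)])
conn.executemany(
    "INSERT INTO objects (imagefile, name, x1, y1, width, height) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    [("img/000.jpg", "car", 10, 20, 100, 80),
     ("img/000.jpg", "pedestrian", 200, 50, 40, 120),
     ("img/001.jpg", "car", 400, 100, 90, 70)])

# A "filter objects" style operation: keep only cars taller than 60 px.
conn.execute("DELETE FROM objects WHERE NOT (name = 'car' AND height > 60)")

# A "print statistics" style operation: object counts per class.
for name, count in conn.execute(
        "SELECT name, COUNT(*) FROM objects GROUP BY name"):
    print(f"{name}: {count}")
```
Because the store is plain SQL, filtering, merging, and summarizing annotations reduce to ordinary queries, which is what makes a relational backend attractive for chaining many small data handling operations.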
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- AvaTaR: Optimizing LLM Agents for Tool Usage via Contrastive Reasoning [93.96463520716759]
Large language model (LLM) agents have demonstrated impressive capabilities in utilizing external tools and knowledge to boost accuracy and reduce hallucinations.
Here, we introduce AvaTaR, a novel and automated framework that optimizes an LLM agent to effectively leverage provided tools, improving performance on a given task.
arXiv Detail & Related papers (2024-06-17T04:20:02Z)
- Using Large Language Models to Generate Engaging Captions for Data Visualizations [51.98253121636079]
Large language models (LLMs) use sophisticated deep learning technology to produce human-like prose.
A key challenge lies in designing the most effective prompt for the LLM, a task called prompt engineering.
We report on first experiments using the popular LLM GPT-3 and deliver some promising results.
arXiv Detail & Related papers (2022-12-27T23:56:57Z)
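As a rough illustration of what prompt engineering means in the captioning setting above, such a pipeline might assemble chart metadata into a textual prompt for the LLM. The template and field names below are hypothetical, assumed for illustration only; the paper's actual prompts are not reproduced here.
```python
# Hypothetical sketch of a prompt template for chart captioning.
def build_caption_prompt(chart_type: str, x_label: str, y_label: str,
                         data_summary: str) -> str:
    """Assemble a natural-language prompt asking an LLM to caption a chart."""
    return (
        f"Write an engaging one-sentence caption for a {chart_type} "
        f"showing {y_label} versus {x_label}. "
        f"Key facts: {data_summary}"
    )

prompt = build_caption_prompt(
    chart_type="line chart",
    x_label="year",
    y_label="global temperature anomaly",
    data_summary="anomaly rises from 0.2 C in 1980 to 1.1 C in 2023")
print(prompt)  # This string would then be sent to an LLM such as GPT-3.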
- Masked autoencoders are effective solution to transformer data-hungry [0.0]
Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) in several vision tasks thanks to their global modeling capabilities.
However, ViTs lack the inductive bias inherent in convolutions and therefore require large amounts of training data.
Masked autoencoders (MAE) can make the transformer focus more on the image itself.
arXiv Detail & Related papers (2022-12-12T03:15:19Z)
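The entry above turns on random patch masking, the core mechanism of masked autoencoding. The toy sketch below shows that step only; the shapes and the 75% mask ratio are common ViT/MAE defaults assumed for illustration, not the configuration used in the paper.
```python
import numpy as np

# Toy sketch of the random patch masking at the core of masked
# autoencoding (MAE). Shapes and mask ratio are assumed defaults.
rng = np.random.default_rng(0)
num_patches, patch_dim, mask_ratio = 196, 768, 0.75  # e.g. 14x14 ViT patches

patches = rng.normal(size=(num_patches, patch_dim))   # embedded image patches
num_keep = int(num_patches * (1 - mask_ratio))
keep_idx = rng.permutation(num_patches)[:num_keep]    # indices kept visible

visible = patches[keep_idx]  # the encoder sees only ~25% of the patches;
print(visible.shape)         # (49, 768); the decoder reconstructs the rest
```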
- NoisyActions2M: A Multimedia Dataset for Video Understanding from Noisy Labels [33.659146748289444]
We create a benchmark dataset consisting of around 2 million videos with associated user-generated annotations and other meta information.
We show how a network pretrained on the proposed dataset can help against video corruption and label noise in downstream datasets.
arXiv Detail & Related papers (2021-10-13T16:12:18Z)
- Do Datasets Have Politics? Disciplinary Values in Computer Vision Dataset Development [6.182409582844314]
We collect a corpus of about 500 computer vision datasets, from which we sampled 114 dataset publications across different vision tasks.
We discuss how computer vision dataset authors value efficiency at the expense of care; universality at the expense of contextuality; and model work at the expense of data work.
We conclude with suggestions on how to better incorporate silenced values into the dataset creation and curation process.
arXiv Detail & Related papers (2021-08-09T19:07:58Z)
- On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models [0.0]
High-quality labeled datasets play a crucial role in fueling the development of machine learning (ML).
Since the emergence of the ImageNet dataset and the AlexNet model in 2012, the size of new open-source labeled vision datasets has remained roughly constant.
Only a minority of publications in the computer vision community tackle supervised learning on datasets that are orders of magnitude larger than ImageNet.
arXiv Detail & Related papers (2021-07-31T00:08:21Z)
- REGRAD: A Large-Scale Relational Grasp Dataset for Safe and Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset named REGRAD to sustain the modeling of relationships among objects and grasps.
Our dataset is collected in both forms of 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
Since training on the enlarged dataset is costly, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.