Inferring Offensiveness In Images From Natural Language Supervision
- URL: http://arxiv.org/abs/2110.04222v1
- Date: Fri, 8 Oct 2021 16:19:21 GMT
- Title: Inferring Offensiveness In Images From Natural Language Supervision
- Authors: Patrick Schramowski, Kristian Kersting
- Abstract summary: Large image datasets automatically scraped from the web may contain derogatory terms as categories and offensive images.
We show that pre-trained transformers themselves provide a methodology for the automated curation of large-scale vision datasets.
- Score: 20.294073012815854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probing or fine-tuning (large-scale) pre-trained models results in
state-of-the-art performance for many NLP tasks and, more recently, even for
computer vision tasks when combined with image data. Unfortunately, these
approaches also entail severe risks. In particular, large image datasets
automatically scraped from the web may contain derogatory terms as categories
and offensive images, and may also underrepresent specific classes.
Consequently, there is an urgent need to carefully document datasets and curate
their content. Unfortunately, this process is tedious and error-prone. We show
that pre-trained transformers themselves provide a methodology for the
automated curation of large-scale vision datasets. Based on human-annotated
examples and the implicit knowledge of a CLIP based model, we demonstrate that
one can select relevant prompts for rating the offensiveness of an image. In
addition to, e.g., the privacy violations and pornographic content previously
identified in ImageNet, we demonstrate that our approach identifies further
inappropriate and potentially offensive content.
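The abstract describes rating an image's offensiveness by comparing its CLIP embedding against embeddings of selected natural-language prompts. The paper does not ship code here, so the following is a minimal NumPy sketch of only that zero-shot rating step, under stated assumptions: `rate_offensiveness` is a hypothetical name, the prompt wordings are illustrative, and the mock embeddings stand in for outputs of a real CLIP image/text encoder.

```python
import numpy as np

def rate_offensiveness(image_emb, prompt_embs, temperature=100.0):
    """Score an image against a set of natural-language prompts,
    CLIP zero-shot style: cosine similarity followed by a softmax.

    image_emb:   (d,) image embedding (from a CLIP image encoder)
    prompt_embs: (k, d) text embeddings, one per prompt
                 (e.g. "an offensive image", "a neutral image")
    Returns one probability per prompt.
    """
    # L2-normalize so dot products become cosine similarities
    img = image_emb / np.linalg.norm(image_emb)
    txt = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    logits = temperature * (txt @ img)
    # Numerically stable softmax over the prompt set
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy demonstration with mock embeddings (no CLIP model is loaded):
rng = np.random.default_rng(0)
offensive_prompt = rng.normal(size=8)
neutral_prompt = rng.normal(size=8)
image = offensive_prompt + 0.1 * rng.normal(size=8)  # close to prompt 0
probs = rate_offensiveness(image, np.stack([offensive_prompt, neutral_prompt]))
```

In practice the embeddings would come from a CLIP encoder, and the paper additionally selects which prompts to use based on human-annotated examples; this sketch shows only the similarity-plus-softmax scoring.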
Related papers
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension [99.9389737339175]
We introduce Self-Training on Image Comprehension (STIC), which emphasizes a self-training approach specifically for image comprehension.
First, the model self-constructs a preference dataset for image descriptions using unlabeled images.
To further self-improve reasoning on the extracted visual information, we let the model reuse a small portion of existing instruction-tuning data.
arXiv Detail & Related papers (2024-05-30T05:53:49Z)
- Evaluating Data Attribution for Text-to-Image Models [62.844382063780365]
We evaluate attribution through "customization" methods, which tune an existing large-scale model toward a given exemplar object or style.
Our key insight is that this allows us to efficiently create synthetic images that are computationally influenced by the exemplar by construction.
By taking into account the inherent uncertainty of the problem, we can assign soft attribution scores over a set of training images.
arXiv Detail & Related papers (2023-06-15T17:59:51Z)
- ClipCrop: Conditioned Cropping Driven by Vision-Language Model [90.95403416150724]
We take advantage of vision-language models as a foundation for creating robust and user-intentional cropping algorithms.
We develop a method to perform cropping with a text or image query that reflects the user's intention as guidance.
Our pipeline design allows the model to learn text-conditioned aesthetic cropping with a small dataset.
arXiv Detail & Related papers (2022-11-21T14:27:07Z)
- Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
arXiv Detail & Related papers (2022-07-25T17:58:16Z)
- Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision [38.22842778742829]
Discriminative self-supervised learning allows training models on any random group of internet images.
We train models on billions of random images without any data pre-processing or prior assumptions about what we want the model to learn.
We extensively study and validate our model's performance on over 50 benchmarks, including fairness, robustness to distribution shift, geographical diversity, fine-grained recognition, image copy detection, and many image classification datasets.
arXiv Detail & Related papers (2022-02-16T22:26:47Z)
- Improving Fractal Pre-training [0.76146285961466]
We propose an improved pre-training dataset based on dynamically-generated fractal images.
Our experiments demonstrate that fine-tuning a network pre-trained using fractals attains 92.7-98.1% of the accuracy of an ImageNet pre-trained network.
arXiv Detail & Related papers (2021-10-06T22:39:51Z)
- Data-driven Meta-set Based Fine-Grained Visual Classification [61.083706396575295]
We propose a data-driven meta-set based approach to deal with noisy web images for fine-grained recognition.
Specifically, guided by a small amount of clean meta-set, we train a selection net in a meta-learning manner to distinguish in- and out-of-distribution noisy images.
arXiv Detail & Related papers (2020-08-06T03:04:16Z)
- Learning Representations by Predicting Bags of Visual Words [55.332200948110895]
Self-supervised representation learning aims to learn convnet-based image representations from unlabeled data.
Inspired by the success of NLP methods in this area, in this work we propose a self-supervised approach based on spatially dense image descriptions.
arXiv Detail & Related papers (2020-02-27T16:45:25Z)
- Privacy-Preserving Image Classification in the Local Setting [17.375582978294105]
Local Differential Privacy (LDP) offers a promising solution: it allows data owners to randomly perturb their input, providing plausible deniability for the data before release.
In this paper, we consider a two-party image classification problem, in which data owners hold the image and the untrustworthy data user would like to fit a machine learning model with these images as input.
We propose a supervised image feature extractor, DCAConv, which produces an image representation with scalable domain size.
arXiv Detail & Related papers (2020-02-09T01:25:52Z)
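The LDP entry above rests on random perturbation with plausible deniability. The paper's own mechanism (DCAConv plus its perturbation scheme) is not reproduced here; the following is a sketch of the generic randomized-response primitive that underlies local DP for a single bit, with hypothetical function names, just to make the "perturb, then debias" pattern concrete.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1),
    otherwise flip it. Each individual report is deniable."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true mean from perturbed reports.
    E[observed] = (2p - 1) * mean + (1 - p), so invert that affine map."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Toy demonstration: 1000 owners, true mean 0.7, privacy budget eps = 1.
random.seed(0)
true_bits = [1] * 700 + [0] * 300
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
estimate = debias_mean(reports, epsilon=1.0)
```

The aggregator recovers the population statistic approximately while no single report reveals its owner's true value, which is the property the two-party setting in the entry relies on.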
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.