Text-Guided Variational Image Generation for Industrial Anomaly Detection and Segmentation
- URL: http://arxiv.org/abs/2403.06247v2
- Date: Tue, 26 Mar 2024 14:42:21 GMT
- Title: Text-Guided Variational Image Generation for Industrial Anomaly Detection and Segmentation
- Authors: Mingyu Lee, Jongwon Choi
- Abstract summary: We propose a text-guided variational image generation method to address the challenge of obtaining clean data for anomaly detection in industrial manufacturing.
Our method utilizes text information about the target object, learned from extensive text library documents, to generate non-defective data images resembling the input image.
- Score: 6.861600385661363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a text-guided variational image generation method to address the challenge of obtaining clean data for anomaly detection in industrial manufacturing. Our method utilizes text information about the target object, learned from extensive text library documents, to generate non-defective data images resembling the input image. The proposed framework ensures that the generated non-defective images align with anticipated distributions derived from textual and image-based knowledge, which provides stability and generality. Experimental results demonstrate the effectiveness of our approach, which surpasses previous methods even with limited non-defective data. We validate the approach through generalization tests across four baseline models and three distinct datasets, and present an additional analysis of how the generated images can enhance the effectiveness of anomaly detection models.
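The abstract describes text-conditioned generation of non-defective images but includes no code here. As a rough, hypothetical illustration of the general idea only, the sketch below conditions a small VAE decoder on a fixed text embedding of the target object, so samples are pulled toward the distribution the text describes. All dimensions and the CLIP-style text embedding are assumptions of this sketch, not details from the paper.

```python
# A minimal sketch (not the authors' code): a text-conditioned VAE whose
# decoder also sees a text embedding of the target object, nudging samples
# toward the "non-defective" appearance the text describes.
import torch
import torch.nn as nn

class TextGuidedVAE(nn.Module):
    def __init__(self, img_dim=784, txt_dim=512, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)
        self.logvar = nn.Linear(256, z_dim)
        # The decoder consumes both the latent and the text embedding.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + txt_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Sigmoid())

    def forward(self, x, txt_emb):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        x_hat = self.dec(torch.cat([z, txt_emb], dim=1))
        return x_hat, mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

# Toy usage: random tensors stand in for images and a text embedding.
model = TextGuidedVAE()
x = torch.rand(8, 784)            # flattened input images
txt = torch.randn(8, 512)         # e.g. a CLIP-style text embedding (assumed)
x_hat, mu, logvar = model(x, txt)
print(vae_loss(x, x_hat, mu, logvar).item())
```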
Related papers
- Diffusion-Enhanced Test-time Adaptation with Text and Image Augmentation [67.37146712877794]
IT3A is a novel test-time adaptation method that utilizes a pre-trained generative model for multi-modal augmentation of each test sample from unknown new domains.
By combining augmented data from pre-trained vision and language models, we enhance the ability of the model to adapt to unknown new test data.
In a zero-shot setting, IT3A outperforms state-of-the-art test-time prompt tuning methods with a 5.50% increase in accuracy.
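A generic test-time adaptation step in the spirit of this description, not the IT3A implementation: predictions over several augmented views of one test sample are averaged, and the entropy of the averaged prediction is minimized to adapt the model.

```python
# A sketch of entropy-minimizing test-time adaptation over augmented views
# (a common TTA recipe; IT3A's actual objective may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

def tta_step(model, views, optimizer):
    """Adapt on one test sample given several augmented views of it."""
    logits = model(views)                      # (n_views, n_classes)
    probs = F.softmax(logits, dim=1).mean(0)   # marginal over the views
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum()
    optimizer.zero_grad()
    entropy.backward()                         # sharpen the averaged prediction
    optimizer.step()
    return int(probs.argmax())

# Toy usage: a linear classifier and random "augmented views" of one sample.
model = nn.Linear(32, 5)
views = torch.randn(8, 32)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
print(tta_step(model, views, opt))
```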
arXiv Detail & Related papers (2024-12-12T20:01:24Z)
- Automatic Prompt Generation and Grounding Object Detection for Zero-Shot Image Anomaly Detection [17.06832015516288]
We propose a zero-shot training-free approach for automated industrial image anomaly detection using a multimodal machine learning pipeline.
Our proposed model enables efficient, scalable, and objective quality control in industrial manufacturing settings.
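A hedged outline of such a prompt-then-ground pipeline. The helpers below are stubs standing in for a captioning model and an open-vocabulary grounding detector, and the low-score heuristic is an assumption of this illustration, not the paper's rule.

```python
# Hypothetical prompt-then-ground pipeline; all helpers are stubs.
from dataclasses import dataclass

@dataclass
class Box:
    label: str
    score: float

def generate_object_prompts(image):
    # Stub for a captioning/VLM step that proposes object names to look for.
    return ["screw head", "metal surface", "scratch"]

def grounding_detector(image, prompts):
    # Stub for an open-vocabulary detector returning a grounding score per prompt.
    return [Box(p, s) for p, s in zip(prompts, (0.92, 0.88, 0.31))]

def detect_anomalies(image, tau=0.5):
    prompts = generate_object_prompts(image)
    boxes = grounding_detector(image, prompts)
    # Heuristic: regions the detector grounds only weakly are flagged.
    return [b for b in boxes if b.score < tau]

print(detect_anomalies(image=None))
```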
arXiv Detail & Related papers (2024-11-28T15:42:32Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious societal risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Research on Splicing Image Detection Algorithms Based on Natural Image Statistical Characteristics [12.315852697312195]
This paper introduces a new splicing image detection algorithm based on the statistical characteristics of natural images.
By analyzing the limitations of traditional methods, we have developed a detection framework that integrates advanced statistical analysis techniques and machine learning methods.
The algorithm has been validated using multiple public datasets, showing high accuracy in detecting spliced edges and locating tampered areas.
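A toy version of the statistics-plus-classifier recipe summarized above; the features and classifier below are illustrative choices, not the paper's.

```python
# Sketch: simple natural-image statistics per patch, fed to a classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_stats(patch):
    # Mean, variance, and high-frequency energy of a crude smoothing residual.
    smooth = (patch + np.roll(patch, 1, 0) + np.roll(patch, 1, 1)) / 3.0
    resid = patch - smooth
    return [patch.mean(), patch.var(), (resid ** 2).mean()]

rng = np.random.default_rng(0)
authentic = rng.normal(0.5, 0.10, (100, 16, 16))
spliced = rng.normal(0.5, 0.25, (100, 16, 16))   # toy: different statistics
X = np.array([patch_stats(p) for p in np.concatenate([authentic, spliced])])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)
print("train accuracy:", clf.score(X, y))
```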
arXiv Detail & Related papers (2024-04-25T02:28:16Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images, or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the method aims to have generated images, regardless of their generative model, projected outside that subspace.
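One simple way to realize this idea, which is an assumption of this sketch rather than necessarily the paper's construction, is to fit a Gaussian to features of real images and flag anything at a large Mahalanobis distance from it.

```python
# Sketch: model real-image features only; outliers are called "generated".
import numpy as np

rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, (500, 64))     # stand-in for real features
mu = real_feats.mean(0)
cov_inv = np.linalg.inv(np.cov(real_feats.T) + 1e-6 * np.eye(64))

def mahalanobis(f):
    d = f - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold chosen from real images alone; no generated images needed.
threshold = np.quantile([mahalanobis(f) for f in real_feats], 0.99)
query = rng.normal(2.0, 1.0, 64)                 # toy out-of-subspace feature
print("generated" if mahalanobis(query) > threshold else "real")
```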
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations [61.132408427908175]
Zero-shot GAN adaptation aims to reuse well-trained generators to synthesize images of an unseen target domain.
With only a single representative text feature instead of real images, the synthesized images gradually lose diversity.
We propose a novel method to find semantic variations of the target text in the CLIP space.
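As a toy illustration, the sketch below derives several directions from a target prompt to paraphrased variants in a CLIP-like text embedding space, so that adaptation has diverse targets instead of one text feature. The encoder is a deterministic stand-in, and the paper's actual variation-finding procedure may differ.

```python
# Illustrative semantic variations of a target prompt in an embedding space.
import numpy as np

def text_embed(prompt: str) -> np.ndarray:   # stand-in for a CLIP text encoder
    seed = sum(map(ord, prompt))             # deterministic toy embedding
    v = np.random.default_rng(seed).normal(size=512)
    return v / np.linalg.norm(v)

target = text_embed("a sketch portrait")
variants = [text_embed(p) for p in (
    "a pencil sketch of a face",
    "a charcoal portrait drawing",
    "a line-art sketch of a person")]
# Directions from the target to its variants give diverse adaptation targets,
# which helps the synthesized images keep their diversity.
directions = [v - target for v in variants]
print([float(np.linalg.norm(d)) for d in directions])
```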
arXiv Detail & Related papers (2023-08-21T08:12:28Z)
- Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, unsupervised learning on diffusion-generated images remains underexplored.
We introduce customized solutions that fully exploit the attention masks obtained for free from the diffusion generation process.
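A small sketch of turning per-token attention maps into binary masks; the maps are simulated here, and how the real method extracts and consumes them is an assumption of this illustration.

```python
# Sketch: threshold (simulated) cross-attention maps into "free" masks.
import numpy as np

rng = np.random.default_rng(0)
attn = rng.random((2, 32, 32))                 # toy per-token attention maps
attn /= attn.max(axis=(1, 2), keepdims=True)   # normalize each map to [0, 1]
masks = attn > 0.7                             # binarize into object masks

# Such masks can pair pixels of the same object for contrastive learning or
# act as pseudo-labels for mask prediction on the generated images.
print("foreground fraction per mask:", masks.mean(axis=(1, 2)))
```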
arXiv Detail & Related papers (2023-08-13T10:07:46Z)
- Training on Thin Air: Improve Image Classification with Generated Data [28.96941414724037]
Diffusion Inversion is a simple yet effective method to generate diverse, high-quality training data for image classification.
Our approach captures the original data distribution and ensures data coverage by inverting images to the latent space of Stable Diffusion.
We identify three key components that allow our generated images to successfully supplant the original dataset.
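Schematically, the recipe is: encode an image to a latent, perturb the latent, and decode variants as extra training data. The encoder and decoder below are toy stand-ins for Stable Diffusion's VAE and sampler, used only to show the shape of the idea.

```python
# Sketch of latent inversion-and-perturbation for data generation.
import torch

torch.manual_seed(0)
W_enc = torch.randn(12, 4) / 12 ** 0.5         # stub for the VAE encoder
W_dec = torch.randn(4, 12) / 4 ** 0.5          # stub for sampling/decoding

def encode(x):
    return x.flatten(1) @ W_enc                # image -> latent

def decode(z):
    return (z @ W_dec).reshape(-1, 3, 2, 2)    # latent -> image

image = torch.rand(1, 3, 2, 2)
z = encode(image)
# Perturbed latents decode to diverse variants near the original image.
variants = [decode(z + 0.1 * torch.randn_like(z)) for _ in range(5)]
print(len(variants), variants[0].shape)
```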
arXiv Detail & Related papers (2023-05-24T16:33:02Z)
- Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models [55.04969603431266]
This paper proposes a method for generating images of customized objects specified by users.
The method is based on a general framework that bypasses the lengthy optimization required by previous approaches.
We demonstrate through experiments that our proposed method is able to synthesize images with compelling output quality, appearance diversity, and object fidelity.
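A rough sketch of the encoder idea: an image encoder maps the user's object to a single pseudo-token embedding that is spliced into the prompt's token embeddings, avoiding per-object fine-tuning. Dimensions and the splice position are illustrative assumptions.

```python
# Sketch: object image -> one pseudo-token inside the prompt conditioning.
import torch
import torch.nn as nn

obj_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 768))
object_image = torch.rand(1, 3, 64, 64)
pseudo_token = obj_encoder(object_image)            # (1, 768)

prompt_tokens = torch.randn(1, 6, 768)              # toy text token embeddings
conditioned = torch.cat(
    [prompt_tokens[:, :3], pseudo_token[:, None], prompt_tokens[:, 3:]], dim=1)
print(conditioned.shape)                            # (1, 7, 768)
```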
arXiv Detail & Related papers (2023-04-05T17:59:32Z)
- Benchmarking performance of object detection under image distortions in an uncontrolled environment [0.483420384410068]
Robustness of object detection algorithms plays a prominent role in real-world applications.
It has been shown that the performance of object detection methods suffers from distortions introduced during image capture.
We present a performance evaluation framework for the state-of-the-art object detection methods.
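A bare-bones version of such an evaluation loop: apply a distortion at increasing severity and record detector performance. The detector and score below are stubs; a real framework would run an actual detector and measure mAP on a detection benchmark.

```python
# Sketch: sweep distortion severity and record a detector's score.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, severity):
    return np.clip(img + rng.normal(0, 0.1 * severity, img.shape), 0, 1)

def detector_score(img):
    # Stub metric that degrades as noise inflates the image's variance.
    return max(0.0, 0.9 - 2.0 * float(img.std() - 0.28))

image = rng.random((64, 64))
for severity in range(6):
    distorted = add_noise(image, severity)
    print(f"severity {severity}: score {detector_score(distorted):.2f}")
```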
arXiv Detail & Related papers (2022-10-28T09:06:52Z)
- Transformation Consistency Regularization - A Semi-Supervised Paradigm for Image-to-Image Translation [18.870983535180457]
We propose Transformation Consistency Regularization, which extends consistency-based semi-supervised learning to the more challenging setting of image-to-image translation.
We evaluate the efficacy of our algorithm on three different applications: image colorization, denoising and super-resolution.
Our method is significantly data efficient, requiring only around 10-20% of labeled samples to achieve image reconstructions comparable to its fully supervised counterpart.
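The core loss is easy to state: for an unlabeled input, the model's prediction on a transformed image should match the transformed prediction. A minimal sketch with horizontal flips follows; the paper's actual transformation set may differ.

```python
# Sketch: transformation-consistency loss for an image-to-image model.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)          # toy image-to-image model
x_unlabeled = torch.rand(4, 3, 32, 32)

t = lambda img: torch.flip(img, dims=[3])      # horizontal flip
consistency = nn.functional.mse_loss(model(t(x_unlabeled)),
                                     t(model(x_unlabeled)))
# Total objective = supervised loss on labeled pairs + lambda * consistency.
print(consistency.item())
```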
arXiv Detail & Related papers (2020-07-15T17:41:35Z)