A Novel Inspection System For Variable Data Printing Using Deep Learning
- URL: http://arxiv.org/abs/2001.04325v1
- Date: Mon, 13 Jan 2020 15:07:13 GMT
- Title: A Novel Inspection System For Variable Data Printing Using Deep Learning
- Authors: Oren Haik, Oded Perry, Eli Chen, Peter Klammer
- Abstract summary: We present a novel approach for inspecting variable data prints (VDP) with an ultra-low false alarm rate (0.005%).
The system is based on a comparison between two images: a reference image and an image captured by a low-cost scanner.
- Score: 0.9176056742068814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel approach for inspecting variable data prints (VDP) with an
ultra-low false alarm rate (0.005%) and potential applicability to other real-world problems.
The system is based on a comparison between two images: a reference image and an image
captured by a low-cost scanner. The comparison task is challenging because low-cost imaging
systems create artifacts that may erroneously be classified as true (genuine) defects. To
address this challenge, we introduce two new fusion methods for change detection
applications, both fast and efficient. The first is an early fusion method that combines the
two input images into a single pseudo-color image. The second, called Change-Detection
Single Shot Detector (CD-SSD), leverages the SSD architecture by fusing features in the
middle of the network. We demonstrate the effectiveness of the proposed deep learning-based
approach on a large dataset from real-world printing scenarios. Finally, we evaluate our
models on a different domain, aerial imagery change detection (AICD). Our best method
clearly outperforms the state-of-the-art baseline on this dataset.
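For a concrete picture of the early-fusion idea, the sketch below shows one plausible way to build the single pseudo-color input from the reference and scanned images using NumPy. The channel assignment (reference, scan, absolute difference) and the function name pseudo_color_fusion are illustrative assumptions; the abstract does not specify the exact mapping used in the paper.

import numpy as np

def pseudo_color_fusion(reference: np.ndarray, scan: np.ndarray) -> np.ndarray:
    """Fuse two aligned grayscale images (H, W), values in 0..255, into one
    3-channel pseudo-color image that a standard single-stream detector such
    as SSD can consume like an ordinary RGB input (assumed channel layout)."""
    ref = reference.astype(np.float32)
    scn = scan.astype(np.float32)
    diff = np.abs(ref - scn)                     # highlights candidate defect regions
    fused = np.stack([ref, scn, diff], axis=-1)  # shape (H, W, 3)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Tiny usage example with a synthetic "defect"
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    scan = ref.copy()
    scan[100:110, 100:110] = 0                   # simulated missing-ink spot
    out = pseudo_color_fusion(ref, scan)
    print(out.shape, out.dtype)                  # (256, 256, 3) uint8

The mid-level fusion used by CD-SSD (merging features from the two inputs inside the detector) requires a full SSD implementation and is not sketched here.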
Related papers
- Robust CLIP-Based Detector for Exposing Diffusion Model-Generated Images [13.089550724738436]
Diffusion models (DMs) have revolutionized image generation, producing high-quality images with applications spanning various fields.
Their ability to create hyper-realistic images poses significant challenges in distinguishing between real and synthetic content.
This work introduces a robust detection framework that integrates image and text features extracted by the CLIP model with a Multilayer Perceptron (MLP) classifier.
arXiv Detail & Related papers (2024-04-19T14:30:41Z)
- UCDFormer: Unsupervised Change Detection Using a Transformer-driven Image Translation [20.131754484570454]
Change detection (CD) by comparing two bi-temporal images is a crucial task in remote sensing.
We propose a change detection setting with domain shift for remote sensing images.
We present a novel unsupervised CD method using a light-weight transformer, called UCDFormer.
arXiv Detail & Related papers (2023-08-02T13:39:08Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Semi-Siamese Network for Robust Change Detection Across Different Domains with Applications to 3D Printing [17.176767333354636]
We present a novel Semi-Siamese deep learning model for defect detection in 3D printing processes.
Our model is designed to enable comparison of heterogeneous images from different domains while being robust against perturbations in the imaging setup.
Using our model, defect localization predictions can be made in less than half a second per layer on a standard MacBook Pro, while achieving an F1-score of more than 0.9.
arXiv Detail & Related papers (2022-12-16T17:02:55Z)
- Revisiting Consistency Regularization for Semi-supervised Change Detection in Remote Sensing Images [60.89777029184023]
We propose a semi-supervised CD model in which we formulate an unsupervised CD loss in addition to the supervised Cross-Entropy (CE) loss.
Experiments conducted on two publicly available CD datasets show that the proposed semi-supervised CD method can come close to the performance of supervised CD.
arXiv Detail & Related papers (2022-04-18T17:59:01Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse them in a common space, either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it into a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- AnoDFDNet: A Deep Feature Difference Network for Anomaly Detection [6.508649912734565]
We propose a novel anomaly detection (AD) approach for high-speed train images based on convolutional neural networks and the Vision Transformer.
The proposed method detects abnormal differences between two images of the same region taken at different times.
arXiv Detail & Related papers (2022-03-29T02:24:58Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- D-Unet: A Dual-encoder U-Net for Image Splicing Forgery Detection and Localization [108.8592577019391]
Image splicing forgery detection is a global binary classification task that distinguishes tampered from non-tampered regions using image fingerprints.
We propose a novel network called dual-encoder U-Net (D-Unet) for image splicing forgery detection, which employs an unfixed encoder and a fixed encoder.
In an experimental comparison study of D-Unet and state-of-the-art methods, D-Unet outperformed the other methods in image-level and pixel-level detection.
arXiv Detail & Related papers (2020-12-03T10:54:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.