Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
- URL: http://arxiv.org/abs/2502.17089v1
- Date: Mon, 24 Feb 2025 12:11:33 GMT
- Title: Imprinto: Enhancing Infrared Inkjet Watermarking for Human and Machine Perception
- Authors: Martin Feick, Xuxin Tang, Raul Garcia-Martin, Alexandru Luchianov, Roderick Wei Xiao Huang, Chang Xiao, Alexa Siu, Mustafa Doga Dogan,
- Abstract summary: Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. We present Imprinto, an infrared inkjet watermarking technique that enables invisible content embedding using only off-the-shelf IR inks and a camera.
- Score: 45.46101893448141
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Hybrid paper interfaces leverage augmented reality to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, virtual content can be embedded through direct links (e.g., QR codes); however, this impacts the aesthetics of the paper print and limits the available visual content space. To address this problem, we present Imprinto, an infrared inkjet watermarking technique that enables invisible content embedding using only off-the-shelf IR inks and a camera. Imprinto was informed by a psychophysical experiment studying how much IR ink can be used while remaining invisible to users, regardless of background color. We demonstrate that we can detect invisible IR content through our machine learning pipeline, and we developed an authoring tool that optimizes the amount of IR ink on the color regions of an input document for machine and human detectability. Finally, we demonstrate several applications, including augmenting paper documents and objects.
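The detection step described in the abstract (recovering invisible IR content from an IR camera capture) can be illustrated with a minimal sketch. This is not the paper's machine learning pipeline; it is a hypothetical threshold-based detector, assuming the IR ink absorbs infrared light and therefore appears darker in an IR frame than the surrounding paper:

```python
import numpy as np

def detect_ir_regions(ir_frame, background, threshold=0.1):
    """Flag pixels where the IR capture departs from the expected
    background reflectance, indicating possible IR-ink content."""
    diff = np.abs(ir_frame.astype(float) - background.astype(float))
    return diff > threshold * 255.0

# Synthetic example: a uniform page with a hidden watermark patch
# that absorbs IR light (darker in the IR frame).
page = np.full((64, 64), 200, dtype=np.uint8)
ir = page.copy()
ir[20:30, 20:30] = 140  # hypothetical IR-ink region

mask = detect_ir_regions(ir, page)
print(mask.sum())  # number of flagged pixels -> 100
```

A real pipeline would additionally have to handle uneven illumination and printed color content, which is presumably why the paper uses a learned detector rather than a fixed threshold.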
Related papers
- Multi-Domain Biometric Recognition using Body Embeddings [51.36007967653781]
We show that body embeddings perform better than face embeddings in medium-wave infrared (MWIR) and long-wave infrared (LWIR) domains.
We leverage a vision transformer architecture to establish benchmark results on the IJB-MDF dataset.
We also show that finetuning a body model, pretrained exclusively on VIS data, with a simple combination of cross-entropy and triplet losses achieves state-of-the-art mAP scores.
arXiv Detail & Related papers (2025-03-13T22:38:18Z)
- Infrared and Visible Image Fusion: From Data Compatibility to Task Adaption [65.06388526722186]
Infrared-visible image fusion is a critical task in computer vision.
There is a lack of recent comprehensive surveys that address this rapidly expanding domain.
We introduce a multi-dimensional framework to elucidate common learning-based IVIF methods.
arXiv Detail & Related papers (2025-01-18T13:17:34Z)
- See then Tell: Enhancing Key Information Extraction with Vision Grounding [54.061203106565706]
We introduce STNet (See then Tell Net), a novel end-to-end model designed to deliver precise answers with relevant vision grounding.
To enhance the model's seeing capabilities, we collect extensive structured table recognition datasets.
arXiv Detail & Related papers (2024-09-29T06:21:05Z)
- Classification of Inkjet Printers based on Droplet Statistics [1.237454174824584]
Knowing the printer model used to print a given document may provide a crucial lead towards identifying counterfeits or verifying the validity of a real document.
We investigate the utilization of droplet characteristics including frequency domain features extracted from printed document scans for the classification of the underlying printer model.
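As an illustration of frequency-domain features of the kind mentioned above, a radially averaged FFT magnitude spectrum is one common texture descriptor. The patch size and binning scheme here are assumptions for the sketch, not that paper's actual feature set:

```python
import numpy as np

def frequency_features(patch, n_bins=8):
    """Radially averaged FFT magnitude spectrum: a simple
    frequency-domain descriptor of droplet texture."""
    f = np.fft.fftshift(np.fft.fft2(patch.astype(float)))
    mag = np.abs(f)
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - cy, xx - cx)
    # Assign each pixel to a radial bin, then average magnitudes per bin.
    bins = np.minimum((r / (r.max() + 1e-9) * n_bins).astype(int), n_bins - 1)
    return np.array([mag[bins == b].mean() for b in range(n_bins)])

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
feats = frequency_features(patch)
print(feats.shape)  # (8,)
```

Such a vector could then feed any standard classifier; the cited work's contribution is in which droplet characteristics to extract, not in the classifier itself.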
arXiv Detail & Related papers (2024-06-26T10:20:01Z)
- Volumetric Fast Fourier Convolution for Detecting Ink on the Carbonized Herculaneum Papyri [23.090618261864886]
We propose a modification of the Fast Fourier Convolution operator for volumetric data and apply it in a segmentation architecture for ink detection on the Herculaneum papyri.
To encourage the research on this task and the application of the proposed operator to other tasks involving volumetric data, we will release our implementation.
arXiv Detail & Related papers (2023-08-09T17:00:43Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also offers an efficient quantisation method that effectively compresses the image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- InfraredTags: Embedding Invisible AR Markers and Barcodes Using Low-Cost, Infrared-Based 3D Printing and Imaging Tools [0.0]
We present InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye that can be 3D printed as part of objects.
We achieve this by printing objects from an infrared-transmitting filament, which infrared cameras can see through.
We built a user interface that facilitates the integration of common tags with the object geometry to make them 3D printable as InfraredTags.
arXiv Detail & Related papers (2022-02-12T23:45:18Z)
- MobileSal: Extremely Efficient RGB-D Salient Object Detection [62.04876251927581]
This paper introduces a novel network, MobileSal, which focuses on efficient RGB-D salient object detection (SOD).
We propose an implicit depth restoration (IDR) technique to strengthen the feature representation capability of mobile networks for RGB-D SOD.
With IDR and CPR incorporated, MobileSal performs favorably against state-of-the-art methods on seven challenging RGB-D SOD datasets.
arXiv Detail & Related papers (2020-12-24T04:36:42Z)
- Source Printer Identification from Document Images Acquired using Smartphone [14.889347839830092]
We propose to learn a single CNN model from the fusion of letter images and their printer-specific noise residuals.
The proposed method achieves 98.42% document classification accuracy using images of letter 'e' under a 5x2 cross-validation approach.
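A printer-specific noise residual of the kind mentioned above is commonly computed as the difference between an image and a denoised copy of it. The box blur below is a hypothetical stand-in for whatever denoiser that work actually uses; it only illustrates the residual idea:

```python
import numpy as np

def noise_residual(img, k=3):
    """Noise residual: image minus a k-by-k box-blurred copy.
    The blur is a crude placeholder for a real denoiser."""
    img = img.astype(float)
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    blurred = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    return img - blurred

img = np.zeros((16, 16))
img[8, 8] = 9.0  # an isolated ink dot
res = noise_residual(img)
print(res[8, 8])  # 9 - 9/9 = 8.0
```

The intuition is that scene content cancels out in the residual, leaving the sensor- and printer-specific noise pattern that a CNN can then learn to discriminate.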
arXiv Detail & Related papers (2020-03-27T18:59:32Z)
- Domain Adversarial Training for Infrared-colour Person Re-Identification [19.852463786440122]
Person re-identification (re-ID) is a very active area of research in computer vision.
Most methods only address the task of matching between colour images.
In poorly-lit environments CCTV cameras switch to infrared imaging.
We propose a part-feature extraction network to better focus on subtle, unique signatures on the person.
arXiv Detail & Related papers (2020-03-09T15:17:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.