InfraredTags: Embedding Invisible AR Markers and Barcodes Using
Low-Cost, Infrared-Based 3D Printing and Imaging Tools
- URL: http://arxiv.org/abs/2202.06165v1
- Date: Sat, 12 Feb 2022 23:45:18 GMT
- Title: InfraredTags: Embedding Invisible AR Markers and Barcodes Using
Low-Cost, Infrared-Based 3D Printing and Imaging Tools
- Authors: Mustafa Doga Dogan (1), Ahmad Taka (1), Michael Lu (1), Yunyi Zhu (1),
Akshat Kumar (1), Aakar Gupta (2), Stefanie Mueller (1) ((1) MIT CSAIL,
Cambridge, MA, USA, (2) Facebook Reality Labs, Redmond, WA, USA)
- Abstract summary: We present InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye that can be 3D printed as part of objects.
We achieve this by printing objects from an infrared-transmitting filament, which infrared cameras can see through.
We built a user interface that facilitates the integration of common tags with the object geometry to make them 3D printable as InfraredTags.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing approaches for embedding unobtrusive tags inside 3D objects require
either complex fabrication or high-cost imaging equipment. We present
InfraredTags, which are 2D markers and barcodes imperceptible to the naked eye
that can be 3D printed as part of objects, and detected rapidly by low-cost
near-infrared cameras. We achieve this by printing objects from an
infrared-transmitting filament, which infrared cameras can see through, and by
having air gaps inside for the tag's bits, which appear at a different
intensity in the infrared image.
We built a user interface that facilitates the integration of common tags (QR
codes, ArUco markers) with the object geometry to make them 3D printable as
InfraredTags. We also developed a low-cost infrared imaging module that
augments existing mobile devices and decodes tags using our image processing
pipeline. Our evaluation shows that the tags can be detected with little
near-infrared illumination (0.2 lux) and from distances as far as 250 cm. We
demonstrate how our method enables various applications, such as object
tracking and embedding metadata for augmented reality and tangible
interactions.
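The decoding side of such a pipeline is straightforward to prototype with off-the-shelf tools. The sketch below is a hypothetical reconstruction, not the authors' released code: it assumes the NIR camera module enumerates as a standard OpenCV capture device, that the embedded tag is a QR code, and that the contrast and thresholding parameters (CLAHE clip limit, adaptive-threshold block size) are illustrative stand-ins rather than the paper's tuned values.

```python
# Minimal sketch of an InfraredTags-style decoding pipeline (assumptions:
# the NIR camera appears as a regular OpenCV capture device, the embedded
# tag is a QR code, and all tuning constants are illustrative).
import cv2

def decode_infrared_tag(device_index: int = 0):
    cap = cv2.VideoCapture(device_index)   # NIR imaging module as UVC device
    detector = cv2.QRCodeDetector()
    try:
        ok, frame = cap.read()
        if not ok:
            return None
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Air gaps transmit IR differently than solid filament, so tag bits
        # show up as subtle intensity differences; stretch the contrast first.
        norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
        enhanced = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)).apply(norm)
        # Adaptive thresholding separates bit regions under uneven illumination.
        binary = cv2.adaptiveThreshold(enhanced, 255,
                                       cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                       cv2.THRESH_BINARY, 31, 5)
        data, points, _ = detector.detectAndDecode(binary)
        return data or None
    finally:
        cap.release()
```

An ArUco-based variant of this sketch would swap the QR detector for OpenCV's marker detector (cv2.aruco in recent OpenCV releases); the contrast-enhancement steps would stay the same.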
Related papers
- Multistream Network for LiDAR and Camera-based 3D Object Detection in Outdoor Scenes [59.78696921486972]
Fusion of LiDAR and RGB data has the potential to enhance outdoor 3D object detection accuracy.
We propose a MultiStream Detection (MuStD) network that meticulously extracts task-relevant information from both data modalities.
arXiv Detail & Related papers (2025-07-25T14:20:16Z)
- IRSAM: Advancing Segment Anything Model for Infrared Small Target Detection [55.554484379021524]
The Infrared Small Target Detection (IRSTD) task falls short of satisfactory performance due to a notable domain gap between natural and infrared images.
We propose the IRSAM model for IRSTD, which improves SAM's encoder-decoder architecture to learn better feature representation of infrared small objects.
arXiv Detail & Related papers (2024-07-10T10:17:57Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, fusion for detection can be effectively performed by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
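As a rough illustration of the box-matching idea in the entry above (a generic reconstruction under assumed inputs, not FBMNet itself), one can project 3D proposals into the image, assign them to 2D proposals by IoU, and concatenate the matched ROI features:

```python
# Illustrative sketch of proposal-level box matching (not the authors' code):
# greedily match projected 3D proposals to 2D proposals by IoU, then fuse
# each matched pair's ROI features by concatenation.
import numpy as np

def iou_2d(a: np.ndarray, b: np.ndarray) -> float:
    # a, b: [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_and_fuse(projected_3d_boxes, boxes_2d, feats_3d, feats_2d,
                   iou_thresh=0.5):
    """Greedy IoU assignment between projected 3D and 2D proposals;
    fused feature = concat(3D ROI feature, matched 2D ROI feature)."""
    fused, used = [], set()
    for i, box3d in enumerate(projected_3d_boxes):
        best_j, best_iou = -1, iou_thresh
        for j, box2d in enumerate(boxes_2d):
            if j in used:
                continue
            iou = iou_2d(box3d, box2d)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            fused.append(np.concatenate([feats_3d[i], feats_2d[best_j]]))
    return fused
```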
- ImLiDAR: Cross-Sensor Dynamic Message Propagation Network for 3D Object Detection [20.44294678711783]
We propose ImLiDAR, a new 3D object detection (3OD) paradigm that narrows cross-sensor discrepancies by progressively fusing the multi-scale features of camera images and LiDAR point clouds.
First, we propose a cross-sensor dynamic message propagation module to combine the best of the multi-scale image and point features.
Second, we formulate a direct set prediction problem that allows designing an effective set-based detector.
arXiv Detail & Related papers (2022-11-17T13:31:23Z)
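One common way such progressive multi-scale fusion can be realized (a sketch under assumptions, not the paper's actual module) is to project each LiDAR point into every level of an image feature pyramid, sample features there, and append them to the point features; the projection matrix and the full-resolution image size below are placeholders:

```python
# Illustrative multi-scale image-to-point fusion (not the authors' module).
# The calibration matrix, feature shapes, and full-res image size (assumed
# KITTI-like 1242x375) are placeholder assumptions.
import numpy as np

def sample_image_feats(feat_map: np.ndarray, uv: np.ndarray) -> np.ndarray:
    """Nearest-neighbor sampling of an (H, W, C) map at pixel coords uv (N, 2)."""
    h, w, _ = feat_map.shape
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return feat_map[v, u]

def progressive_fusion(points_xyz, point_feats, image_pyramid, proj):
    """points_xyz: (N, 3); point_feats: (N, Cp);
    image_pyramid: list of (H_s, W_s, Ci) maps, coarse to fine;
    proj: 3x4 camera projection matrix from calibration (assumed given)."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = homo @ proj.T                              # homogeneous pixel coords
    uv_full = cam[:, :2] / np.clip(cam[:, 2:3], 1e-6, None)
    fused = point_feats
    for feat_map in image_pyramid:                   # fuse one scale at a time
        scale = np.array([feat_map.shape[1], feat_map.shape[0]], float)
        uv = uv_full * scale / np.array([1242.0, 375.0])
        fused = np.hstack([fused, sample_image_feats(feat_map, uv)])
    return fused
```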
- RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context [0.25019493958767397]
Wireless tags are increasingly used to track and identify common items of interest such as retail goods, food, medicine, clothing, books, documents, keys, equipment, and more.
We present RF-Annotate, a pipeline for autonomous pixel-wise image annotation which enables robots to collect labelled visual data of objects of interest as they encounter them within their environment.
arXiv Detail & Related papers (2022-11-16T11:25:38Z)
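A minimal sketch of how RF supervision can produce image labels, assuming a tag position already recovered by RF localization and known camera intrinsics; the box-size heuristic and label format are illustrative, not taken from the paper:

```python
# Hypothetical RF-supervised labelling step: project an RF-localized tag
# position into the camera image and emit a coarse bounding-box label.
# Intrinsics, image size, and the box heuristic are all assumptions.
import numpy as np

def rf_supervised_label(tag_xyz, tag_id, intrinsics,
                        img_w=640, img_h=480, half_px=40.0):
    """tag_xyz: (3,) tag position in camera coordinates, e.g. from RFID
    phase-based localization; intrinsics: 3x3 camera matrix."""
    if tag_xyz[2] <= 0:                        # behind the camera: no label
        return None
    uvw = intrinsics @ tag_xyz
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    if not (0 <= u < img_w and 0 <= v < img_h):
        return None
    half = half_px / tag_xyz[2]                # crude pinhole size heuristic
    box = [max(0.0, u - half), max(0.0, v - half),
           min(img_w - 1.0, u + half), min(img_h - 1.0, v + half)]
    return {"id": tag_id, "bbox": box}
```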
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution also integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the camera employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Robust Environment Perception for Automated Driving: A Unified Learning Pipeline for Visual-Infrared Object Detection [2.478658210785]
We exploit both visual and thermal perception units for robust object detection purposes.
arXiv Detail & Related papers (2022-06-08T15:02:58Z)
- Drone Object Detection Using RGB/IR Fusion [1.5469452301122175]
We develop strategies for creating synthetic IR images using the AIRSim simulation engine and CycleGAN.
We utilize an illumination-aware fusion framework to fuse RGB and IR images for object detection on the ground.
Our solution is implemented on an NVIDIA Jetson Xavier running on an actual drone, requiring about 28 milliseconds of processing per RGB/IR image pair.
arXiv Detail & Related papers (2022-01-11T05:15:59Z)
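One plausible reading of an illumination-aware fusion scheme (an assumption-laden sketch, not the paper's framework) weights each matched pair of detections by an illumination estimate computed from the RGB frame:

```python
# Illustrative illumination-aware late fusion: estimate scene brightness
# from the RGB frame and blend per-detection confidences from the RGB and
# IR branches. Thresholds and the matching assumption are illustrative.
import numpy as np

def illumination_weight(rgb_frame: np.ndarray) -> float:
    """Map mean luminance of an (H, W, 3) uint8 frame to [0, 1]:
    bright scenes trust RGB, dark scenes trust IR."""
    luminance = rgb_frame.astype(float).mean() / 255.0
    return float(np.clip((luminance - 0.2) / 0.6, 0.0, 1.0))

def fuse_confidences(rgb_dets, ir_dets, w_rgb: float):
    """Each det: (box, score). Assumes the two lists were already matched
    across modalities (e.g., by IoU), pairwise in order."""
    fused = []
    for (box_rgb, s_rgb), (_, s_ir) in zip(rgb_dets, ir_dets):
        fused.append((box_rgb, w_rgb * s_rgb + (1.0 - w_rgb) * s_ir))
    return fused
```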
- Infrared Small-Dim Target Detection with Transformer under Complex Backgrounds [155.388487263872]
We propose a new infrared small-dim target detection method based on the transformer.
We adopt the self-attention mechanism of the transformer to learn the interaction information of image features over a larger range.
We also design a feature enhancement module to learn more features of small-dim targets.
arXiv Detail & Related papers (2021-09-29T12:23:41Z)
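The benefit of self-attention in this setting is that every spatial location can aggregate evidence from every other location, which helps when the target occupies only a few dim pixels. A minimal numpy rendering of plain single-head self-attention follows (illustrative dimensions and random weights, not the paper's architecture):

```python
# Minimal self-attention over a flattened feature map: each location
# attends to all others, aggregating long-range context.
import numpy as np

def self_attention(feats: np.ndarray, wq, wk, wv) -> np.ndarray:
    """feats: (H*W, C) flattened feature map; wq/wk/wv: (C, D) projections."""
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])        # (HW, HW) pairwise interactions
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v                               # context-enriched features

# Toy usage: a 16x16 feature map with 32 channels and random projections.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16 * 16, 32))
wq, wk, wv = (rng.normal(size=(32, 32)) * 0.1 for _ in range(3))
out = self_attention(feats, wq, wk, wv)           # shape (256, 32)
```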
- EagerMOT: 3D Multi-Object Tracking via Sensor Fusion [68.8204255655161]
Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal.
We propose EagerMOT, a simple tracking formulation that integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics.
arXiv Detail & Related papers (2021-04-29T22:30:29Z)
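In this spirit, a stripped-down greedy association step over fused detections might look as follows; the distance gating and centroid representation are simplifying assumptions rather than EagerMOT's actual two-stage matching:

```python
# Simplified track-to-detection association (an interpretation, not the
# reference implementation): sort candidate pairs by 3D centroid distance
# and greedily assign, gating out pairs that are too far apart.
import numpy as np

def greedy_associate(tracks, detections, max_dist=2.0):
    """tracks/detections: lists of (3,) 3D centroids.
    Returns (matches, unmatched_track_ids, unmatched_det_ids)."""
    pairs = [(np.linalg.norm(t - d), ti, di)
             for ti, t in enumerate(tracks)
             for di, d in enumerate(detections)]
    pairs.sort(key=lambda p: p[0])                 # closest pairs first
    matches, used_t, used_d = [], set(), set()
    for dist, ti, di in pairs:
        if dist > max_dist:                        # distance gate
            break
        if ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    unmatched_t = [i for i in range(len(tracks)) if i not in used_t]
    unmatched_d = [i for i in range(len(detections)) if i not in used_d]
    return matches, unmatched_t, unmatched_d
```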
- Deep Continuous Fusion for Multi-Sensor 3D Object Detection [103.5060007382646]
We propose a novel 3D object detector that can exploit both LiDAR and cameras to perform very accurate localization.
We design an end-to-end learnable architecture that exploits continuous convolutions to fuse image and LiDAR feature maps at different levels of resolution.
arXiv Detail & Related papers (2020-12-20T18:43:41Z)
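A rough interpretation of a continuous-fusion-style layer (not the paper's implementation): for each target BEV cell, gather the nearest LiDAR points and pool their per-point image features together with geometric offsets through a small MLP; the fixed random weights below stand in for learned parameters:

```python
# Continuous-fusion-style layer sketch: K-nearest-neighbor gathering of
# per-point image features into a BEV grid, pooled by a one-layer MLP over
# (feature, offset). Weights are random placeholders for learned ones.
import numpy as np

def continuous_fusion(bev_xy, lidar_xy, point_img_feats,
                      k=4, out_dim=16, seed=0):
    """bev_xy: (M, 2) target cell centers; lidar_xy: (N, 2) point locations;
    point_img_feats: (N, C) image features already gathered per point."""
    rng = np.random.default_rng(seed)
    c = point_img_feats.shape[1]
    w = rng.normal(scale=0.1, size=(c + 2, out_dim))   # assumed MLP weight
    out = np.zeros((len(bev_xy), out_dim))
    for m, cell in enumerate(bev_xy):
        d = np.linalg.norm(lidar_xy - cell, axis=1)
        nn = np.argsort(d)[:k]                         # K nearest neighbors
        offsets = lidar_xy[nn] - cell                  # geometric offset each
        h = np.hstack([point_img_feats[nn], offsets]) @ w
        out[m] = np.maximum(h, 0).sum(axis=0)          # ReLU, then sum-pool
    return out
```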
- 3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views -- with Application to Fire-fighting Robots [1.9420928933791046]
This project integrates infrared and RGB imagery to produce dense 3D environment models reconstructed from multiple views.
The resulting 3D map contains both thermal and RGB information which can be used in robotic fire-fighting applications to identify victims and active fire areas.
arXiv Detail & Related papers (2020-07-29T05:19:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.