E2ETag: An End-to-End Trainable Method for Generating and Detecting
Fiducial Markers
- URL: http://arxiv.org/abs/2105.14184v1
- Date: Sat, 29 May 2021 03:13:14 GMT
- Title: E2ETag: An End-to-End Trainable Method for Generating and Detecting
Fiducial Markers
- Authors: J. Brennan Peace, Eric Psota, Yanfeng Liu, Lance C. Pérez
- Abstract summary: E2ETag is an end-to-end trainable method for designing fiducial markers and a complementary detector.
It learns to generate markers that can be detected and classified in challenging real-world environments using a fully convolutional detector network.
Results demonstrate that E2ETag outperforms existing methods in ideal conditions.
- Score: 0.8602553195689513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing fiducial marker solutions are designed for efficient detection and
decoding; however, their ability to stand out in natural environments is
difficult to infer from relatively limited analysis. Furthermore, worsening
performance in challenging image capture scenarios - such as poor exposure,
motion blur, and off-axis viewing - sheds light on their limitations. E2ETag
introduces an end-to-end trainable method for designing fiducial markers and a
complementary detector. By introducing back-propagatable marker augmentation
and superimposition into training, the method learns to generate markers that
can be detected and classified in challenging real-world environments using a
fully convolutional detector network. Results demonstrate that E2ETag
outperforms existing methods in ideal conditions and performs much better in
the presence of motion blur, contrast fluctuations, noise, and off-axis viewing
angles. Source code and trained models are available at
https://github.com/jbpeace/E2ETag.
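The "back-propagatable superimposition" in the abstract can be illustrated with a toy numpy sketch. This is not the paper's implementation (which superimposes markers into full scenes, applies richer augmentations, and trains a fully convolutional detector network); it only shows why alpha-blending a marker into an image is differentiable in the marker pixels, so a detector loss can update the marker itself. The blend opacity, sizes, and the MSE stand-in for the detector loss are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a learnable 8x8 grayscale "marker" is
# alpha-blended into a background patch. Blending is linear in the
# marker, so gradients flow back to the marker pixels.
marker = rng.uniform(0.0, 1.0, size=(8, 8))      # learnable marker pixels
background = rng.uniform(0.0, 1.0, size=(8, 8))  # scene patch it lands on
target = np.ones((8, 8))                          # stand-in "detector" target
alpha = 0.7                                       # blending opacity

def superimpose(m):
    """Differentiable superimposition: I = alpha*m + (1 - alpha)*background."""
    return alpha * m + (1.0 - alpha) * background

def loss_and_grad(m):
    """MSE against the target, and its gradient w.r.t. the marker pixels."""
    residual = superimpose(m) - target
    loss = np.mean(residual ** 2)
    # Chain rule: dL/dm = dL/dI * dI/dm = (2/N) * residual * alpha
    grad = (2.0 / residual.size) * residual * alpha
    return loss, grad

loss0, _ = loss_and_grad(marker)
for _ in range(200):                  # plain gradient descent on the marker
    _, g = loss_and_grad(marker)
    marker -= 1.0 * g
loss1, _ = loss_and_grad(marker)      # loss drops: the marker itself adapted
```

In E2ETag the same principle is applied end to end: because superimposition and augmentation stay differentiable, the marker design and the detector are optimized jointly rather than designed by hand.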
Related papers
- YoloTag: Vision-based Robust UAV Navigation with Fiducial Markers [2.7855886538423182]
We propose YoloTag, a real-time fiducial marker-based localization system.
YoloTag uses a lightweight YOLO v8 object detector to accurately detect fiducial markers in images.
The detected markers are then used by an efficient perspective-n-point algorithm to estimate UAV states.
arXiv Detail & Related papers (2024-09-03T23:42:19Z)
- Learning Camouflaged Object Detection from Noisy Pseudo Label [60.9005578956798]
This paper introduces the first weakly semi-supervised Camouflaged Object Detection (COD) method.
It aims for budget-efficient and high-precision camouflaged object segmentation with an extremely limited number of fully labeled images.
We propose a noise correction loss that facilitates the model's learning of correct pixels in the early learning stage.
When using only 20% of fully labeled data, our method shows superior performance over the state-of-the-art methods.
arXiv Detail & Related papers (2024-07-18T04:53:51Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- DeepFormableTag: End-to-end Generation and Recognition of Deformable Fiducial Markers [27.135078472097895]
Existing detection methods assume that markers are printed on ideally planar surfaces.
A fiducial marker generator creates a set of free-form color patterns to encode a significantly larger amount of information.
A differentiable image simulator creates a training dataset of photorealistic scene images with the deformed markers.
A trained marker detector seeks the regions of interest and recognizes multiple marker patterns simultaneously.
arXiv Detail & Related papers (2022-06-16T09:29:26Z)
- Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally has higher pixel-labeling quality without the effort of manual annotations.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z)
- Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework achieves significant performance gains compared with existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z)
- Application of Ghost-DeblurGAN to Fiducial Marker Detection [1.1470070927586016]
This paper develops a lightweight generative adversarial network, named Ghost-DeblurGAN, for real-time motion deblurring.
A new large-scale dataset, YorkTag, is proposed that provides pairs of sharp/blurred images containing fiducial markers.
With the proposed model trained and tested on YorkTag, it is demonstrated that when applied along with fiducial marker systems to motion-blurred images, Ghost-DeblurGAN improves the marker detection significantly.
arXiv Detail & Related papers (2021-09-08T00:59:10Z)
- DeepTag: A General Framework for Fiducial Marker Design and Detection [1.2180122937388957]
We propose a general deep learning based framework, DeepTag, for fiducial marker design and detection.
DeepTag supports detection of a wide variety of existing marker families and makes it possible to design new marker families with customized local patterns.
Experiments show that DeepTag supports different marker families well and greatly outperforms existing methods in terms of both detection robustness and pose accuracy.
arXiv Detail & Related papers (2021-05-28T10:54:59Z)
- Dense Label Encoding for Boundary Discontinuity Free Rotation Detection [69.75559390700887]
This paper explores a relatively less-studied methodology based on classification.
We propose new techniques to push its frontier in two aspects.
Experiments and visual analysis on large-scale public datasets for aerial images show the effectiveness of our approach.
arXiv Detail & Related papers (2020-11-19T05:42:02Z)
- EHSOD: CAM-Guided End-to-end Hybrid-Supervised Object Detection with Cascade Refinement [53.69674636044927]
We present EHSOD, an end-to-end hybrid-supervised object detection system.
It can be trained in one shot on both fully and weakly-annotated data.
It achieves comparable results on multiple object detection benchmarks with only 30% fully-annotated data.
arXiv Detail & Related papers (2020-02-18T08:04:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.