A New High-Performance Approach to Approximate Pattern-Matching for
Plagiarism Detection in Blockchain-Based Non-Fungible Tokens (NFTs)
- URL: http://arxiv.org/abs/2205.14492v1
- Date: Sat, 28 May 2022 17:53:20 GMT
- Title: A New High-Performance Approach to Approximate Pattern-Matching for
Plagiarism Detection in Blockchain-Based Non-Fungible Tokens (NFTs)
- Authors: Ciprian Pungila, Darius Galis, Viorel Negru
- Abstract summary: We present a fast and innovative approach to performing approximate pattern-matching for plagiarism detection using an NDFA-based approach.
We outline the advantages of our approach in the context of blockchain-based non-fungible tokens (NFTs).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a fast and innovative approach to approximate
pattern-matching for plagiarism detection, using an NDFA-based technique that
significantly enhances performance compared to other existing similarity
measures. We outline the advantages of our approach in the context of
blockchain-based non-fungible tokens (NFTs). We present, formalize, discuss and
test our proposed approach in several real-world scenarios and with different
similarity measures commonly used in plagiarism detection, observing
significant throughput enhancements across the entire spectrum of tests, with
little to no compromise in the overall accuracy of the detection process. We
conclude that our approach is well suited to approximate pattern-matching for
plagiarism detection, and we outline research directions for future
improvements.
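The abstract does not include the matcher itself. As a rough illustration of NDFA-style approximate pattern-matching in general (not the authors' implementation; the function name and interface are hypothetical), the sketch below simulates a Levenshtein automaton over the text via its equivalent dynamic program, where each column corresponds to the automaton's set of active states:

```python
# Hypothetical sketch of NDFA-based approximate matching: find occurrences of
# `pattern` in `text` with at most `k` edit errors (Sellers-style simulation
# of the Levenshtein NFA; each `state` column is the active-state set).

def approx_find(pattern: str, text: str, k: int):
    """Yield end positions in `text` where `pattern` matches with <= k edits."""
    m = len(pattern)
    # state[i] = minimal edits to match pattern[:i] ending at current position
    state = list(range(m + 1))
    for pos, ch in enumerate(text):
        prev = state
        state = [0]  # a match may start at any position in the text
        for i in range(1, m + 1):
            cost = 0 if pattern[i - 1] == ch else 1
            state.append(min(prev[i - 1] + cost,   # match / substitute
                             prev[i] + 1,          # delete from text
                             state[i - 1] + 1))    # insert into text
        if state[m] <= k:
            yield pos
```

With `k = 0` this degenerates to exact matching; raising `k` tolerates typos and small edits, which is the kind of tolerance plagiarism detection needs.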
Related papers
- Token-Level Adversarial Prompt Detection Based on Perplexity Measures
and Contextual Information [67.78183175605761]
Large Language Models are susceptible to adversarial prompt attacks.
This vulnerability underscores a significant concern regarding the robustness and reliability of LLMs.
We introduce a novel approach to detecting adversarial prompts at a token level.
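As a toy illustration of the perplexity-based signal described above (an assumption-laden sketch, not the paper's method: it presumes per-token negative log-likelihoods have already been computed by a language model, and the z-score threshold is arbitrary):

```python
# Hypothetical sketch: flag tokens whose negative log-likelihood under a
# language model spikes well above the prompt's average -- a simple
# perplexity-style cue for token-level adversarial prompt detection.

import math

def flag_suspicious_tokens(token_nlls, z_thresh=2.0):
    """Return indices of tokens whose NLL is > z_thresh std-devs above the mean.

    `token_nlls`: per-token negative log-likelihoods (assumed precomputed).
    """
    n = len(token_nlls)
    mean = sum(token_nlls) / n
    var = sum((x - mean) ** 2 for x in token_nlls) / n
    std = math.sqrt(var) or 1.0  # guard against a zero-variance sequence
    return [i for i, x in enumerate(token_nlls) if (x - mean) / std > z_thresh]
```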
arXiv Detail & Related papers (2023-11-20T03:17:21Z)
- A Minimax Approach Against Multi-Armed Adversarial Attacks Detection [31.971443221041174]
Multi-armed adversarial attacks have been shown to be highly successful in fooling state-of-the-art detectors.
We propose a solution that aggregates the soft-probability outputs of multiple pre-trained detectors according to a minimax approach.
We show that our aggregation consistently outperforms individual state-of-the-art detectors against multi-armed adversarial attacks.
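A minimal sketch of the minimax aggregation idea (the weighting scheme, grid search, and two-detector restriction are assumptions for illustration, not the paper's exact formulation): pick aggregation weights that minimize the worst-case error rate across attack types.

```python
# Hypothetical sketch: aggregate soft-probability outputs of two pre-trained
# detectors as w*d0 + (1-w)*d1, choosing w to minimize the maximum error rate
# over all attack types (a minimax criterion).

def minimax_weights(per_attack_scores, labels, steps=100):
    """per_attack_scores: {attack: [(d0_prob, d1_prob), ...]}
    labels: {attack: [0/1 ground truth per sample]}
    Returns (best_w, worst_case_error)."""
    best_w, best_worst = 0.0, float("inf")
    for s in range(steps + 1):
        w = s / steps
        worst = 0.0
        for attack, scores in per_attack_scores.items():
            errs = sum(
                ((w * d0 + (1 - w) * d1) >= 0.5) != bool(y)
                for (d0, d1), y in zip(scores, labels[attack])
            )
            worst = max(worst, errs / len(scores))
        if worst < best_worst:
            best_w, best_worst = w, worst
    return best_w, best_worst
```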
arXiv Detail & Related papers (2023-02-04T18:21:22Z)
- Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for this is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z)
- Demystifying Unsupervised Semantic Correspondence Estimation [13.060538447838303]
We explore semantic correspondence estimation through the lens of unsupervised learning.
We thoroughly evaluate several recently proposed unsupervised methods across multiple challenging datasets.
We introduce a new unsupervised correspondence approach which utilizes the strength of pre-trained features while encouraging better matches during training.
arXiv Detail & Related papers (2022-07-11T17:59:51Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
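The distributional-embedding idea above can be illustrated in miniature (this is a generic reparameterization sketch under assumed Gaussian latents, not the paper's model): each node carries a mean and log-variance, and representations are drawn rather than fixed.

```python
# Hypothetical illustration: represent a node by a diagonal Gaussian in latent
# space and draw reparameterized samples, instead of one deterministic vector.

import math
import random

def sample_node_embedding(mu, log_var, rng=random):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1)."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]
```

As the variance shrinks, samples collapse onto the mean, recovering the deterministic-embedding special case.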
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- A Low Rank Promoting Prior for Unsupervised Contrastive Learning [108.91406719395417]
We construct a novel probabilistic graphical model that effectively incorporates the low rank promoting prior into the framework of contrastive learning.
Our hypothesis explicitly requires that all the samples belonging to the same instance class lie on the same subspace with small dimension.
Empirical evidence shows that the proposed algorithm clearly surpasses the state-of-the-art approaches on multiple benchmarks.
arXiv Detail & Related papers (2021-08-05T15:58:25Z)
- Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis [54.94682858474711]
Class Activation Mapping (CAM) approaches provide an effective visualization by taking weighted averages of the activation maps.
We propose a novel set of metrics to quantify explanation maps, which show better effectiveness and simplify comparisons between approaches.
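The weighted-average construction mentioned above is easy to show concretely (a textbook CAM sketch with plain lists for clarity, not the paper's metric code): the map for a class is the sum of the final convolutional activation maps, each scaled by that class's classifier weight.

```python
# Hypothetical sketch of the basic CAM computation: a class activation map is
# the classifier-weight-weighted sum of the final conv layer's activation maps.

def class_activation_map(activations, class_weights):
    """activations: list of HxW maps (lists of lists);
    class_weights: one scalar weight per map for the target class."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(activations, class_weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    return cam
```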
arXiv Detail & Related papers (2021-04-20T21:34:24Z)
- Uncertainty Surrogates for Deep Learning [17.868995105624023]
We introduce a novel way of estimating prediction uncertainty in deep networks through the use of uncertainty surrogates.
These surrogates are features of the penultimate layer of a deep network that are forced to match predefined patterns.
We show how our approach can be used for estimating uncertainty in prediction and out-of-distribution detection.
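One plausible reading of the surrogate idea above (the scoring rule here is an assumption for illustration, not the paper's definition): if penultimate-layer features are trained to match a predefined pattern, their distance from that pattern can serve as an uncertainty score.

```python
# Hypothetical sketch: score uncertainty as the distance between a
# penultimate-layer feature vector and the predefined pattern it was trained
# to match; a large distance suggests an uncertain or out-of-distribution input.

import math

def surrogate_uncertainty(features, pattern):
    """Euclidean distance between the feature vector and its target pattern."""
    return math.sqrt(sum((f - p) ** 2 for f, p in zip(features, pattern)))
```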
arXiv Detail & Related papers (2021-04-16T14:50:28Z)
- Incremental Verification of Fixed-Point Implementations of Neural Networks [0.19573380763700707]
We develop and evaluate a novel symbolic verification framework using incremental bounded model checking (BMC), satisfiability modulo theories (SMT), and invariant inference.
Our approach was able to verify and produce examples for 85.8% of 21 test cases considering different input images, and 100% of the properties related to covering methods.
arXiv Detail & Related papers (2020-12-21T10:03:44Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)