Artificial Text Detection via Examining the Topology of Attention Maps
- URL: http://arxiv.org/abs/2109.04825v1
- Date: Fri, 10 Sep 2021 12:13:45 GMT
- Title: Artificial Text Detection via Examining the Topology of Attention Maps
- Authors: Laida Kushnareva, Daniil Cherniavskii, Vladislav Mikhailov, Ekaterina
Artemova, Serguei Barannikov, Alexander Bernstein, Irina Piontkovskaya,
Dmitri Piontkovski, Evgeny Burnaev
- Abstract summary: We propose three novel types of interpretable topological features for this task based on Topological Data Analysis (TDA)
We empirically show that the features derived from the BERT model outperform count- and neural-based baselines by up to 10% on three common datasets.
A probing analysis of the features reveals their sensitivity to surface and syntactic properties.
- Score: 58.46367297712477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The impressive capabilities of recent generative models to create texts that
are challenging to distinguish from human-written ones can be misused for
generating fake news, product reviews, and even abusive content. Despite the
strong performance of existing methods for artificial text detection, they
still lack interpretability and robustness towards unseen models. To this end,
we propose three novel types of interpretable topological features for this
task based on Topological Data Analysis (TDA), which is currently understudied
in the field of NLP. We empirically show that the features derived from the
BERT model outperform count- and neural-based baselines by up to 10% on three
common datasets, and tend to be more robust towards unseen GPT-style
generation models than existing methods. A probing analysis of the
features reveals their sensitivity to surface and syntactic properties. The
results demonstrate that TDA is a promising direction for NLP tasks,
specifically those that incorporate surface and structural information.
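As a loose illustration of the idea, the sketch below extracts BERT attention maps and computes simple graph-level topological statistics (edge count and Betti-0, i.e. the number of connected components) over thresholded attention graphs. The model name, threshold value, and choice of features are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch, assuming the `torch`, `transformers`, and `networkx` packages.
# The 0.1 threshold and the (edges, Betti-0) features are hypothetical choices.
import networkx as nx
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def topological_features(text: str, threshold: float = 0.1):
    """Per-head features of the last attention layer: (edge count, Betti-0)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # attentions: one (batch, heads, seq, seq) tensor per layer
        attentions = model(**inputs).attentions
    features = []
    for head in attentions[-1][0]:  # iterate over heads of the last layer
        # Keep only edges whose attention weight exceeds the threshold.
        adjacency = (head >= threshold).numpy().astype(int)
        graph = nx.from_numpy_array(adjacency)
        # Betti-0 of a graph equals its number of connected components.
        features.append((graph.number_of_edges(),
                         nx.number_connected_components(graph)))
    return features

print(topological_features("Machine-generated text can be hard to spot."))
```

In the general shape of the paper's approach, such per-head statistics would then feed an interpretable classifier (e.g., logistic regression) trained to separate human-written from generated texts.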
Related papers
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors for distinguishing machine-generated from human-written texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics [49.9329723199239]
We propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples.
We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics.
When our characterization method is applied to the training set, models trained with only a fraction of the data achieve comparable performance to those trained on the full dataset.
arXiv Detail & Related papers (2024-10-04T13:39:21Z)
- Challenges and Opportunities in Text Generation Explainability [12.089513278445704]
This paper outlines 17 challenges categorized into three groups that arise during the development and assessment of explainability methods.
These challenges encompass issues concerning tokenization, defining explanation similarity, determining token importance and prediction change metrics, the level of human intervention required, and the creation of suitable test datasets.
The paper illustrates how these challenges can be intertwined, showcasing new opportunities for the community.
arXiv Detail & Related papers (2024-05-14T09:44:52Z)
- Towards Robustness of Text-to-Visualization Translation against Lexical and Phrasal Variability [27.16741353384065]
Text-to-vis models often rely on lexical matching between words in the questions and tokens in data schemas.
In this study, we examine the robustness of current text-to-vis models, an area that has not previously been explored.
We propose GRED, a novel framework based on the Retrieval-Augmented Generation (RAG) technique, specifically designed to address two variants of input perturbations.
arXiv Detail & Related papers (2024-04-10T16:12:50Z)
- Beyond Turing: A Comparative Analysis of Approaches for Detecting Machine-Generated Text [1.919654267936118]
Traditional shallow learning, Language Model (LM) fine-tuning, and Multilingual Model fine-tuning are evaluated.
Results reveal considerable differences in performance across methods.
This study paves the way for future research aimed at creating robust and highly discriminative models.
arXiv Detail & Related papers (2023-11-21T06:23:38Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- On the Explainability of Natural Language Processing Deep Models [3.0052400859458586]
Methods have been developed to address these challenges and provide satisfactory explanations of Natural Language Processing (NLP) models.
Motivated to democratize ExAI methods in the NLP field, we present a survey that studies model-agnostic as well as model-specific explainability methods for NLP models.
arXiv Detail & Related papers (2022-10-13T11:59:39Z)
- Modeling Multi-Granularity Hierarchical Features for Relation Extraction [26.852869800344813]
We propose a novel method to extract multi-granularity features based solely on the original input sentences.
We show that effective structured features can be attained even without external knowledge.
arXiv Detail & Related papers (2022-04-09T09:44:05Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of the framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- Reverse Engineering Configurations of Neural Text Generation Models [86.9479386959155]
The study of artifacts that emerge in machine-generated text as a result of modeling choices is a nascent research area.
We conduct an extensive suite of diagnostic tests to observe whether modeling choices leave detectable artifacts in the text they generate.
Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone.
arXiv Detail & Related papers (2020-04-13T21:02:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.