UID as a Guiding Metric for Automated Authorship Obfuscation
- URL: http://arxiv.org/abs/2312.03709v1
- Date: Sun, 5 Nov 2023 22:16:37 GMT
- Title: UID as a Guiding Metric for Automated Authorship Obfuscation
- Authors: Nicholas Abegg
- Abstract summary: Automated authorship attributors can identify the author of a text from a pool of candidate authors with great accuracy.
To counter the rise of these automated attributors, automated obfuscators have emerged in turn.
We devised three novel authorship obfuscation methods that utilize the psycholinguistic theory of Uniform Information Density (UID).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Protecting the anonymity of authors has become a difficult task given the
rise of automated authorship attributors. These attributors can identify the
author of a text from a pool of candidate authors with great accuracy. To
counter the rise of these automated attributors, automated obfuscators have
emerged in turn. These obfuscators take some text, perturb it in some manner,
and, if successful, deceive an automated attributor into attributing the text
to the wrong author. We devised three novel authorship obfuscation methods
that utilize the psycholinguistic theory of Uniform Information Density (UID).
This theory states that humans distribute information evenly across speech or
text so as to maximize efficiency. Applying this theory in our three
obfuscation methods, we examined how successfully we could deceive two
separate attributors. Obfuscating 50 human-written and 50 GPT-3-generated
articles from the TuringBench dataset, we observed how well each method
deceived the attributors. While the quality of the obfuscation was high in
terms of semantic preservation and sensible changes, we found no evidence that
UID is a viable guiding metric for obfuscation. However, due to time
constraints, we were unable to test a large enough sample of articles or to
tune the parameters of our attributors to comment conclusively on the role of
UID in obfuscation.
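For illustration, UID is commonly operationalized as the evenness of per-token surprisal under a language model: the lower the variance of surprisals across a text, the more uniform its information density. The sketch below is one hypothetical operationalization of such a score, assuming GPT-2 via the Hugging Face transformers library; it is not the paper's exact metric.

```python
# A minimal sketch of a UID-style score: the variance of per-token
# surprisal under GPT-2. Lower variance = more uniform information
# density. Hypothetical illustration, not the paper's exact metric.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def uid_variance(text: str) -> float:
    """Variance of per-token surprisal (in nats); lower means more uniform."""
    ids = tokenizer(text, return_tensors="pt").input_ids  # shape (1, T)
    with torch.no_grad():
        logits = model(ids).logits                        # shape (1, T, V)
    # Surprisal of token t is -log p(token_t | tokens_<t), so the logits
    # at position t-1 score token t.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    surprisals = -log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    return surprisals.var().item()

print(uid_variance("The cat sat on the mat because it was tired."))
```

A UID-guided obfuscator could, for example, accept a candidate word-level perturbation only if it shifts this score in the desired direction while preserving the original meaning.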
Related papers
- Keep It Private: Unsupervised Privatization of Online Text [13.381890596224867]
We introduce an automatic text privatization framework that fine-tunes a large language model via reinforcement learning to produce rewrites that balance soundness, sense, and privacy.
We evaluate it extensively on a large-scale test set of English Reddit posts by 68k authors, composed of short- to medium-length texts.
arXiv Detail & Related papers (2024-05-16T17:12:18Z)
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier either by concealing their writing style or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models [53.83273575102087]
We introduce JAMDEC, a user-controlled, unsupervised inference-time algorithm for authorship obfuscation.
Our approach builds on small language models such as GPT2-XL in order to avoid disclosing the original content to proprietary LLMs' APIs.
arXiv Detail & Related papers (2024-02-13T19:54:29Z)
- AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models [54.95912006700379]
We introduce AutoDAN, a novel jailbreak attack against aligned Large Language Models.
AutoDAN can automatically generate stealthy jailbreak prompts using a carefully designed hierarchical genetic algorithm.
arXiv Detail & Related papers (2023-10-03T19:44:37Z)
- Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense [56.077252790310176]
We present a paraphrase generation model (DIPPER) that can paraphrase paragraphs, condition on surrounding context, and control lexical diversity and content reordering.
Using DIPPER to paraphrase text generated by three large language models (including GPT3.5-davinci-003) successfully evades several detectors, including watermarking.
We introduce a simple defense that relies on retrieving semantically similar generations and must be maintained by a language model API provider.
arXiv Detail & Related papers (2023-03-23T16:29:27Z)
- Can AI-Generated Text be Reliably Detected? [54.670136179857344]
Unregulated use of LLMs can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc.
Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques.
In this paper, we show that these detectors are not reliable in practical scenarios.
arXiv Detail & Related papers (2023-03-17T17:53:19Z)
- A Girl Has A Name, And It's ... Adversarial Authorship Attribution for Deobfuscation [9.558392439655014]
We show that adversarially trained authorship attributors are able to degrade the effectiveness of existing obfuscators.
Our results underline the need for stronger obfuscation approaches that are resistant to deobfuscation.
arXiv Detail & Related papers (2022-03-22T16:26:09Z)
- Protecting Anonymous Speech: A Generative Adversarial Network Methodology for Removing Stylistic Indicators in Text [2.9005223064604078]
We develop a new approach to authorship anonymization by constructing a generative adversarial network.
Our fully automatic method achieves comparable results to other methods in terms of content preservation and fluency.
Our approach is able to generalize well to an open-set context and anonymize sentences from authors it has not encountered before.
arXiv Detail & Related papers (2021-10-18T17:45:56Z)
- Avengers Ensemble! Improving Transferability of Authorship Obfuscation [7.962140902232626]
Stylometric approaches have been shown to be quite effective for real-world authorship attribution.
We propose an ensemble-based approach for transferable authorship obfuscation.
arXiv Detail & Related papers (2021-09-15T00:11:40Z)
- Towards Variable-Length Textual Adversarial Attacks [68.27995111870712]
It is non-trivial to conduct textual adversarial attacks on natural language processing tasks due to the discreteness of data.
In this paper, we propose variable-length textual adversarial attacks (VL-Attack).
Our method achieves a BLEU score of 33.18 on IWSLT14 German-English translation, an improvement of 1.47 over the baseline model.
arXiv Detail & Related papers (2021-04-16T14:37:27Z)
- A Girl Has A Name: Detecting Authorship Obfuscation [12.461503242570643]
Authorship attribution aims to identify the author of a text based on stylometric analysis.
Authorship obfuscation aims to protect against authorship attribution by modifying a text's style.
We evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an adversarial threat model.
arXiv Detail & Related papers (2020-05-02T04:52:55Z)