Keep It Private: Unsupervised Privatization of Online Text
- URL: http://arxiv.org/abs/2405.10260v1
- Date: Thu, 16 May 2024 17:12:18 GMT
- Title: Keep It Private: Unsupervised Privatization of Online Text
- Authors: Calvin Bao, Marine Carpuat
- Abstract summary: We introduce an automatic text privatization framework that fine-tunes a large language model via reinforcement learning to produce rewrites that balance soundness, sense, and privacy.
We evaluate it extensively on a large-scale test set of English Reddit posts written by 68k authors, consisting of short-to-medium length texts.
- Score: 13.381890596224867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Authorship obfuscation techniques hold the promise of helping people protect their privacy in online communications by automatically rewriting text to hide the identity of the original author. However, obfuscation has been evaluated in narrow settings in the NLP literature and has primarily been addressed with superficial edit operations that can lead to unnatural outputs. In this work, we introduce an automatic text privatization framework that fine-tunes a large language model via reinforcement learning to produce rewrites that balance soundness, sense, and privacy. We evaluate it extensively on a large-scale test set of English Reddit posts written by 68k authors, consisting of short-to-medium length texts. We study how performance varies across evaluation conditions, including authorial profile length and authorship detection strategy. Our method maintains high text quality according to both automated metrics and human evaluation, and successfully evades several automated authorship attacks.
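The abstract does not specify the reward used for reinforcement learning. As a rough illustration only, a reward for this kind of fine-tuning might combine the three stated criteria as a weighted sum; every function and weight below is a hypothetical stand-in, not the authors' implementation.

```python
# Hypothetical reward sketch: combine soundness, sense, and privacy
# signals into one scalar for RL fine-tuning. All components are
# assumptions for illustration, not the paper's actual reward.

def privatization_reward(original: str, rewrite: str,
                         fluency, similarity, attribution_prob,
                         w_sound: float = 1.0, w_sense: float = 1.0,
                         w_priv: float = 2.0) -> float:
    """Higher is better. `attribution_prob(rewrite)` is an authorship
    detector's probability that the true author wrote `rewrite`."""
    soundness = fluency(rewrite)               # e.g. LM acceptability in [0, 1]
    sense = similarity(original, rewrite)      # e.g. embedding cosine in [0, 1]
    privacy = 1.0 - attribution_prob(rewrite)  # reward evading the detector
    return w_sound * soundness + w_sense * sense + w_priv * privacy
```

Weighting privacy more heavily than the quality terms is one plausible way to keep the policy from collapsing into copying the input; the actual trade-off is an empirical choice.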
Related papers
- TAROT: Task-Oriented Authorship Obfuscation Using Policy Optimization Methods [5.239989658197324]
Authorship obfuscation aims to disguise the identity of an author within a text.
This rewriting needs to balance privacy and utility.
We propose TAROT: Task-Oriented Authorship Obfuscation Using Policy Optimization.
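The summary gives no implementation details; the sketch below only illustrates what a "task-oriented" reward for policy optimization could look like, with both classifiers as assumed stand-ins.

```python
# Hypothetical reward: keep the downstream task label recoverable
# (utility) while lowering attribution confidence for the true author
# (privacy). Classifiers return label -> probability dicts.

def task_oriented_reward(rewrite: str, task_label: str, author: str,
                         task_classifier, author_classifier) -> float:
    utility = task_classifier(rewrite)[task_label]      # P(correct task label)
    privacy = 1.0 - author_classifier(rewrite)[author]  # 1 - P(true author)
    return utility + privacy
```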
arXiv Detail & Related papers (2024-07-31T14:24:01Z)
- IncogniText: Privacy-enhancing Conditional Text Anonymization via LLM-based Private Attribute Randomization [8.483679748399037]
We propose IncogniText, a technique that anonymizes the text to mislead a potential adversary into predicting a wrong private attribute value.
Our empirical evaluation shows a reduction of private attribute leakage by more than 90%.
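The core idea, as summarized, is to steer an adversary toward a wrong attribute value. A minimal prompt-based sketch of such randomization follows; `llm` and the prompt wording are assumptions, not the paper's interface.

```python
import random

# Sketch of private attribute randomization: rewrite the text so a
# reader would infer a randomly chosen decoy value instead of the
# true one. The LLM callable and prompt are illustrative assumptions.

def randomize_attribute(llm, text: str, attribute: str,
                        true_value: str, candidates: list[str]) -> str:
    decoy = random.choice([v for v in candidates if v != true_value])
    prompt = (
        f"Rewrite the following text, preserving its meaning, so that a "
        f"reader would guess the author's {attribute} is '{decoy}' "
        f"rather than '{true_value}'.\n\nText: {text}\n\nRewrite:"
    )
    return llm(prompt)
```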
arXiv Detail & Related papers (2024-07-03T09:49:03Z)
- NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human [55.20137833039499]
We suggest sanitizing sensitive text using two strategies commonly used by humans.
We curate the first corpus, coined NAP^2, through both crowdsourcing and the use of large language models.
arXiv Detail & Related papers (2024-06-06T05:07:44Z)
- Just Rewrite It Again: A Post-Processing Method for Enhanced Semantic Similarity and Privacy Preservation of Differentially Private Rewritten Text [3.3916160303055567]
We propose a simple post-processing method based on the goal of aligning rewritten texts with their original counterparts.
Our results show that this approach not only produces outputs that are semantically closer to the original inputs, but also texts that on average score better in empirical privacy evaluations.
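The summary does not spell out the alignment procedure. One simple reading is to generate several private rewrites and keep the candidate closest to the original in embedding space; the sketch below assumes an arbitrary sentence encoder `embed` and is not the paper's exact method.

```python
import numpy as np

# Pick, among candidate rewrites, the one most semantically similar
# to the original input. `embed` is any sentence encoder returning a
# 1-D vector (an assumption for illustration).

def select_aligned_rewrite(original: str, candidates: list[str], embed) -> str:
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    ref = embed(original)
    return max(candidates, key=lambda c: cos(embed(c), ref))
```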
arXiv Detail & Related papers (2024-05-30T08:41:33Z)
- Large Language Models are Advanced Anonymizers [13.900633576526863]
We show how adversarial anonymization outperforms current industry-grade anonymizers in terms of the resulting utility and privacy.
We first present a new setting for evaluating anonymization in the face of adversarial LLM inference.
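A hedged sketch of the adversarial loop this setting suggests: one LLM tries to infer private attributes, and a rewriting step removes the cues it relied on, repeating until the inference fails. The callables, attribute format, and stopping rule are all assumptions.

```python
# Iterative adversarial anonymization sketch. `infer_llm` returns a
# dict of guessed attributes; `rewrite_llm` rewrites the text to remove
# the cues behind the leaked ones. Both are assumed stand-ins.

def adversarial_anonymize(infer_llm, rewrite_llm, text: str,
                          true_attrs: dict, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        guesses = infer_llm(text)  # e.g. {"location": "...", "age": "..."}
        leaked = {k: v for k, v in guesses.items() if true_attrs.get(k) == v}
        if not leaked:
            break  # the adversary no longer infers anything correctly
        text = rewrite_llm(text, leaked)
    return text
```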
arXiv Detail & Related papers (2024-02-21T14:44:00Z)
- JAMDEC: Unsupervised Authorship Obfuscation using Constrained Decoding over Small Language Models [53.83273575102087]
We propose an unsupervised inference-time approach to authorship obfuscation.
We introduce JAMDEC, a user-controlled, inference-time algorithm for authorship obfuscation.
Our approach builds on small language models such as GPT2-XL to avoid disclosing the original content to proprietary LLMs' APIs.
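As a small, hedged illustration of constrained decoding with an open model (not JAMDEC itself, which adds over-generation and filtering stages), Hugging Face's constrained beam search can force chosen content words to survive a rewrite. The prompt format and keyword choice here are assumptions, and GPT-2 small stands in for GPT2-XL to keep the example light.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Paraphrase: the meeting was moved to Friday afternoon.\nRewrite:"
keywords = ["meeting", "Friday"]  # content words the rewrite must keep
force_ids = [tok(f" {w}", add_special_tokens=False).input_ids for w in keywords]

out = model.generate(
    **tok(prompt, return_tensors="pt"),
    force_words_ids=force_ids,   # lexically constrained beam search
    num_beams=8,
    max_new_tokens=30,
    no_repeat_ngram_size=2,
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```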
arXiv Detail & Related papers (2024-02-13T19:54:29Z)
- Understanding writing style in social media with a supervised contrastively pre-trained transformer [57.48690310135374]
Online Social Networks serve as fertile ground for harmful behavior, ranging from hate speech to the dissemination of disinformation.
We introduce the Style Transformer for Authorship Representations (STAR), trained on a large corpus derived from public sources of 4.5 × 10^6 authored texts.
Using a support base of 8 documents of 512 tokens, we can discern authors from sets of up to 1616 authors with at least 80% accuracy.
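The evaluation protocol above (8 support documents per author) amounts to profile-based nearest-neighbor attribution; a schematic version follows, with `embed_style` standing in for the STAR encoder.

```python
import numpy as np

# Build an author profile by averaging style embeddings of the support
# documents, then attribute a query text to the nearest profile by
# cosine similarity. `embed_style` is an assumed stand-in encoder.

def build_profiles(support_docs: dict, embed_style) -> dict:
    """support_docs maps author -> list of support texts."""
    return {a: np.mean([embed_style(d) for d in docs], axis=0)
            for a, docs in support_docs.items()}

def attribute(query: str, profiles: dict, embed_style) -> str:
    q = embed_style(query)
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(profiles, key=lambda a: cos(q, profiles[a]))
```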
arXiv Detail & Related papers (2023-10-17T09:01:17Z)
- Protecting Anonymous Speech: A Generative Adversarial Network Methodology for Removing Stylistic Indicators in Text [2.9005223064604078]
We develop a new approach to authorship anonymization by constructing a generative adversarial network.
Our fully automatic method achieves comparable results to other methods in terms of content preservation and fluency.
Our approach is able to generalize well to an open-set context and anonymize sentences from authors it has not encountered before.
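Schematically, such a GAN pairs a generator that rewrites text with a discriminator that attributes authorship; the generator learns to fool the discriminator while a content term keeps the rewrite faithful. Every module below is an assumed stand-in, not the paper's architecture.

```python
# One adversarial training step, in outline. `generator`,
# `discriminator`, the loss callables, and the optimizer wrappers are
# all illustrative assumptions.

def gan_anonymization_step(generator, discriminator, texts, authors,
                           content_loss, g_opt, d_opt):
    # 1) Train the discriminator to attribute real and rewritten text.
    rewrites = generator(texts)
    d_loss = (discriminator.attribution_loss(texts, authors)
              + discriminator.attribution_loss(rewrites, authors))
    d_opt.step(d_loss)

    # 2) Train the generator to erase stylistic indicators while
    #    staying close to the input content.
    rewrites = generator(texts)
    g_loss = (-discriminator.attribution_loss(rewrites, authors)
              + content_loss(texts, rewrites))
    g_opt.step(g_loss)
```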
arXiv Detail & Related papers (2021-10-18T17:45:56Z)
- On-the-Fly Controlled Text Generation with Experts and Anti-Experts [70.41630506059113]
We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation.
Under our ensemble, output tokens only get high probability if they are considered likely by the experts, and unlikely by the anti-experts.
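Concretely, DExperts combines per-step logits so that a token scores highly only when the expert favors it and the anti-expert does not; the combination rule is z_base + alpha * (z_expert - z_anti). The tensors below are random stand-ins for real model outputs.

```python
import torch

# DExperts-style logit combination at one decoding step.

def dexperts_logits(z_base: torch.Tensor, z_expert: torch.Tensor,
                    z_anti: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    return z_base + alpha * (z_expert - z_anti)

# Example with random logits over a GPT-2-sized vocabulary (50257).
probs = torch.softmax(
    dexperts_logits(torch.randn(50257), torch.randn(50257),
                    torch.randn(50257)),
    dim=-1,
)
```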
arXiv Detail & Related papers (2021-05-07T01:19:38Z)
- TextHide: Tackling Data Privacy in Language Understanding Tasks [54.11691303032022]
TextHide mitigates privacy risks without slowing down training or reducing accuracy.
It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data.
We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations.
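TextHide's encryption step builds on an InstaHide-style recipe; the simplified sketch below mixes a private representation with others and applies a one-time random sign mask. Details such as the mixing width k and mask management are simplified assumptions here.

```python
import numpy as np

# Simplified TextHide-style encryption of a sentence representation x:
# mix it with k-1 other representations from `pool` using random convex
# coefficients, then flip signs entrywise with a one-time mask.

def texthide_encrypt(x: np.ndarray, pool: np.ndarray, k: int = 4,
                     rng: np.random.Generator | None = None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pool), size=k - 1, replace=False)
    coeffs = rng.dirichlet(np.ones(k))               # random convex weights
    mixed = coeffs[0] * x + (coeffs[1:, None] * pool[idx]).sum(axis=0)
    sigma = rng.choice([-1.0, 1.0], size=x.shape)    # one-time sign mask
    return sigma * mixed
```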
arXiv Detail & Related papers (2020-10-12T22:22:15Z)
- Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding [80.3811072650087]
We study natural language watermarking as a defense to help better mark and trace the provenance of text.
We introduce the Adversarial Watermarking Transformer (AWT) with a jointly trained encoder-decoder and adversarial training.
AWT is the first end-to-end model to hide data in text by automatically learning, without ground truth, word substitutions along with their locations.
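In outline, the jointly trained components give three signals: a message-reconstruction loss, an adversarial naturalness loss, and a semantic loss against the input. The sketch below uses assumed stand-in modules, not AWT's actual architecture.

```python
# Schematic of the AWT-style objective. All modules and loss callables
# are illustrative assumptions.

def awt_losses(msg_encoder, msg_decoder, adversary, sentence, message,
               reconstruction_loss, semantic_loss):
    watermarked = msg_encoder(sentence, message)  # hide bits via substitutions
    recovered = msg_decoder(watermarked)          # try to read the bits back
    msg_loss = reconstruction_loss(recovered, message)
    adv_loss = adversary(watermarked)             # penalize detectable edits
    sem_loss = semantic_loss(sentence, watermarked)
    return msg_loss, adv_loss, sem_loss
```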
arXiv Detail & Related papers (2020-09-07T11:01:24Z)