Can You Fool AI by Doing a 180? – A Case Study on Authorship Analysis of Texts by Arata Osada
- URL: http://arxiv.org/abs/2207.09085v1
- Date: Tue, 19 Jul 2022 05:43:49 GMT
- Title: Can You Fool AI by Doing a 180? – A Case Study on Authorship Analysis of Texts by Arata Osada
- Authors: Jagna Nieuwazny, Karol Nowakowski, Michal Ptaszynski, Fumito Masui
- Abstract summary: This paper attempts to answer a twofold question covering the areas of ethics and authorship analysis.
Firstly, we were interested in finding out whether an author identification system could correctly attribute works to an author who, over the course of years, has undergone a major psychological transition.
Secondly, from the point of view of the evolution of an author's ethical values, we checked what it would mean if the authorship attribution system encountered difficulties in detecting single authorship.
- Score: 2.6954666679827137
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper is our attempt at answering a twofold question covering the areas
of ethics and authorship analysis. Firstly, since the methods used for
performing authorship analysis imply that an author can be recognized by the
content he or she creates, we were interested in finding out whether it would
be possible for an author identification system to correctly attribute works to
authors if, in the course of years, they have undergone a major psychological
transition. Secondly, from the point of view of the evolution of an author's
ethical values, we checked what it would mean if the authorship attribution
system encountered difficulties in detecting single authorship. We set out to
answer those questions by performing a binary authorship analysis task using a
text classifier based on a pre-trained transformer model and a baseline method
relying on conventional similarity metrics. For the test set, we chose the works
of Arata Osada, a Japanese educator and specialist in the history of education:
half of them books written before World War II and the other half written in
the 1950s, between which he underwent a transformation in terms of political
opinions. As a result, we were able to confirm that for texts authored by Arata
Osada over a time span of more than 10 years, the classification accuracy drops
by a large margin and is substantially lower than for texts by other
non-fiction writers, while the confidence scores of the predictions remain at a
similar level as for a shorter time span. This indicates that the classifier
was in many instances tricked into deciding that texts written over a span of
multiple years were actually written by two different people. This in turn
leads us to believe that such a change can affect authorship analysis, and that
historical events have a great impact on a person's ethical outlook as
expressed in their writings.
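The abstract describes two components: a binary text classifier fine-tuned from a pre-trained transformer, and a baseline relying on conventional similarity metrics. The sketch below is a minimal, hypothetical illustration of such a setup, not the authors' actual pipeline; the model name (`bert-base-multilingual-cased`), the character n-gram TF-IDF features, the threshold, and all hyperparameters are assumptions made for illustration only.

```python
# Minimal, hypothetical sketch of the two approaches described above:
# (1) a binary authorship classifier fine-tuned from a pre-trained transformer,
# (2) a conventional-similarity baseline (cosine similarity over character
#     n-gram TF-IDF profiles).
# Model name, feature choices, and hyperparameters are assumptions, not
# details taken from the paper.
import torch
from torch.utils.data import Dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)


class AuthorDataset(Dataset):
    """Texts with binary labels, e.g. 0 = earlier period, 1 = later period."""

    def __init__(self, texts, labels, tokenizer, max_len=512):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


def train_transformer_classifier(texts, labels,
                                 model_name="bert-base-multilingual-cased"):
    """Fine-tune a pre-trained transformer for binary authorship classification."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                               num_labels=2)
    args = TrainingArguments(output_dir="authorship-clf", num_train_epochs=3,
                             per_device_train_batch_size=8)
    Trainer(model=model, args=args,
            train_dataset=AuthorDataset(texts, labels, tok)).train()
    return tok, model


def similarity_baseline(known_texts, unknown_text, threshold=0.5):
    """Attribute unknown_text to the known author if the mean cosine similarity
    of character n-gram TF-IDF vectors exceeds a threshold."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
    matrix = vec.fit_transform(known_texts + [unknown_text])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sims.mean() >= threshold, float(sims.mean())
```

In a setting like the one studied here, the pre-war and 1950s texts would supply the two classes; a large drop in accuracy accompanied by stable confidence scores, as reported above, would suggest the classifier treats the two periods as if they came from different authors.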
Related papers
- A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution [57.309390098903]
Authorship attribution aims to identify the origin or author of a document.
Large Language Models (LLMs), with their deep reasoning capabilities and ability to maintain long-range textual associations, offer a promising alternative.
Our results on the IMDb and blog datasets show an impressive 85% accuracy in one-shot authorship classification across ten authors.
arXiv Detail & Related papers (2024-10-29T04:14:23Z)
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- A Ship of Theseus: Curious Cases of Paraphrasing in LLM-Generated Texts [11.430810978707173]
Our research delves into an intriguing question: does a text retain its original authorship when it undergoes numerous rounds of paraphrasing?
Using a computational approach, we discover that the diminishing performance in text classification models, with each successive paraphrasing, is closely associated with the extent of deviation from the original author's style.
arXiv Detail & Related papers (2023-11-14T18:40:42Z)
- Same or Different? Diff-Vectors for Authorship Analysis [78.83284164605473]
In "classic" authorship analysis a feature vector represents a document, the value of a feature represents (an increasing function of) the relative frequency of the feature in the document, and the class label represents the author of the document.
Our experiments tackle same-author verification, authorship verification, and closed-set authorship attribution; while DVs are naturally geared for solving the 1st, we also provide two novel methods for solving the 2nd and 3rd.
arXiv Detail & Related papers (2023-01-24T08:48:12Z)
- PART: Pre-trained Authorship Representation Transformer [64.78260098263489]
Authors writing documents imprint identifying information within their texts: vocabulary, registry, punctuation, misspellings, or even emoji usage.
Previous works use hand-crafted features or classification tasks to train their authorship models, leading to poor performance on out-of-domain authors.
We propose a contrastively trained model fit to learn authorship embeddings instead of semantics.
arXiv Detail & Related papers (2022-09-30T11:08:39Z)
- TraSE: Towards Tackling Authorial Style from a Cognitive Science Perspective [4.123763595394021]
Authorship attribution experiments with over 27,000 authors and 1.4 million samples in a cross-domain scenario resulted in 90% attribution accuracy.
A qualitative analysis is performed on TraSE using physical human characteristics, like age, to validate its claim on capturing cognitive traits.
arXiv Detail & Related papers (2022-06-21T19:55:07Z)
- LG4AV: Combining Language Models and Graph Neural Networks for Author Verification [0.11421942894219898]
We present our novel approach LG4AV which combines language models and graph neural networks for authorship verification.
By directly feeding the available texts in a pre-trained transformer architecture, our model does not need any hand-crafted stylometric features.
Our model can benefit from relations between authors that are meaningful with respect to the verification process.
arXiv Detail & Related papers (2021-09-03T12:45:28Z)
- A computational model implementing subjectivity with the 'Room Theory'. The case of detecting Emotion from Text [68.8204255655161]
This work introduces a new method to consider subjectivity and general context dependency in text analysis.
By using similarity measure between words, we are able to extract the relative relevance of the elements in the benchmark.
This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text.
arXiv Detail & Related papers (2020-05-12T21:26:04Z)
- Automatic Identification of Types of Alterations in Historical Manuscripts [0.0]
We present a machine learning-based approach to help categorize alterations in documents.
In particular, we present a new probabilistic model that categorizes content-related alterations.
On unlabelled data, applying alterLDA leads to interesting new insights into the alteration behavior of authors, editors and other manuscript contributors.
arXiv Detail & Related papers (2020-03-20T08:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.