Precarity and Solidarity: Preliminary results on a study of queer and disabled fiction writers' experiences with generative AI
- URL: http://arxiv.org/abs/2412.04575v1
- Date: Thu, 05 Dec 2024 19:41:30 GMT
- Title: Precarity and Solidarity: Preliminary results on a study of queer and disabled fiction writers' experiences with generative AI
- Authors: C. E. Lamb, D. G. Brown, M. R. Grossman
- Abstract summary: We find that queer and disabled writers are markedly more pessimistic than non-queer and non-disabled writers about the impact of AI on their industry.
We explore ways that generative AI exacerbates existing sources of instability and precarity in the publishing industry.
- Abstract: We have undertaken a mixed-methods study of fiction writers' experiences with and attitudes toward generative AI, focused primarily on the experiences of queer and disabled writers. We find that queer and disabled writers are markedly more pessimistic than non-queer and non-disabled writers about the impact of AI on their industry, although pessimism is the majority attitude for both groups. We explore ways that generative AI exacerbates existing sources of instability and precarity in the publishing industry, reasons why writers are philosophically opposed to its use, and individual and collective strategies used by marginalized fiction writers to safeguard their industry from harms associated with generative AI.
Related papers
- "It was 80% me, 20% AI": Seeking Authenticity in Co-Writing with Large Language Models [97.22914355737676]
We examine whether and how writers want to preserve their authentic voice when co-writing with AI tools.
Our findings illuminate conceptions of authenticity in human-AI co-creation.
Readers' responses showed less concern about human-AI co-writing.
arXiv Detail & Related papers (2024-11-20T04:42:32Z) - Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI versus human generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications such as health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives [16.788923895022815]
In our lifetimes it may become common practice for people to create custom AI agents to interact with loved ones and/or the broader world after death.
We call these generative ghosts, since such agents will be capable of generating novel content rather than parroting content produced by their creator while living.
arXiv Detail & Related papers (2024-01-14T08:57:45Z) - The Future of AI-Assisted Writing [0.0]
We conduct a comparative user-study between such tools from an information retrieval lens: pull and push.
Our findings show that users welcome seamless assistance of AI in their writing.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership.
arXiv Detail & Related papers (2023-06-29T02:46:45Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - The AI Ghostwriter Effect: When Users Do Not Perceive Ownership of AI-Generated Text But Self-Declare as Authors [42.72188284211033]
We investigate authorship and ownership in human-AI collaboration for personalized language generation.
We show an AI Ghostwriter Effect: Users do not consider themselves the owners and authors of AI-generated text.
We discuss how our findings relate to psychological ownership and human-AI interaction to lay the foundations for adapting authorship frameworks.
arXiv Detail & Related papers (2023-03-06T16:53:12Z) - Creative Writing with an AI-Powered Writing Assistant: Perspectives from Professional Writers [9.120878749348986]
Recent developments in natural language generation (NLG) using neural language models have brought us closer than ever to the goal of building AI-powered creative writing tools.
arXiv Detail & Related papers (2022-11-09T17:00:56Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.