Perceived Trustworthiness of Natural Language Generators
- URL: http://arxiv.org/abs/2305.18176v1
- Date: Mon, 29 May 2023 16:09:58 GMT
- Title: Perceived Trustworthiness of Natural Language Generators
- Authors: Beatriz Cabrero-Daniel and Andrea Sanagustín Cabrero
- Abstract summary: The paper addresses the problem of understanding how different users perceive and adopt Natural Language Generation tools.
It also discusses the perceived advantages and limitations of Natural Language Generation tools.
The paper sheds light on how different user characteristics shape their beliefs on the quality and overall trustworthiness of machine-generated text.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Natural Language Generation tools, such as chatbots that can generate
human-like conversational text, are becoming more common both for personal and
professional use. However, there are concerns about their trustworthiness and
ethical implications. The paper addresses the problem of understanding how
different users (e.g., linguists, engineers) perceive and adopt these tools and
their perception of machine-generated text quality. It also discusses the
perceived advantages and limitations of Natural Language Generation tools, as
well as users' beliefs on governance strategies. The main findings of this
study include the impact of users' field and level of expertise on the
perceived trust and adoption of Natural Language Generation tools, the users'
assessment of the accuracy, fluency, and potential biases of machine-generated
text in comparison to human-written text, and an analysis of the advantages and
ethical risks associated with these tools as identified by the participants.
Moreover, this paper discusses the potential implications of these findings for
enhancing the AI development process. The paper sheds light on how different
user characteristics shape their beliefs on the quality and overall
trustworthiness of machine-generated text. Furthermore, it examines the
benefits and risks of these tools from the perspectives of different users.
Related papers
- Beyond Turing Test: Can GPT-4 Sway Experts' Decisions? [14.964922012236498]
This paper explores how generated text impacts readers' decisions, focusing on both amateur and expert audiences.
Our findings indicate that GPT-4 can generate persuasive analyses affecting the decisions of both amateurs and professionals.
The results highlight a high correlation between real-world evaluation through audience reactions and the current multi-dimensional evaluators commonly used for generative models.
arXiv Detail & Related papers (2024-09-25T07:55:36Z)
- Detection of Machine-Generated Text: Literature Survey [0.0]
This literature survey aims to compile and synthesize accomplishments and developments in the field of machine-generated text.
It also gives an overview of machine-generated text trends and explores the larger societal implications.
arXiv Detail & Related papers (2024-01-02T01:44:15Z)
- RELIC: Investigating Large Language Model Responses using Self-Consistency [58.63436505595177]
Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations.
We propose an interactive system that helps users gain insight into the reliability of the generated text.
arXiv Detail & Related papers (2023-11-28T14:55:52Z)
- Do You Trust ChatGPT? -- Perceived Credibility of Human and AI-Generated Content [0.8602553195689513]
This paper examines how individuals perceive the credibility of content originating from human authors versus content generated by large language models.
Surprisingly, our results demonstrate that regardless of the user interface presentation, participants tend to attribute similar levels of credibility to both.
Participants also do not report any different perceptions of competence and trustworthiness between human and AI-generated content.
arXiv Detail & Related papers (2023-09-05T18:29:29Z)
- Analysis of the Evolution of Advanced Transformer-Based Language Models: Experiments on Opinion Mining [0.5735035463793008]
This paper studies the behaviour of the cutting-edge Transformer-based language models on opinion mining.
Our comparative study highlights the leading models and paves the way for production engineers in deciding which approach to focus on.
arXiv Detail & Related papers (2023-08-07T01:10:50Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Unlocking the Potential of ChatGPT: A Comprehensive Exploration of its Applications, Advantages, Limitations, and Future Directions in Natural Language Processing [4.13365552362244]
ChatGPT has been successfully applied in numerous areas, including chatbots, content generation, language translation, personalized recommendations, and even medical diagnosis and treatment.
Its success in these applications can be attributed to its ability to generate human-like responses, understand natural language, and adapt to different contexts.
This article provides a comprehensive overview of ChatGPT, its applications, advantages, and limitations.
arXiv Detail & Related papers (2023-03-27T21:27:58Z)
- COFFEE: Counterfactual Fairness for Personalized Text Generation in Explainable Recommendation [56.520470678876656]
Bias inherent in user-written text can associate different levels of linguistic quality with users' protected attributes.
We introduce a general framework to achieve measure-specific counterfactual fairness in explanation generation.
arXiv Detail & Related papers (2022-10-14T02:29:10Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
- On-the-Fly Controlled Text Generation with Experts and Anti-Experts [70.41630506059113]
We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation.
Under our ensemble, output tokens only get high probability if they are considered likely by the experts, and unlikely by the anti-experts.
arXiv Detail & Related papers (2021-05-07T01:19:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.