Comprehensive Evaluation of ChatGPT Reliability Through Multilingual
Inquiries
- URL: http://arxiv.org/abs/2312.10524v1
- Date: Sat, 16 Dec 2023 19:44:48 GMT
- Title: Comprehensive Evaluation of ChatGPT Reliability Through Multilingual
Inquiries
- Authors: Poorna Chander Reddy Puttaparthi, Soham Sanjay Deo, Hakan Gul, Yiming
Tang, Weiyi Shang, Zhe Yu
- Abstract summary: ChatGPT is the most popular large language model (LLM) with over 100 million users.
Due to the presence of jailbreak vulnerabilities, ChatGPT might have negative effects on people's lives.
We investigated whether multilingual wrapping can indeed lead to ChatGPT's jailbreak.
- Score: 10.140483464820935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: ChatGPT is currently the most popular large language model (LLM), with over
100 million users, making a significant impact on people's lives. However, due
to the presence of jailbreak vulnerabilities, ChatGPT might have negative
effects on people's lives, potentially even facilitating criminal activities.
Testing whether ChatGPT can cause jailbreak is crucial because it can enhance
ChatGPT's security, reliability, and social responsibility. Inspired by
previous research revealing the varied performance of LLMs in different
language translations, we suspected that wrapping prompts in multiple languages
might lead to ChatGPT jailbreak. To investigate this, we designed a study with
a fuzzing testing approach to analyzing ChatGPT's cross-linguistic proficiency.
Our study includes three strategies by automatically posing different formats
of malicious questions to ChatGPT: (1) each malicious question involving only
one language, (2) multilingual malicious questions, (3) specifying that ChatGPT
responds in a language different from the prompts. In addition, we also combine
our strategies by utilizing prompt injection templates to wrap the three
aforementioned types of questions. We examined a total of 7,892 Q&A data
points, discovering that multilingual wrapping can indeed lead to ChatGPT's
jailbreak, with different wrapping methods having varying effects on jailbreak
probability. Prompt injection can amplify the probability of jailbreak caused
by multilingual wrapping. This work provides insights for OpenAI developers to
enhance ChatGPT's support for language diversity and inclusion.
Related papers
- Exploring ChatGPT's Capabilities on Vulnerability Management [56.4403395100589]
We explore ChatGPT's capabilities on 6 tasks involving the complete vulnerability management process with a large-scale dataset containing 70,346 samples.
One notable example is ChatGPT's proficiency in tasks like generating titles for software bug reports.
Our findings reveal the difficulties encountered by ChatGPT and shed light on promising future directions.
arXiv Detail & Related papers (2023-11-11T11:01:13Z) - Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency of selecting the labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z) - Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study [22.411634418082368]
Large Language Models (LLMs) have demonstrated vast potential but also introduce challenges related to content constraints and potential misuse.
Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.
arXiv Detail & Related papers (2023-05-23T09:33:38Z) - ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large
Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as the most important breakthroughs in natural language processing (NLP)
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Compared to the performance of previous models, our extensive experimental results demonstrate a worse performance of ChatGPT for different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z) - Let's have a chat! A Conversation with ChatGPT: Technology,
Applications, and Limitations [0.0]
Chat Generative Pre-trained Transformer, better known as ChatGPT, can generate human-like sentences and write coherent essays.
Potential applications of ChatGPT in various domains, including healthcare, education, and research, are highlighted.
Despite promising results, there are several privacy and ethical concerns surrounding ChatGPT.
arXiv Detail & Related papers (2023-02-27T14:26:29Z) - Can ChatGPT Understand Too? A Comparative Study on ChatGPT and
Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We evaluate ChatGPT's understanding ability by evaluating it on the most popular GLUE benchmark, and comparing it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves comparable performance compared with BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z) - A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on
Reasoning, Hallucination, and Interactivity [79.12003701981092]
We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks.
We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset.
ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning.
arXiv Detail & Related papers (2023-02-08T12:35:34Z) - Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [113.22611481694825]
Large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot.
Recently, the debut of ChatGPT has drawn a great deal of attention from the natural language processing (NLP) community.
It is not yet known whether ChatGPT can serve as a generalist model that can perform many NLP tasks zero-shot.
arXiv Detail & Related papers (2023-02-08T09:44:51Z) - How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation,
and Detection [8.107721810172112]
ChatGPT is able to respond effectively to a wide range of human questions.
People are starting to worry about the potential negative impacts that large language models (LLMs) like ChatGPT could have on society.
In this work, we collected tens of thousands of comparison responses from both human experts and ChatGPT.
arXiv Detail & Related papers (2023-01-18T15:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.