Systematic Rectification of Language Models via Dead-end Analysis
- URL: http://arxiv.org/abs/2302.14003v1
- Date: Mon, 27 Feb 2023 17:47:53 GMT
- Title: Systematic Rectification of Language Models via Dead-end Analysis
- Authors: Meng Cao and Mehdi Fatemi and Jackie Chi Kit Cheung and Samira
Shabanian
- Abstract summary: Large language models (LLM) can be pushed to generate toxic discourses.
Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic.
Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification.
- Score: 34.37598463459319
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With adversarial or otherwise normal prompts, existing large language models
(LLM) can be pushed to generate toxic discourses. One way to reduce the risk of
LLMs generating undesired discourses is to alter the training of the LLM. This
can be very restrictive due to demanding computation requirements. Other
methods rely on rule-based or prompt-based token elimination, which are limited
as they dismiss future tokens and the overall meaning of the complete
discourse. Here, we center detoxification on the probability that the finished
discourse is ultimately considered toxic. That is, at each point, we advise
against token selections proportional to how likely a finished text from this
point will be toxic. To this end, we formally extend the dead-end theory from
the recent reinforcement learning (RL) literature to also cover uncertain
outcomes. Our approach, called rectification, utilizes a separate but
significantly smaller model for detoxification, which can be applied to diverse
LLMs as long as they share the same vocabulary. Importantly, our method does
not require access to the internal representations of the LLM, but only the
token probability distribution at each decoding step. This is crucial as many
LLMs today are hosted on servers and only accessible through APIs. When applied
to various LLMs, including GPT-3, our approach significantly improves the
generated discourse compared to the base LLMs and other techniques in terms of
both the overall language and detoxification performance.
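A minimal decoding-time sketch of the rectification idea described above (an illustration under stated assumptions, not the authors' released implementation): assume a separate, much smaller rectification model supplies, for every candidate next token, an estimate of the probability that a discourse completed from that point will ultimately be considered toxic; the base LLM's next-token probabilities are then capped by one minus that estimate and renormalized. All function and variable names below are hypothetical.
```python
import numpy as np

def rectify_next_token_distribution(lm_probs, toxicity_risk, eps=1e-12):
    """Suppress tokens in proportion to their estimated risk of leading to toxic text.

    lm_probs      -- base LLM next-token distribution, shape (vocab_size,)
    toxicity_risk -- per-token estimates (hypothetical rectification-model output)
                     of the probability that a completion starting with that
                     token will ultimately be judged toxic
    """
    # Cap each token's probability at (1 - risk): tokens that almost surely lead
    # to a toxic completion are almost entirely suppressed, safe tokens are kept.
    capped = np.minimum(lm_probs, np.clip(1.0 - toxicity_risk, 0.0, 1.0))
    total = capped.sum()
    if total < eps:            # degenerate case: every token flagged as risky
        return lm_probs        # fall back to the unmodified base distribution
    return capped / total      # renormalize to a proper distribution

# Toy example: a 5-token vocabulary where token 3 very likely leads to toxicity.
lm_probs = np.array([0.10, 0.25, 0.05, 0.40, 0.20])
toxicity_risk = np.array([0.01, 0.02, 0.10, 0.95, 0.05])
print(rectify_next_token_distribution(lm_probs, toxicity_risk))
```
The cap echoes the security condition from the dead-end RL literature, roughly π(a|s) ≤ 1 + Q_D(s,a) with Q_D(s,a) in [-1, 0], which the abstract extends to uncertain toxicity outcomes. Note that the sketch only needs the token probability distribution at each decoding step, consistent with the API-only setting the abstract emphasizes.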
Related papers
- Dialectal Toxicity Detection: Evaluating LLM-as-a-Judge Consistency Across Language Varieties [23.777874316083984]
There has been little systematic study on how dialectal differences affect toxicity detection by modern LLMs.
We create a multi-dialect dataset through synthetic transformations and human-assisted translations, covering 10 language clusters and 60 varieties.
We then evaluate three LLMs on their ability to assess toxicity, measuring multilingual, dialectal, and LLM-human consistency.
arXiv Detail & Related papers (2024-11-17T03:53:24Z)
- Toxic Subword Pruning for Dialogue Response Generation on Large Language Models [51.713448010799986]
We propose Toxic Subword Pruning (ToxPrune) to prune subwords contained in toxic words from the BPE vocabulary of trained LLMs.
ToxPrune also noticeably improves the toxic language model NSFW-3B on the task of dialogue response generation.
arXiv Detail & Related papers (2024-10-05T13:30:33Z)
- Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data [9.31120925026271]
We study inductive out-of-context reasoning (OOCR) in which LLMs infer latent information from evidence distributed across training documents.
In one experiment we finetune an LLM on a corpus consisting only of distances between an unknown city and other known cities.
While OOCR succeeds in a range of cases, we also show that it is unreliable, particularly for smaller LLMs learning complex structures.
arXiv Detail & Related papers (2024-06-20T17:55:04Z)
- Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs [63.29737699997859]
Large Language Models (LLMs) have demonstrated impressive performance on multimodal tasks, without any multimodal finetuning.
In this work, we expose frozen LLMs to image, video, audio and text inputs and analyse their internal representation.
arXiv Detail & Related papers (2024-05-26T21:31:59Z)
- "Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing [10.20632187568563]
Hallucination has emerged as the most vulnerable aspect of contemporary Large Language Models (LLMs).
In this paper, we introduce Sorry, Come Again (SCA) prompting, aimed at avoiding LLM hallucinations.
We provide an in-depth analysis of linguistic nuances: formality, readability, and concreteness of prompts for 21 LLMs.
We propose an optimal paraphrasing technique to identify the most comprehensible paraphrase of a given prompt.
arXiv Detail & Related papers (2024-03-27T19:45:09Z)
- FAC$^2$E: Better Understanding Large Language Model Capabilities by Dissociating Language and Cognition [56.76951887823882]
Large language models (LLMs) are primarily evaluated by overall performance on various text understanding and generation tasks.
We present FAC$^2$E, a framework for Fine-grAined and Cognition-grounded LLMs' Capability Evaluation.
arXiv Detail & Related papers (2024-02-29T21:05:37Z)
- Can LLMs Compute with Reasons? [4.995189458714599]
Large language models (LLMs) often struggle with complex mathematical tasks, prone to "hallucinating" incorrect answers.
We propose an "Inductive Learning" approach utilizing a distributed network of Small LangSLMs.
arXiv Detail & Related papers (2024-02-19T12:04:25Z)
- AlignedCoT: Prompting Large Language Models via Native-Speaking Demonstrations [52.43593893122206]
AlignedCoT is an in-context learning technique for invoking Large Language Models.
It achieves consistent and correct step-wise prompts in zero-shot scenarios.
We conduct experiments on mathematical reasoning and commonsense reasoning.
arXiv Detail & Related papers (2023-11-22T17:24:21Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Statistical Knowledge Assessment for Large Language Models [79.07989821512128]
Given varying prompts regarding a factoid question, can a large language model (LLM) reliably generate factually correct answers?
We propose KaRR, a statistical approach to assess factual knowledge for LLMs.
Our results reveal that the knowledge in LLMs with the same backbone architecture adheres to the scaling law, while tuning on instruction-following data sometimes compromises the model's capability to generate factually correct text reliably.
arXiv Detail & Related papers (2023-05-17T18:54:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.