Let Models Speak Ciphers: Multiagent Debate through Embeddings
- URL: http://arxiv.org/abs/2310.06272v2
- Date: Mon, 26 Feb 2024 17:36:48 GMT
- Title: Let Models Speak Ciphers: Multiagent Debate through Embeddings
- Authors: Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo
Yuan, Bryan A. Plummer, Zhaoran Wang, Hongxia Yang
- Abstract summary: We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address the information loss caused by the token sampling step in LLM debate.
By deviating from natural language, CIPHER offers the advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
- Score: 84.20336971784495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discussion and debate among Large Language Models (LLMs) have gained
considerable attention due to their potential to enhance the reasoning ability
of LLMs. Although natural language is an obvious choice for communication due
to LLMs' language understanding capabilities, the token sampling step needed when
generating natural language poses a potential risk of information loss, as it
uses only one token to represent the model's belief across the entire
vocabulary. In this paper, we introduce a communication regime named CIPHER
(Communicative Inter-Model Protocol Through Embedding Representation) to
address this issue. Specifically, we remove the token sampling step from LLMs
and let them communicate their beliefs across the vocabulary through the
expectation of the raw transformer output embeddings. Remarkably, by deviating
from natural language, CIPHER offers the advantage of encoding a broader
spectrum of information without any modification to the model weights,
outperforming the state-of-the-art LLM debate methods using natural language by
0.5-5.0% across five reasoning tasks and multiple open-source LLMs of varying
sizes. This showcases the superiority and robustness of embeddings as an
alternative "language" for communication among LLMs. We anticipate that CIPHER
will inspire further exploration for the design of interactions within LLM
agent systems, offering a new direction that could significantly influence
future developments in the field.
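Concretely, the abstract replaces the token-sampling step with the expectation of the token embeddings under the model's next-token distribution. Below is a minimal sketch of that idea, assuming a Hugging Face causal LM; the function names (`cipher_step`, `cipher_generate`) and the choice of `gpt2` are illustrative, not the authors' reference implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the paper evaluates open-source LLMs of varying sizes.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def cipher_step(inputs_embeds: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """One generation step without token sampling: return the expectation of the
    token embeddings under the next-token distribution, so the model's belief
    across the entire vocabulary is preserved instead of collapsing to one token."""
    logits = model(inputs_embeds=inputs_embeds).logits[:, -1, :]  # (1, vocab)
    probs = torch.softmax(logits / temperature, dim=-1)           # (1, vocab)
    # GPT-2 ties input and output embeddings; the paper speaks of the raw
    # transformer output embeddings.
    embed_table = model.get_input_embeddings().weight             # (vocab, d)
    return probs @ embed_table                                    # (1, d)

@torch.no_grad()
def cipher_generate(prompt: str, max_new_tokens: int = 16) -> torch.Tensor:
    """Produce a CIPHER-style "message": a sequence of expected embeddings
    that another debater can append to its own inputs_embeds."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    inputs_embeds = model.get_input_embeddings()(ids)             # (1, seq, d)
    message = []
    for _ in range(max_new_tokens):
        expected = cipher_step(inputs_embeds)                     # (1, d)
        message.append(expected)
        # Feed the expected embedding back in, bypassing discrete tokens.
        inputs_embeds = torch.cat([inputs_embeds, expected.unsqueeze(1)], dim=1)
    return torch.cat(message, dim=0)                              # (new_tokens, d)
```

Because the message lives in embedding space rather than text, the receiving debater generally needs to share the sender's vocabulary and embedding space, which is why this style of communication targets open-source LLMs whose embeddings are accessible.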
Related papers
- Bridging the Language Gap: Enhancing Multilingual Prompt-Based Code Generation in LLMs via Zero-Shot Cross-Lingual Transfer [5.355430735475281]
This paper investigates the complexities of multilingual prompt-based code generation.
Our evaluations reveal significant disparities in code quality for non-English prompts.
We propose a zero-shot cross-lingual approach using a neural projection technique.
arXiv Detail & Related papers (2024-08-19T05:11:46Z) - Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by an LSP (LLM-based Symbolic Program) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z) - MindMerger: Efficient Boosting LLM Reasoning in non-English Languages [26.334092384176518]
Reasoning capabilities are crucial for Large Language Models (LLMs).
We propose MindMerger, which merges LLMs with the external language understanding capabilities from multilingual models.
MindMerger consistently outperforms all baselines, especially in low-resource languages.
arXiv Detail & Related papers (2024-05-27T17:41:54Z) - Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication [79.79948834910579]
Natural language (NL) has long been the predominant format for human cognition and communication.
In this work, we challenge the default use of NL by exploring the utility of non-NL formats in different contexts.
arXiv Detail & Related papers (2024-02-28T16:07:54Z) - Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models [117.20416338476856]
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora.
We propose a novel detection method, language activation probability entropy (LAPE), to identify language-specific neurons within LLMs; a minimal sketch of the score appears after this list.
Our findings indicate that LLMs' proficiency in processing a particular language is predominantly due to a small subset of neurons.
arXiv Detail & Related papers (2024-02-26T09:36:05Z) - Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be explained to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z) - If LLM Is the Wizard, Then Code Is the Wand: A Survey on How Code
Empowers Large Language Models to Serve as Intelligent Agents [81.60906807941188]
Large language models (LLMs) are trained on a combination of natural language and formal language (code).
Code translates high-level goals into executable steps, featuring standard syntax, logical consistency, abstraction, and modularity.
arXiv Detail & Related papers (2024-01-01T16:51:20Z) - The Quo Vadis of the Relationship between Language and Large Language
Models [3.10770247120758]
The successes of Large Language Models (LLMs) have encouraged their adoption as scientific models of language.
We identify the most important theoretical and empirical risks brought about by the adoption of scientific models that lack transparency.
We conclude that, at their current stage of development, LLMs hardly offer any explanations for language.
arXiv Detail & Related papers (2023-10-17T10:54:24Z)
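As a rough illustration of the LAPE score mentioned in the entry above: a minimal sketch, assuming activation probabilities per neuron and per language have already been estimated; the estimation procedure and normalization are simplifications of the cited paper, and `lape` is a hypothetical helper name.

```python
import numpy as np

def lape(activation_probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """activation_probs[i, l] is the empirical probability that neuron i
    activates (e.g., its post-nonlinearity output exceeds zero) on text in
    language l. Normalize across languages and take the entropy: low entropy
    means a neuron's activation concentrates on few languages, marking it
    as language-specific."""
    p = activation_probs / (activation_probs.sum(axis=1, keepdims=True) + eps)
    return -(p * np.log(p + eps)).sum(axis=1)

# A neuron firing almost exclusively for one language scores far lower
# than one firing uniformly across languages.
probs = np.array([[0.70, 0.01, 0.02],   # language-specific
                  [0.30, 0.28, 0.31]])  # language-agnostic
print(lape(probs))  # ~[0.20, 1.10]
```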