Inverse Language Modeling towards Robust and Grounded LLMs
- URL: http://arxiv.org/abs/2510.01929v1
- Date: Thu, 02 Oct 2025 11:47:18 GMT
- Title: Inverse Language Modeling towards Robust and Grounded LLMs
- Authors: Davide Gabrielli, Simone Sestito, Iacopo Masi
- Abstract summary: We propose Inverse Language Modeling (ILM), a unified framework that improves the robustness of LLMs to input perturbations. ILM transforms LLMs from static generators into analyzable and robust systems. ILM can lay the foundation for next-generation LLMs that are not only robust and grounded but also fundamentally more controllable.
- Score: 3.3072037841206345
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current landscape of defensive mechanisms for LLMs is fragmented and underdeveloped, unlike prior work on classifiers. To further promote adversarial robustness in LLMs, we propose Inverse Language Modeling (ILM), a unified framework that simultaneously 1) improves the robustness of LLMs to input perturbations and 2) enables native grounding by inverting model outputs to identify potentially toxic or unsafe input triggers. ILM transforms LLMs from static generators into analyzable and robust systems, potentially aiding red teaming. ILM can lay the foundation for next-generation LLMs that are not only robust and grounded but also fundamentally more controllable and trustworthy. The code is publicly available at github.com/davegabe/pag-llm.
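The abstract does not spell out how the inversion is performed; as a rough, hypothetical illustration of what inverting model outputs back to input triggers can look like in practice, the sketch below optimizes a soft prompt so that a small causal LM assigns high likelihood to an observed output, then projects the result onto the nearest vocabulary tokens. The model choice (gpt2), trigger length, and optimizer settings are all assumptions, not details from the paper.

```python
# Hypothetical inversion baseline, NOT the paper's ILM algorithm: optimize a
# soft prompt so the model assigns high likelihood to an observed output,
# then read off the nearest vocabulary tokens as a trigger candidate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")
model.eval()

output_text = "I will not answer that."            # observed generation
out_ids = tok(output_text, return_tensors="pt").input_ids

emb = model.get_input_embeddings()                 # vocabulary embedding matrix
n_trigger = 8                                      # assumed length of the unknown input
soft = torch.randn(1, n_trigger, emb.embedding_dim, requires_grad=True)
opt = torch.optim.Adam([soft], lr=0.1)

for step in range(200):
    out_emb = emb(out_ids)                         # embed the observed output
    inputs = torch.cat([soft, out_emb], dim=1)     # [trigger | output]
    logits = model(inputs_embeds=inputs).logits
    # each output token is predicted from the position preceding it
    pred = logits[:, n_trigger - 1:-1, :]
    loss = torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.size(-1)), out_ids.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# project each soft vector onto its nearest token embedding
dists = torch.cdist(soft.detach()[0], emb.weight)  # (n_trigger, vocab)
trigger_ids = dists.argmin(dim=-1)
print("recovered trigger candidate:", tok.decode(trigger_ids))
```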
Related papers
- DLM-Scope: Mechanistic Interpretability of Diffusion Language Models via Sparse Autoencoders [73.18745837755758]
We present DLM-Scope, the first SAE-based interpretability framework for diffusion language models. We show that trained Top-K SAEs can faithfully extract interpretable features. We also show strong potential for applying SAEs to DLM-related tasks and algorithms.
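For readers unfamiliar with the Top-K SAE mentioned above, a minimal sketch follows: an overcomplete linear encoder whose activations are truncated to the k largest entries before a linear decoder reconstructs the input activation. The layer sizes and the plain reconstruction loss are generic choices, not details taken from DLM-Scope.

```python
# Minimal Top-K sparse autoencoder; sizes and loss are generic choices.
import torch
import torch.nn as nn

class TopKSAE(nn.Module):
    def __init__(self, d_model: int, d_dict: int, k: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)
        self.dec = nn.Linear(d_dict, d_model, bias=False)
        self.k = k

    def forward(self, x):                        # x: (batch, d_model) activations
        acts = torch.relu(self.enc(x))           # candidate feature activations
        topv, topi = acts.topk(self.k, dim=-1)   # keep only the k largest
        sparse = torch.zeros_like(acts).scatter(-1, topi, topv)
        recon = self.dec(sparse)                 # reconstruct the activation
        return recon, sparse

sae = TopKSAE(d_model=768, d_dict=16384, k=32)
x = torch.randn(4, 768)                          # stand-in for DLM activations
recon, feats = sae(x)
loss = ((recon - x) ** 2).mean()                 # reconstruction objective
```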
arXiv Detail & Related papers (2026-02-05T16:41:25Z)
- From Insight to Exploit: Leveraging LLM Collaboration for Adaptive Adversarial Text Generation [3.75886080255807]
We introduce two innovative attack frameworks designed to generate dynamic and adaptive adversarial examples. We produce subtle and natural-looking adversarial inputs that preserve semantic similarity to the original text. Our attacks evolve with the advancements in LLMs and demonstrate strong transferability across models unknown to the attacker.
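As a purely hypothetical outline of the kind of loop such attack frameworks run, the sketch below has a proposer suggest rewrites, keeps only candidates that stay semantically close to the original (measured with a sentence-embedding cosine), and reports any candidate that flips the victim's prediction. The propose_rewrites and victim_predict helpers are stand-ins, not the paper's components.

```python
# Hypothetical adaptive-attack loop: propose rewrites, filter by semantic
# similarity, and keep candidates that change the victim's prediction.
from sentence_transformers import SentenceTransformer, util

sim_model = SentenceTransformer("all-MiniLM-L6-v2")

def propose_rewrites(text: str) -> list[str]:
    """Stand-in for an attacker LLM that suggests paraphrases."""
    return [text.replace("good", "decent"), text.replace("good", "fine")]

def victim_predict(text: str) -> int:
    """Stand-in for the target classifier (1 = positive)."""
    return 1 if "good" in text else 0

original = "the movie was good"
orig_emb = sim_model.encode(original, convert_to_tensor=True)

for cand in propose_rewrites(original):
    cand_emb = sim_model.encode(cand, convert_to_tensor=True)
    if util.cos_sim(orig_emb, cand_emb).item() < 0.85:
        continue                                 # drifted too far semantically
    if victim_predict(cand) != victim_predict(original):
        print("adversarial example found:", cand)
```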
arXiv Detail & Related papers (2025-11-05T02:27:56Z)
- LLM-Lasso: A Robust Framework for Domain-Informed Feature Selection and Regularization [59.75242204923353]
We introduce LLM-Lasso, a framework that leverages large language models (LLMs) to guide feature selection in Lasso regression. LLMs generate penalty factors for each feature, which are converted into weights for the Lasso penalty using a simple, tunable model. Features identified as more relevant by the LLM receive lower penalties, increasing their likelihood of being retained in the final model.
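The weighted-Lasso step described above can be reproduced with the standard column-rescaling trick, since scikit-learn's Lasso has no native per-feature penalty. A minimal sketch, with made-up penalty factors standing in for the LLM's output:

```python
# Weighted Lasso via column rescaling: dividing column j by w_j and then
# dividing the fitted coefficient by w_j solves the per-feature-penalty
# problem with an ordinary Lasso solver. Penalty values are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=100)

# Hypothetical LLM output: lower penalty = judged more relevant.
penalty = np.array([0.2, 1.0, 1.0, 1.5])

X_scaled = X / penalty                  # rescale columns by 1 / weight
model = Lasso(alpha=0.1).fit(X_scaled, y)
coef = model.coef_ / penalty            # map back to the original feature scale
print("selected features:", np.flatnonzero(coef))
```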
arXiv Detail & Related papers (2025-02-15T02:55:22Z)
- Transducer-Llama: Integrating LLMs into Streamable Transducer-based Speech Recognition [26.79555533538622]
This paper proposes a novel model architecture, Transducer-Llama, that integrates large language models (LLMs) into a Factorized Transducer (FT) model. The proposed streaming Transducer-Llama approach gave a 17% relative WER reduction (WERR) over a strong FT baseline and a 32% WERR over an RNN-T baseline.
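The factorized-transducer integration is more involved than the abstract conveys; as a generic point of reference only, a shallow-fusion step that mixes ASR and LLM token log-probabilities looks like the following (lm_weight and both logit tensors are placeholders, not the paper's design):

```python
# Generic shallow fusion for illustration only; Transducer-Llama integrates
# the LLM more deeply than this simple per-token rescoring.
import torch

def fuse_scores(asr_logits, llm_logits, lm_weight=0.3):
    """Combine per-token ASR and LLM log-probabilities."""
    asr_logp = torch.log_softmax(asr_logits, dim=-1)
    llm_logp = torch.log_softmax(llm_logits, dim=-1)
    return asr_logp + lm_weight * llm_logp       # rescored token distribution

vocab = 1000
fused = fuse_scores(torch.randn(vocab), torch.randn(vocab))
next_token = fused.argmax().item()
```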
arXiv Detail & Related papers (2024-12-21T03:35:49Z)
- Uncertainty-Aware Hybrid Inference with On-Device Small and Remote Large Language Models [49.48313161005423]
A hybrid language model (HLM) architecture integrates a small language model (SLM) operating on a mobile device with a large language model (LLM) hosted at the base station (BS) of a wireless network. The HLM token generation process follows the speculative inference principle: the SLM's vocabulary distribution is uploaded to the LLM, which either accepts or rejects it, with rejected tokens being resampled by the LLM. We propose a novel HLM structure coined Uncertainty-aware opportunistic HLM (U-HLM), wherein the SLM locally measures its output uncertainty and skips both uplink transmissions and LLM operations for tokens that are likely to be accepted.
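A toy version of the uncertainty-gated step described above, with an entropy threshold and stub models standing in for the paper's actual design:

```python
# Uncertainty-gated hybrid decoding sketch; the threshold rule and both
# model stubs are assumptions, not U-HLM's exact mechanism.
import torch

def slm_distribution(context):
    return torch.softmax(torch.randn(1000), dim=-1)    # stand-in SLM output

def llm_verify(context, token):
    return True                                        # stand-in accept/reject

def llm_resample(context):
    return torch.randint(0, 1000, (1,)).item()         # stand-in LLM resampling

def generate_token(context, entropy_threshold=2.0):
    probs = slm_distribution(context)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum()
    draft = int(probs.argmax())
    if entropy < entropy_threshold:
        return draft          # confident: skip both the uplink and the LLM
    if llm_verify(context, draft):
        return draft          # uncertain: upload and let the LLM verify
    return llm_resample(context)  # rejected drafts are resampled by the LLM
```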
arXiv Detail & Related papers (2024-12-17T09:08:18Z)
- DALD: Improving Logits-based Detector without Logits from Black-box LLMs [56.234109491884126]
Large Language Models (LLMs) have revolutionized text generation, producing outputs that closely mimic human writing.
We present Distribution-Aligned LLMs Detection (DALD), an innovative framework that redefines the state-of-the-art performance in black-box text detection.
DALD is designed to align the surrogate model's distribution with that of unknown target LLMs, ensuring enhanced detection capability and resilience against rapid model iterations.
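One plausible way to realize this alignment (a simplification, not DALD's exact procedure) is to fine-tune a small surrogate on text sampled from the black-box target with a plain LM loss, then score candidate texts by surrogate likelihood:

```python
# Distribution-alignment sketch: adapt a surrogate to the target's outputs,
# then use surrogate log-likelihood as a detection score. Simplified
# relative to DALD's actual scoring.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
surrogate = AutoModelForCausalLM.from_pretrained("gpt2")
opt = torch.optim.AdamW(surrogate.parameters(), lr=1e-5)

target_samples = ["text collected from the target LLM's public outputs ..."]

for text in target_samples:                      # alignment fine-tuning
    ids = tok(text, return_tensors="pt").input_ids
    loss = surrogate(ids, labels=ids).loss       # standard causal LM loss
    opt.zero_grad(); loss.backward(); opt.step()

def detection_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        nll = surrogate(ids, labels=ids).loss.item()
    return -nll        # higher likelihood under surrogate -> more LLM-like
```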
arXiv Detail & Related papers (2024-06-07T19:38:05Z)
- SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks [9.958467179573237]
Large language models (LLMs) have proven to be highly effective across various natural language processing tasks. Existing methods often struggle to achieve substantial end-to-end LLM inference speedup. We introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks.
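An illustrative redundancy check in this spirit scores each transformer block by how little it changes the hidden state on calibration text and deletes the most redundant one; the cosine criterion and in-place deletion below are our simplifications, not SLEB's exact procedure.

```python
# Block-redundancy sketch: blocks whose output barely changes the hidden
# state are candidates for removal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("calibration text for measuring block redundancy",
          return_tensors="pt").input_ids
hidden = model(ids, output_hidden_states=True).hidden_states  # per-layer states

scores = []
for i in range(len(hidden) - 1):
    h_in, h_out = hidden[i], hidden[i + 1]       # input/output of block i
    cos = torch.nn.functional.cosine_similarity(
        h_in.flatten(1), h_out.flatten(1)).mean()
    scores.append(cos.item())                    # high similarity = redundant

redundant = max(range(len(scores)), key=scores.__getitem__)
del model.transformer.h[redundant]               # drop the most redundant block
model.config.n_layer -= 1
```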
arXiv Detail & Related papers (2024-02-14T09:01:13Z)
- LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks [18.068035947969044]
There is considerable confusion about the role of Large Language Models (LLMs) in planning and reasoning tasks.
We argue that auto-regressive LLMs cannot, by themselves, do planning or self-verification.
We present a vision of LLM-Modulo Frameworks that combine the strengths of LLMs with external model-based verifiers.
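The division of labor argued for above reduces to a generate-and-verify loop; a bare-bones sketch with placeholder helpers (the verifier, e.g. a VAL-style plan checker, stays external and sound):

```python
# Generate-and-verify loop in the LLM-Modulo spirit: the LLM only proposes
# candidate plans; an external verifier accepts or returns critiques.
def llm_propose(task: str, feedback: str | None) -> str:
    """Stand-in for an LLM call that drafts a candidate plan."""
    return "pick up block A; place A on B"

def model_based_verify(plan: str) -> tuple[bool, str]:
    """Stand-in for a sound external verifier."""
    return True, ""

def llm_modulo(task: str, max_rounds: int = 10) -> str | None:
    feedback = None
    for _ in range(max_rounds):
        plan = llm_propose(task, feedback)
        ok, feedback = model_based_verify(plan)
        if ok:
            return plan           # the verifier, not the LLM, certifies it
    return None                   # no verified plan within the budget
```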
arXiv Detail & Related papers (2024-02-02T14:43:18Z)
- Knowledge Fusion of Large Language Models [73.28202188100646]
This paper introduces the notion of knowledge fusion for large language models (LLMs).
We externalize their collective knowledge and unique strengths, thereby elevating the capabilities of the target model beyond those of any individual source LLM.
Our findings confirm that the fusion of LLMs can improve the performance of the target model across a range of capabilities such as reasoning, commonsense, and code generation.
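Schematically, such fusion can be cast as continual training of the target model toward a fused distribution over source-LLM predictions. The sketch below uses a simple average as the fusion rule and omits the vocabulary alignment across source models that makes this hard in practice; both are our simplifications.

```python
# Knowledge-fusion loss sketch: KL from the averaged source distribution
# to the target model's distribution. The averaging rule is illustrative.
import torch
import torch.nn.functional as F

def fusion_loss(target_logits, source_logits_list, T=1.0):
    """KL divergence from the fused (averaged) source distribution."""
    src_probs = torch.stack(
        [F.softmax(s / T, dim=-1) for s in source_logits_list]).mean(0)
    tgt_logp = F.log_softmax(target_logits / T, dim=-1)
    return F.kl_div(tgt_logp, src_probs, reduction="batchmean")

vocab = 32000
tgt = torch.randn(4, vocab, requires_grad=True)        # target model logits
srcs = [torch.randn(4, vocab) for _ in range(3)]       # three source LLMs
loss = fusion_loss(tgt, srcs)
loss.backward()
```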
arXiv Detail & Related papers (2024-01-19T05:02:46Z)
- Red Teaming Language Model Detectors with Language Models [114.36392560711022]
Large language models (LLMs) present significant safety and ethical risks if exploited by malicious users.
Recent works have proposed algorithms to detect LLM-generated text and protect LLMs.
We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation.
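A minimal version of the first strategy, swapping words for WordNet synonyms until a (placeholder) detector's score drops:

```python
# Synonym-substitution attack sketch; detector_score is a placeholder for
# an actual LLM-text detector.
import nltk
from nltk.corpus import wordnet

nltk.download("wordnet", quiet=True)

def detector_score(text: str) -> float:
    """Stand-in detector (higher = more likely machine-generated)."""
    return 0.9

def synonym_attack(text: str, threshold: float = 0.5) -> str:
    words = text.split()
    for i, w in enumerate(words):
        for syn in wordnet.synsets(w):
            for lemma in syn.lemma_names():
                cand = " ".join(
                    words[:i] + [lemma.replace("_", " ")] + words[i + 1:])
                if detector_score(cand) < threshold:
                    return cand        # detector evaded with one swap
    return text                        # attack failed

print(synonym_attack("the model generated this response"))
```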
arXiv Detail & Related papers (2023-05-31T10:08:37Z)