Physical Layer Deception based on Semantic Distortion
- URL: http://arxiv.org/abs/2510.15063v2
- Date: Mon, 20 Oct 2025 06:12:05 GMT
- Title: Physical Layer Deception based on Semantic Distortion
- Authors: Wenwen Chen, Bin Han, Yao Zhu, Anke Schmeink, Giuseppe Caire, Hans D. Schotten
- Abstract summary: Physical layer deception (PLD) is a framework that integrates physical layer security (PLS) with deception techniques. We extend this framework to a semantic communication model and conduct a theoretical analysis using semantic distortion as the performance metric.
- Score: 58.38604209714828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical layer deception (PLD) is a framework we previously introduced that integrates physical layer security (PLS) with deception techniques, enabling proactive countermeasures against eavesdropping rather than relying solely on passive defense. We extend this framework to a semantic communication model and conduct a theoretical analysis using semantic distortion as the performance metric. In this work, we further investigate the receiver's selection of decryption strategies and the transmitter's optimization of encryption strategies. By anticipating the decryption strategy likely to be employed by the legitimate receiver and eavesdropper, the transmitter can optimize resource allocation and encryption parameters, thereby maximizing the semantic distortion at the eavesdropper while maintaining a low level of semantic distortion for the legitimate receiver. We present a rigorous analysis of the resulting optimization problem, propose an efficient optimization algorithm, and derive closed-form optimal solutions for multiple scenarios. Finally, we corroborate the theoretical findings with numerical simulations, which also confirm the practicality of the proposed algorithm.
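As a rough illustration of the transmitter-side problem the abstract describes, the Python sketch below maximizes a toy eavesdropper-distortion model over a power-split factor and an encryption-strength parameter, subject to a distortion cap for the legitimate receiver. The distortion model, the parameters `alpha`, `enc`, and `d_max`, and the grid-search solver are all illustrative assumptions, not the paper's semantic-distortion expressions or its closed-form solutions.

```python
# Minimal sketch of the transmitter-side optimization: choose resource
# allocation and encryption parameters that maximize the eavesdropper's
# semantic distortion while capping the legitimate receiver's distortion.
# All quantities below are illustrative placeholders, not the paper's model.

import numpy as np

def semantic_distortion(snr_db: float, enc: float, has_key: bool) -> float:
    """Toy distortion in [0, 1]: falls with SNR; without the key,
    stronger encryption pushes distortion toward 1."""
    channel_term = 1.0 / (1.0 + 10.0 ** (snr_db / 10.0))
    crypto_term = 0.0 if has_key else enc
    return min(1.0, channel_term + crypto_term)

def optimize(snr_bob_db: float, snr_eve_db: float, d_max: float = 0.1):
    """Grid search: maximize Eve's distortion s.t. Bob's stays <= d_max."""
    best = None
    for alpha in np.linspace(0.1, 1.0, 91):      # fraction of power on data
        for enc in np.linspace(0.0, 1.0, 101):   # encryption strength
            gain_db = 10.0 * np.log10(alpha)
            d_bob = semantic_distortion(snr_bob_db + gain_db, enc, has_key=True)
            d_eve = semantic_distortion(snr_eve_db + gain_db, enc, has_key=False)
            if d_bob <= d_max and (best is None or d_eve > best[0]):
                best = (d_eve, alpha, enc, d_bob)
    return best

d_eve, alpha, enc, d_bob = optimize(snr_bob_db=15.0, snr_eve_db=5.0)
print(f"alpha={alpha:.2f}, enc={enc:.2f}, D_bob={d_bob:.3f}, D_eve={d_eve:.3f}")
```

In the paper's setting the transmitter additionally anticipates which decryption strategy each receiver is likely to adopt; the exhaustive grid search above merely stands in for the efficient algorithm and closed-form solutions derived there.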
Related papers
- Multi-Objective Reward and Preference Optimization: Theory and Algorithms [3.316593788543852]
This thesis develops theoretical frameworks and algorithms that advance constrained reinforcement learning (RL) across control, preference learning, and alignment of large language models. Its algorithms, ACPO, e-COP, warmPref-PS, PSPL, and MOPO, together unify RL across average-cost, episodic, and preference-driven paradigms, delivering theoretical advances and practical tools for safe and aligned decision-making.
arXiv Detail & Related papers (2025-12-11T12:51:21Z)
- Deep Unfolding: Recent Developments, Theory, and Design Guidelines [99.63555420898554]
This article provides a tutorial-style overview of deep unfolding, a framework that transforms optimization algorithms into structured, trainable ML architectures. We review the foundations of optimization for inference and for learning, introduce four representative design paradigms for deep unfolding, and discuss the distinctive training schemes that arise from their iterative nature.
arXiv Detail & Related papers (2025-12-03T13:16:35Z)
- Latent Chain-of-Thought for Visual Reasoning [53.541579327424046]
Chain-of-thought (CoT) reasoning is critical for improving the interpretability and reliability of Large Vision-Language Models (LVLMs). We reformulate reasoning in LVLMs as posterior inference and propose a scalable training algorithm based on amortized variational inference. We empirically demonstrate that the proposed method enhances state-of-the-art LVLMs on seven reasoning benchmarks.
arXiv Detail & Related papers (2025-10-27T23:10:06Z)
- Advancing Practical Homomorphic Encryption for Federated Learning: Theoretical Guarantees and Efficiency Optimizations [1.8245990984484237]
Federated Learning (FL) enables collaborative model training while preserving data privacy by keeping raw data locally stored on client devices. Recent studies reveal that sharing model gradients creates vulnerability to Model Inversion Attacks. This paper presents a framework for theoretical analysis of the underlying principles of selective encryption as a defense against model inversion attacks.
arXiv Detail & Related papers (2025-09-24T18:43:52Z)
- Evaluating Selective Encryption Against Gradient Inversion Attacks [15.000605214632243]
Gradient inversion attacks pose significant privacy threats to distributed training frameworks such as federated learning. This paper systematically evaluates selective encryption methods with different significance metrics against state-of-the-art attacks.
arXiv Detail & Related papers (2025-08-06T07:31:43Z)
- CovertAuth: Joint Covert Communication and Authentication in MmWave Systems [23.84881074442097]
Beam alignment (BA) is a crucial process in millimeter-wave (mmWave) communications, yet it is particularly vulnerable to eavesdropping and identity impersonation attacks. This paper proposes a novel secure framework named CovertAuth to enhance the security of the BA phase against such attacks.
arXiv Detail & Related papers (2025-07-11T09:19:25Z)
- On Symmetric Losses for Robust Policy Optimization with Noisy Preferences [55.8615920580824]
This work focuses on reward modeling, a core component in reinforcement learning from human feedback. We propose a principled framework for robust policy optimization under noisy preferences and prove that symmetric losses enable successful policy optimization even under noisy labels (see the sketch after this entry).
arXiv Detail & Related papers (2025-05-30T15:30:43Z)
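The entry above claims that symmetric losses make policy optimization robust to noisy preference labels. The snippet below illustrates the symmetry condition commonly used in noisy-label learning, l(z) + l(-z) = constant, which the sigmoid loss satisfies and the logistic loss does not; treating this as the paper's exact definition of symmetry is an assumption.

```python
# Illustration of the symmetric-loss condition behind noise-robust
# reward modeling: a loss l is "symmetric" if l(z) + l(-z) is constant,
# which makes a preference objective invariant (up to a constant) to
# uniformly flipped labels. Placeholder example, not the paper's code.

import math

def sigmoid_loss(z: float) -> float:   # symmetric: l(z) + l(-z) = 1
    return 1.0 / (1.0 + math.exp(z))

def logistic_loss(z: float) -> float:  # NOT symmetric
    return math.log(1.0 + math.exp(-z))

for z in (-2.0, 0.5, 3.0):
    s = sigmoid_loss(z) + sigmoid_loss(-z)
    g = logistic_loss(z) + logistic_loss(-z)
    print(f"z={z:+.1f}  sigmoid sum={s:.3f}  logistic sum={g:.3f}")
```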
- A Theoretical Perspective for Speculative Decoding Algorithm [60.79447486066416]
One effective way to accelerate inference is Speculative Decoding, which employs a small model to sample a sequence of draft tokens and a large model to validate them (see the sketch after this entry).
This paper addresses the missing theoretical understanding by conceptualizing the decoding problem via a Markov chain abstraction and studying its key properties, output quality and inference acceleration, from a theoretical perspective.
arXiv Detail & Related papers (2024-10-30T01:53:04Z)
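As a loose illustration of the draft-and-verify loop the entry above describes (not the paper's Markov-chain analysis), the toy sketch below has a cheap `draft_model` propose k tokens and a `target_model` keep the longest agreeing prefix. Both models and the greedy acceptance rule are placeholders; real implementations use a rejection-sampling acceptance test over token probabilities.

```python
# Toy speculative decoding: a small model drafts k tokens, a large model
# verifies them and keeps the longest agreeing prefix. Both "models" are
# stand-in callables over a toy vocabulary (placeholders, by assumption).

import random

VOCAB = list(range(10))

def draft_model(prefix):
    # cheap, inaccurate proposal (placeholder)
    return random.choice(VOCAB)

def target_model(prefix):
    # expensive "ground-truth" next token (deterministic placeholder)
    return (sum(prefix) + len(prefix)) % 10

def speculative_decode(prompt, max_len=20, k=4):
    seq = list(prompt)
    while len(seq) < max_len:
        # 1) small model drafts k tokens autoregressively
        drafts = []
        for _ in range(k):
            drafts.append(draft_model(seq + drafts))
        # 2) large model verifies; keep the agreeing prefix
        accepted = []
        for tok in drafts:
            target_tok = target_model(seq + accepted)
            if tok == target_tok:
                accepted.append(tok)
            else:
                # 3) first mismatch: substitute the target's token, stop
                accepted.append(target_tok)
                break
        seq.extend(accepted)  # at least one token accepted per round
    return seq[:max_len]

print(speculative_decode([1, 2, 3]))
```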
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint [56.74058752955209]
This paper studies the alignment process of generative models with Reinforcement Learning from Human Feedback (RLHF).
We first identify the primary challenge of existing popular methods, such as offline PPO and offline DPO, as a lack of strategic exploration of the environment.
We propose efficient algorithms with finite-sample theoretical guarantees.
arXiv Detail & Related papers (2023-12-18T18:58:42Z)