Watermarking and Anomaly Detection in Machine Learning Models for LORA RF Fingerprinting
- URL: http://arxiv.org/abs/2509.15170v2
- Date: Fri, 19 Sep 2025 01:16:33 GMT
- Title: Watermarking and Anomaly Detection in Machine Learning Models for LORA RF Fingerprinting
- Authors: Aarushi Mahajan, Wayne Burleson
- Abstract summary: We present a stronger RFFI system combining watermarking for ownership proof and anomaly detection for spotting suspicious inputs. On the LoRa dataset, our system achieves 94.6% accuracy, 98% watermark success, and 0.94 AUROC, offering verifiable, tamper-resistant authentication.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radio frequency fingerprint identification (RFFI) distinguishes wireless devices by the small variations in their analog circuits, avoiding heavy cryptographic authentication. While deep learning on spectrograms improves accuracy, models remain vulnerable to copying, tampering, and evasion. We present a stronger RFFI system combining watermarking for ownership proof and anomaly detection for spotting suspicious inputs. Using a ResNet-34 on log-Mel spectrograms, we embed three watermarks: a simple trigger, an adversarially trained trigger robust to noise and filtering, and a hidden gradient/weight signature. A convolutional Variational Autoencoder (VAE) with Kullback-Leibler (KL) warm-up and free bits flags off-distribution queries. On the LoRa dataset, our system achieves 94.6% accuracy, 98% watermark success, and 0.94 AUROC, offering verifiable, tamper-resistant authentication.
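The anomaly-detection component is the most transferable piece of this recipe. Below is a minimal PyTorch sketch of a VAE training objective with KL warm-up and free bits, assuming a Gaussian latent and a mean-squared-error reconstruction term over log-Mel spectrograms; the constants FREE_BITS and WARMUP_EPOCHS are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

FREE_BITS = 0.5      # minimum nats per latent dimension (assumed value)
WARMUP_EPOCHS = 10   # epochs over which the KL weight ramps 0 -> 1 (assumed value)

def vae_loss(x, x_recon, mu, logvar, epoch):
    """Reconstruction + warmed-up, free-bits KL for a Gaussian-latent VAE."""
    # Reconstruction term over the log-Mel spectrogram "pixels".
    recon = F.mse_loss(x_recon, x, reduction="mean")

    # Per-dimension KL between q(z|x) = N(mu, sigma^2) and the prior N(0, I):
    # KL_i = -0.5 * (1 + logvar_i - mu_i^2 - exp(logvar_i))
    kl_per_dim = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp())  # (batch, latent_dim)

    # Free bits: each dimension contributes at least FREE_BITS nats,
    # so the optimizer cannot collapse latents entirely to the prior.
    kl = torch.clamp(kl_per_dim.mean(dim=0), min=FREE_BITS).sum()

    # KL warm-up: linearly anneal the KL weight so reconstruction dominates early.
    beta = min(1.0, epoch / WARMUP_EPOCHS)
    return recon + beta * kl
```

At inference time, one would score each query by its reconstruction error (or full loss) and flag inputs above a threshold calibrated on in-distribution data; an AUROC such as the reported 0.94 is typically computed from such a thresholded score.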
Related papers
- OSI: One-step Inversion Excels in Extracting Diffusion Watermarks [56.210696479553945]
We propose One-step Inversion (OSI), a significantly faster and more accurate method for extracting Gaussian Shading style watermarks. OSI reformulates watermark extraction as a learnable sign classification problem, which eliminates the need for precise regression of the initial noise. Our OSI substantially outperforms the multi-step diffusion inversion method: it is 20x faster, achieves higher extraction accuracy, and doubles the watermark payload capacity.
arXiv Detail & Related papers (2026-02-10T07:43:16Z)
- SSCL-BW: Sample-Specific Clean-Label Backdoor Watermarking for Dataset Ownership Verification [8.045712223215542]
This paper proposes a sample-specific clean-label backdoor watermarking method (SSCL-BW). By training a U-Net-based watermarked sample generator, this method generates unique watermarks for each sample. Experiments on benchmark datasets demonstrate the effectiveness of the proposed method and its robustness against potential watermark removal attacks.
arXiv Detail & Related papers (2025-10-30T12:13:53Z)
- An Ensemble Framework for Unbiased Language Model Watermarking [60.99969104552168]
We propose ENS, a novel ensemble framework that enhances the detectability and robustness of unbiased watermarks. ENS sequentially composes multiple independent watermark instances, each governed by a distinct key, to amplify the watermark signal. Empirical evaluations show that ENS substantially reduces the number of tokens needed for reliable detection and increases resistance to smoothing and paraphrasing attacks.
arXiv Detail & Related papers (2025-09-28T19:37:44Z)
- StableGuard: Towards Unified Copyright Protection and Tamper Localization in Latent Diffusion Models [55.05404953041403]
We propose a novel framework that seamlessly integrates a binary watermark into the diffusion generation process. We show that StableGuard consistently outperforms state-of-the-art methods in image fidelity, watermark verification, and tampering localization.
arXiv Detail & Related papers (2025-09-22T16:35:19Z)
- Semantic Watermarking Reinvented: Enhancing Robustness and Generation Quality with Fourier Integrity [31.666430190864947]
We propose a novel embedding method called Hermitian Symmetric Fourier Watermarking (SFW). SFW maintains frequency integrity by enforcing Hermitian symmetry. We introduce a center-aware embedding strategy that reduces the vulnerability of semantic watermarking to cropping attacks (a minimal sketch of the symmetry constraint appears after this list).
arXiv Detail & Related papers (2025-09-09T12:15:16Z)
- Robust Watermarks Leak: Channel-Aware Feature Extraction Enables Adversarial Watermark Manipulation [21.41643665626451]
We propose an attack framework that extracts leakage of watermark patterns using a pre-trained vision model. Unlike prior works requiring massive data or detector access, our method achieves both forgery and detection evasion with a single watermarked image. Our work exposes the robustness-stealthiness paradox: current "robust" watermarks sacrifice security for distortion resistance, providing insights for future watermark design.
arXiv Detail & Related papers (2025-02-10T12:55:08Z)
- CLUE-MARK: Watermarking Diffusion Models using CLWE [8.429227679450433]
We introduce CLUE-Mark, the first provably undetectable watermarking scheme for diffusion models. CLUE-Mark requires no changes to the model being watermarked, is computationally efficient, and is guaranteed to have no impact on model output quality. Uniquely, CLUE-Mark cannot be detected or removed by recent steganographic attacks.
arXiv Detail & Related papers (2024-11-18T10:03:01Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor for the input/output space of diffusion models, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- An Unforgeable Publicly Verifiable Watermark for Large Language Models [84.2805275589553]
Current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection.
We propose an unforgeable publicly verifiable watermark algorithm named UPV that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages.
arXiv Detail & Related papers (2023-07-30T13:43:27Z)
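As a companion to the SFW entry above, here is a minimal NumPy sketch of the Hermitian-symmetry constraint it relies on: projecting a complex watermark spectrum onto the Hermitian-symmetric subspace guarantees that the inverse FFT is purely real, so the watermark can live in pixel space without complex residue. The function name and random example spectrum are illustrative; the paper's actual embedding pipeline (including its center-aware strategy) is more involved.

```python
import numpy as np

def hermitian_symmetrize(W: np.ndarray) -> np.ndarray:
    """Project a 2-D complex spectrum onto the Hermitian-symmetric subspace,
    i.e. enforce W[k1, k2] == conj(W[(-k1) % N1, (-k2) % N2])."""
    # Index-reverse each axis: flip maps k -> N-1-k; rolling by 1 turns that
    # into k -> (-k) mod N, which is the conjugate-partner index.
    W_rev = np.roll(np.flip(W, axis=(0, 1)), shift=1, axis=(0, 1))
    return 0.5 * (W + np.conj(W_rev))

rng = np.random.default_rng(0)
# A random complex "watermark" spectrum (illustrative only).
W = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
W_sym = hermitian_symmetrize(W)

# A Hermitian-symmetric spectrum inverts to a purely real spatial pattern.
pattern = np.fft.ifft2(W_sym)
assert np.allclose(pattern.imag, 0)
```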