Switching multiplicative watermark design against covert attacks
- URL: http://arxiv.org/abs/2502.18948v1
- Date: Wed, 26 Feb 2025 08:56:56 GMT
- Title: Switching multiplicative watermark design against covert attacks
- Authors: Alexander J. Gallo, Sribalaji C. Anand, André M. H. Teixeira, Riccardo M. G. Ferrari
- Abstract summary: We propose an optimal design strategy to define switching filter parameters. A worst-case scenario of a matched covert attack is assumed. Our algorithm, given watermark filter parameters at some time instant, provides optimal next-step parameters.
- Score: 40.2428948628001
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Active techniques have been introduced to improve the detectability of cyber-attacks in cyber-physical systems (CPS). In this paper, we consider switching multiplicative watermarking and propose an optimal design strategy for the switching filter parameters. Optimality is evaluated using the so-called output-to-output gain of the closed-loop system, including supposed attack dynamics. A worst-case scenario of a matched covert attack is assumed, in which an attacker with full knowledge of the closed-loop system injects a stealthy attack of bounded energy. Given the watermark filter parameters at some time instant, our algorithm provides the optimal next-step parameters. An analysis of the algorithm demonstrates its features and shows that, by initializing certain parameters outside of the algorithm, the multiplicative watermarking parameters can be randomized. Simulations show that adopting our method for parameter design diminishes the attacker's impact on performance.
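To make the mechanism concrete, here is a minimal Python sketch of multiplicative watermarking with a parameter switch. It shows only the matched/mismatched residual effect, not the paper's optimal next-step design via the output-to-output gain; the first-order filter structure, coefficient values, and signal are illustrative assumptions.

```python
# Minimal sketch of switching multiplicative watermarking.
# Hypothetical first-order filters; the paper's optimal parameter
# selection via the output-to-output gain is NOT reproduced here.
import numpy as np

class FirstOrderFilter:
    """SISO filter y[k] = -d*y[k-1] + n0*u[k] + n1*u[k-1], invertible if n0 != 0."""
    def __init__(self, n0, n1, d):
        self.n0, self.n1, self.d = n0, n1, d
        self.u_prev = 0.0  # previous input
        self.y_prev = 0.0  # previous output

    def step(self, u):
        y = -self.d * self.y_prev + self.n0 * u + self.n1 * self.u_prev
        self.u_prev, self.y_prev = u, y
        return y

    def inverse(self):
        # Exact inverse: swap numerator and denominator polynomials.
        return FirstOrderFilter(1.0 / self.n0, self.d / self.n0, self.n1 / self.n0)

rng = np.random.default_rng(0)
y = rng.standard_normal(200)                    # stand-in for the plant output

# Matched watermarker/equalizer pair: the controller side recovers y exactly.
wm = FirstOrderFilter(1.0, 0.5, -0.3)           # hypothetical pre-switch parameters
eq = FirstOrderFilter(1.0, 0.5, -0.3).inverse()
matched = np.array([eq.step(wm.step(v)) for v in y])

# After a parameter switch, a removal filter matched only to the stale
# parameters no longer cancels the watermark, so a residual appears.
wm_new = FirstOrderFilter(1.3, -0.2, 0.4)       # hypothetical post-switch parameters
eq_old = FirstOrderFilter(1.0, 0.5, -0.3).inverse()
mismatched = np.array([eq_old.step(wm_new.step(v)) for v in y])

print(np.abs(matched - y).max())     # ~1e-16: watermark removed exactly
print(np.abs(mismatched - y).max())  # clearly nonzero: mismatch is detectable
```

Keeping both sides synchronized while randomizing the switch sequence is what makes the parameters hard to reconstruct, which the hybrid-design entry in the related papers below also exploits.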
Related papers
- Optimizing Adaptive Attacks against Content Watermarks for Language Models [5.798432964668272]
Large Language Models (LLMs) can be misused to spread online spam and misinformation.
Content watermarking deters misuse by hiding a message in model-generated outputs, enabling their detection using a secret watermarking key.
arXiv Detail & Related papers (2024-10-03T12:37:39Z)
- Towards Better Statistical Understanding of Watermarking LLMs [7.68488211412916]
In this paper, we study the problem of watermarking large language models (LLMs).
We consider the trade-off between model distortion and detection ability and formulate it as a constrained optimization problem based on the green-red list of Kirchenbauer et al.
We develop an online dual gradient ascent watermarking algorithm in light of this optimization formulation and prove its optimality between model distortion and detection ability.
arXiv Detail & Related papers (2024-03-19T01:57:09Z)
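For reference, the green-red list mentioned in this entry admits a simple detection test. Below is a toy Python sketch of keyed green-list detection in the style of Kirchenbauer et al.; the secret key, vocabulary size, and hash-based partition are illustrative assumptions, not this paper's exact construction.

```python
# Toy sketch of keyed green-list watermark detection (Kirchenbauer-style).
# Key, gamma, and the hash-based partition are illustrative choices.
import hashlib
import math

GAMMA = 0.5                      # fraction of vocabulary marked "green" per step
SECRET_KEY = b"watermark-key"    # hypothetical secret watermarking key

def is_green(prev_token: int, token: int) -> bool:
    """Keyed pseudorandom partition: a token is green iff a keyed hash of
    (prev_token, token) falls in the bottom GAMMA fraction of hash space."""
    h = hashlib.sha256(SECRET_KEY + prev_token.to_bytes(4, "big")
                       + token.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:8], "big") / 2**64 < GAMMA

def z_score(tokens: list[int]) -> float:
    """One-proportion z-test: unwatermarked text has ~GAMMA green tokens,
    so a large z indicates the generator boosted the green list."""
    T = len(tokens) - 1
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(T))
    return (greens - GAMMA * T) / math.sqrt(GAMMA * (1 - GAMMA) * T)
```

A detector would flag text whose z-score exceeds a threshold (e.g. 4); the trade-off this entry studies is how strongly generation can favor green tokens (distortion) versus how large that z-score becomes (detection ability).

- Hybrid Design of Multiplicative Watermarking for Defense Against Malicious Parameter Identification [46.27328641616778]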
We propose a hybrid multiplicative watermarking scheme, where the watermark parameters are periodically updated.
We show that the proposed approach makes it difficult for an eavesdropper to reconstruct the watermarking parameters.
arXiv Detail & Related papers (2023-09-05T16:56:53Z)
- Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits [55.03293214439741]
In contextual bandits, an agent sequentially takes actions from a time-dependent action set based on past experience.
We propose the first online continuous hyperparameter tuning framework for contextual bandits.
We show that it achieves sublinear regret in theory and performs consistently better than all existing methods on both synthetic and real datasets.
arXiv Detail & Related papers (2023-02-18T23:31:20Z)
- Learning Linearized Assignment Flows for Image Labeling [70.540936204654]
We introduce a novel algorithm for estimating optimal parameters of linearized assignment flows for image labeling.
We show how to efficiently evaluate this formula using a Krylov subspace and a low-rank approximation.
arXiv Detail & Related papers (2021-08-02T13:38:09Z)
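The Krylov-subspace step in this entry is an instance of a generic primitive: approximating the action of a matrix exponential without ever forming it. Below is a self-contained Python sketch of that primitive (the Arnoldi approximation of exp(A)b); the paper's specific formula and its low-rank approximation are not reproduced, and the matrix and vector are illustrative stand-ins.

```python
# Generic Arnoldi (Krylov) approximation of exp(A) @ b -- the kind of
# primitive used to evaluate linearized-flow formulas without forming exp(A).
import numpy as np
from scipy.linalg import expm

def expm_krylov(matvec, b, m=30):
    """Approximate exp(A) @ b in the m-dimensional Krylov space span{b, Ab, ...}."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = matvec(V[:, j])
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # Standard Krylov estimate: exp(A) b ~ beta * V_m exp(H_m) e_1
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / 20
b = rng.standard_normal(200)
print(np.linalg.norm(expm_krylov(lambda v: A @ v, b, m=40) - expm(A) @ b))  # small
```

The payoff is cost: only m matrix-vector products with A are needed, plus a dense exponential of the small m-by-m matrix H.

- Regularization Can Help Mitigate Poisoning Attacks... with the Right Hyperparameters [1.8570591025615453]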
Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance.
We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness.
We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem.
arXiv Detail & Related papers (2021-05-23T14:34:47Z)
- Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest techniques in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z)
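For context on the reformulation step in this entry: one standard way to rewrite a binary integer program exactly as a continuous problem is the l2-box identity below (notation ours; that this is the paper's exact technique is an assumption based on the abstract):

\[
\{0,1\}^n \;=\; [0,1]^n \,\cap\, \left\{\, b \in \mathbb{R}^n : \left\| b - \tfrac{1}{2}\mathbf{1} \right\|_2^2 = \tfrac{n}{4} \,\right\},
\]

so that \(\min_{b \in \{0,1\}^n} f(b)\) becomes \(\min_{b} f(b)\) subject to the two continuous constraints above, which admits continuous solvers such as ADMM. The identity holds because, for \(b \in [0,1]^n\), each term \((b_i - \tfrac{1}{2})^2 \le \tfrac{1}{4}\) with equality exactly when \(b_i \in \{0,1\}\).

- Adversarial Robustness by Design through Analog Computing and Synthetic Gradients [80.60080084042666]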
Inspired by an optical co-processor, we propose a new defense mechanism against adversarial attacks.
In the white-box setting, our defense works by obfuscating the parameters of the random projection.
We find that the combination of a random projection and binarization in the optical system also improves robustness against various types of black-box attacks.
arXiv Detail & Related papers (2021-01-06T16:15:29Z)
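A purely numerical stand-in for the defense this entry describes (the paper realizes it with an optical co-processor): a secret random projection followed by binarization. The dimensions, seeding, and sign() binarization below are illustrative assumptions.

```python
# Toy sketch of a defense-by-design preprocessing layer: a secret random
# projection followed by binarization, loosely in the spirit of this entry.
import numpy as np

class RandomProjectionBinarize:
    def __init__(self, in_dim: int, out_dim: int, seed: int):
        # The projection matrix plays the role of the obfuscated parameters:
        # fixed, but kept secret from the (white-box) attacker.
        rng = np.random.default_rng(seed)
        self.R = rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # Binarization is non-differentiable, which blocks straightforward
        # gradient-based attacks; training downstream layers would rely on
        # synthetic/approximate gradients, as the entry's title suggests.
        return np.sign(self.R @ x)

layer = RandomProjectionBinarize(in_dim=784, out_dim=1024, seed=42)
features = layer(np.random.default_rng(1).standard_normal(784))
print(features.shape)  # (1024,), entries in {-1, 0, +1}
```

- Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation [3.3181276611945263]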
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance.
Optimal poisoning attacks, which can be formulated as bilevel problems, help to assess the robustness of learning algorithms in worst-case scenarios.
We show that formulating such attacks with constant regularization hyperparameters leads to an overly pessimistic view of the robustness of the algorithms.
arXiv Detail & Related papers (2020-02-28T19:46:10Z)
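For reference, a generic way to write such an optimal poisoning attack as a bilevel problem (notation ours, not the paper's): the attacker chooses poisoning points \(D_p\) from a feasible set \(\Phi\) to maximize loss on clean data, subject to the learner training on the poisoned set,

\[
\max_{D_p \in \Phi} \; \mathcal{L}\big(D_{\text{val}};\, \theta^\star\big)
\quad \text{s.t.} \quad
\theta^\star \in \arg\min_{\theta} \; \mathcal{L}\big(D_{\text{tr}} \cup D_p;\, \theta\big) + \lambda\, \Omega(\theta),
\]

where \(\Omega\) is a regularizer with hyperparameter \(\lambda\). Treating \(\lambda\) as a constant in this program is exactly what this entry (and its minimax follow-up above) identifies as the source of overly pessimistic robustness assessments.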
This list is automatically generated from the titles and abstracts of the papers on this site.