Watermarking Decision Tree Ensembles
- URL: http://arxiv.org/abs/2410.04570v1
- Date: Sun, 6 Oct 2024 17:56:13 GMT
- Title: Watermarking Decision Tree Ensembles
- Authors: Stefano Calzavara, Lorenzo Cazzaro, Donald Gera, Salvatore Orlando
- Abstract summary: We present the first watermarking scheme for decision tree ensembles, focusing in particular on random forest models.
We show excellent results in terms of accuracy and security against the most relevant threats.
- Score: 3.5621555706183896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Protecting the intellectual property of machine learning models is a hot topic and many watermarking schemes for deep neural networks have been proposed in the literature. Unfortunately, prior work largely neglected the investigation of watermarking techniques for other types of models, including decision tree ensembles, which are a state-of-the-art model for classification tasks on non-perceptual data. In this paper, we present the first watermarking scheme designed for decision tree ensembles, focusing in particular on random forest models. We discuss watermark creation and verification, presenting a thorough security analysis with respect to possible attacks. We finally perform an experimental evaluation of the proposed scheme, showing excellent results in terms of accuracy and security against the most relevant threats.
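For intuition, below is a minimal, hypothetical sketch of the create/verify workflow the abstract describes, using a generic keyed trigger-set approach on a scikit-learn random forest. It is not the paper's actual scheme; the function names, the key handling, and the 0.9 agreement threshold are all illustrative assumptions.

```python
# Hedged sketch: generic trigger-set watermarking of a random forest.
# NOT the scheme from the paper above; it only illustrates a
# creation/verification workflow. All names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_trigger_set(key: int, n_triggers: int, n_features: int, n_classes: int):
    """Derive a pseudorandom trigger set and secret labels from an owner key."""
    rng = np.random.default_rng(key)
    X_trig = rng.uniform(-1.0, 1.0, size=(n_triggers, n_features))
    y_trig = rng.integers(0, n_classes, size=n_triggers)
    return X_trig, y_trig

def embed_watermark(X, y, key: int, n_triggers: int = 32):
    """Fit a forest on the training data augmented with keyed triggers."""
    n_classes = int(np.max(y)) + 1
    X_trig, y_trig = make_trigger_set(key, n_triggers, X.shape[1], n_classes)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(np.vstack([X, X_trig]), np.concatenate([y, y_trig]))
    return model

def verify_watermark(model, key: int, n_features: int, n_classes: int,
                     n_triggers: int = 32, threshold: float = 0.9) -> bool:
    """Ownership claim succeeds if the model recalls the keyed labels."""
    X_trig, y_trig = make_trigger_set(key, n_triggers, n_features, n_classes)
    agreement = np.mean(model.predict(X_trig) == y_trig)
    return agreement >= threshold
```

Verification only needs prediction access, so in this sketch an ownership claim can be checked against a suspected stolen model in a black-box fashion.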
Related papers
- Watermarking Recommender Systems [52.207721219147814]
We introduce Autoregressive Out-of-distribution Watermarking (AOW), a novel technique tailored specifically for recommender systems.
Our approach entails selecting an initial item and querying it through the oracle model, followed by the selection of subsequent items with small prediction scores.
To assess the efficacy of the watermark, the model is tasked with predicting the subsequent item given a truncated watermark sequence.
arXiv Detail & Related papers (2024-07-17T06:51:24Z)
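As a rough illustration of the AOW procedure summarized in the entry above, this hedged sketch assumes a hypothetical `oracle` callable that returns one recommendation score per candidate item (higher means more likely to be recommended): generation autoregressively appends low-score, out-of-distribution items, and verification checks whether a suspect model, given a truncated prefix, ranks the true next watermark item first. The actual method's details differ.

```python
# Hedged sketch of an AOW-style watermark; `oracle` and `model` are
# hypothetical callables mapping an item sequence to per-item scores.
import numpy as np

def generate_watermark(oracle, n_items: int, length: int, seed: int = 0):
    """Build the watermark sequence by querying the oracle model."""
    rng = np.random.default_rng(seed)
    seq = [int(rng.integers(n_items))]                 # initial item
    for _ in range(length - 1):
        scores = np.asarray(oracle(seq), dtype=float)  # one score per item
        scores[seq] = np.inf                           # never repeat an item
        seq.append(int(np.argmin(scores)))             # small-score (OOD) item
    return seq

def verify_watermark(model, seq) -> float:
    """Fraction of positions where the suspect model, given the truncated
    prefix, ranks the true next watermark item first."""
    hits = sum(
        int(np.argmax(np.asarray(model(seq[:t]), dtype=float))) == seq[t]
        for t in range(1, len(seq))
    )
    return hits / (len(seq) - 1)
```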
- A Survey of Fragile Model Watermarking [14.517951900805317]
Model fragile watermarking has gradually emerged as a potent tool for detecting tampering.
This paper provides an overview of the relevant work in the field of model fragile watermarking since its inception.
arXiv Detail & Related papers (2024-06-07T10:23:25Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
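To make the branch-backdoor idea from the entry above concrete, here is a hypothetical black-box sketch, not the paper's protocol: a hidden branch answers key-derived trigger queries with an HMAC tag while every other input takes the normal path, so accuracy on regular inputs is unchanged (hence "performance-lossless"). The `wm:` prefix, the key, and all API names are invented for illustration.

```python
# Hedged sketch of a branch-style black-box watermark (hypothetical).
import hmac
import hashlib

SECRET_KEY = b"owner-secret"  # assumption: a key held only by the owner

def trigger_tag(payload: str) -> str:
    """Deterministic tag derived from the secret key and the payload."""
    return hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()[:8]

def watermarked_predict(model, prompt: str) -> str:
    """Hidden branch: keyed prompts get a key-dependent response; every
    other input follows the normal path, leaving behavior unchanged."""
    if prompt.startswith("wm:"):
        return trigger_tag(prompt[3:])
    return model(prompt)

def verify(query_api, payload: str = "ownership-challenge") -> bool:
    """Black-box check: a suspected stolen model reproduces the keyed tag."""
    return query_api("wm:" + payload) == trigger_tag(payload)
```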
- SoK: How Robust is Image Classification Deep Neural Network Watermarking? (Extended Version) [16.708069984516964]
We evaluate whether recently proposed watermarking schemes that claim robustness are robust against a large set of removal attacks.
None of the surveyed watermarking schemes is robust in practice.
We show that watermarking schemes need to be evaluated against a more extensive set of removal attacks with a more realistic adversary model.
arXiv Detail & Related papers (2021-08-11T00:23:33Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Evaluating the Robustness of Trigger Set-Based Watermarks Embedded in Deep Neural Networks [22.614495877481144]
State-of-the-art trigger set-based watermarking algorithms do not achieve their designed goal of proving ownership.
We propose novel adaptive attacks that harness the adversary's knowledge of the underlying watermarking algorithm of a target model.
arXiv Detail & Related papers (2021-06-18T14:23:55Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
Embedding the reversible watermark affects classification performance by less than 0.5%.
At the same time, the model's integrity can be verified through the reversible watermark.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
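In the spirit of the entry above, and not the authors' algorithm, this toy sketch makes a watermark reversible by writing key-selected watermark bits into the mantissa LSBs of a flat float32 weight vector while keeping the displaced bits: the keyed LSBs support an integrity check, and writing the displaced bits back restores the weights exactly. The one-ulp per-weight change is consistent with a negligible accuracy impact.

```python
# Toy reversible watermark over a flat float32 weight vector (hypothetical).
import numpy as np

def _indices(n_weights: int, n_bits: int, key: int) -> np.ndarray:
    """Key-selected weight positions (same key -> same positions)."""
    return np.random.default_rng(key).choice(n_weights, size=n_bits, replace=False)

def embed(weights: np.ndarray, bits: np.ndarray, key: int):
    """Write watermark bits into mantissa LSBs; return the displaced bits."""
    w = weights.astype(np.float32).copy()
    raw = w.view(np.uint32)
    idx = _indices(w.size, bits.size, key)
    displaced = (raw[idx] & 1).copy()                         # original LSBs
    raw[idx] = (raw[idx] & ~np.uint32(1)) | bits.astype(np.uint32)
    return w, displaced

def verify(weights: np.ndarray, bits: np.ndarray, key: int) -> bool:
    """Integrity check: the keyed LSBs must still equal the watermark."""
    idx = _indices(weights.size, bits.size, key)
    return bool(np.all((weights.view(np.uint32)[idx] & 1) == bits))

def restore(weights: np.ndarray, displaced: np.ndarray, key: int) -> np.ndarray:
    """Undo the embedding exactly by writing the displaced LSBs back."""
    w = weights.copy()
    raw = w.view(np.uint32)
    idx = _indices(w.size, displaced.size, key)
    raw[idx] = (raw[idx] & ~np.uint32(1)) | displaced
    return w
```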
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, an attacker with full knowledge of it can easily steal it by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- A Systematic Review on Model Watermarking for Neural Networks [1.2691047660244335]
This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for machine learning models.
It introduces a unified threat model to allow structured reasoning on and comparison of the effectiveness of watermarking methods.
It systematizes desired security requirements and attacks against ML model watermarking.
arXiv Detail & Related papers (2020-09-25T12:03:02Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)