Enhanced Outsourced and Secure Inference for Tall Sparse Decision Trees
- URL: http://arxiv.org/abs/2505.02224v1
- Date: Sun, 04 May 2025 19:15:27 GMT
- Title: Enhanced Outsourced and Secure Inference for Tall Sparse Decision Trees
- Authors: Andrew Quijano, Spyros T. Halkidis, Kevin Gallagher, Kemal Akkaya, Nikolaos Samaras
- Abstract summary: A decision tree is an easy-to-understand tool that has been widely used for classification tasks. Data owners are keen to reduce risk by outsourcing their model, but want security guarantees that third parties cannot steal their decision tree model. We propose a new decision tree inference protocol in which the model is shared and evaluated among multiple entities.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A decision tree is an easy-to-understand tool that has been widely used for classification tasks. On the one hand, due to privacy concerns, there has been an urgent need to create privacy-preserving classifiers that conceal the user's input from the classifier. On the other hand, with the rise of cloud computing, data owners are keen to reduce risk by outsourcing their model, but want security guarantees that third parties cannot steal their decision tree model. To address these issues, Joye and Salehi introduced a theoretical protocol that efficiently evaluates decision trees while maintaining privacy by leveraging their comparison protocol, which is resistant to timing attacks. However, their approach was not only inefficient but also prone to side-channel attacks. Therefore, in this paper, we propose a new decision tree inference protocol in which the model is shared and evaluated among multiple entities. We partition our decision tree model by level, with each level stored in a new entity we refer to as a "level-site." Utilizing this approach, we achieve an improved average run time for classifier evaluation on a non-complete tree, while also providing strong mitigations against side-channel attacks.
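The level-site idea in the abstract can be sketched as follows. This is a hypothetical illustration only, not the paper's actual protocol (which additionally uses encrypted comparisons): each depth of the tree is held by a separate entity, classification hops from one level-site to the next, and evaluation returns early at a shallow leaf, which is where the average run-time gain for a non-complete tree comes from. All names (`Node`, `LevelSite`, `partition_by_level`) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Node:
    feature: int = -1       # index of the feature compared at this node
    threshold: float = 0.0  # comparison threshold
    label: int = -1         # class label if this node is a leaf, else -1

class LevelSite:
    """Holds only the nodes at one depth of the tree."""
    def __init__(self, depth, nodes):
        self.depth = depth
        self.nodes = nodes  # nodes at this depth, left-to-right

    def step(self, index, x):
        """Evaluate the node at `index`; return (next_index, label_or_None)."""
        node = self.nodes[index]
        if node.label >= 0:          # leaf: classification finishes early
            return index, node.label
        go_right = x[node.feature] > node.threshold
        return 2 * index + int(go_right), None  # child index at next level

def partition_by_level(levels_spec):
    """Split a level-by-level tree description into one site per depth."""
    return [LevelSite(depth, nodes) for depth, nodes in enumerate(levels_spec)]

def classify(sites, x):
    """Hop across level-sites until a leaf label is produced."""
    index = 0
    for site in sites:
        index, label = site.step(index, x)
        if label is not None:
            return label
    raise ValueError("malformed tree: no leaf reached")
```

A depth-one example: the root compares feature 0 against 0.5 and routes to one of two leaves, so `classify(sites, [0.2])` finishes at the left leaf.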
Related papers
- FedGA-Tree: Federated Decision Tree using Genetic Algorithm [11.955062839855334]
We introduce a Genetic Algorithm to facilitate the construction of personalized decision trees. Our method surpasses decision trees trained solely on local data and a benchmark algorithm.
arXiv Detail & Related papers (2025-06-09T19:39:22Z) - EvalTree: Profiling Language Model Weaknesses via Hierarchical Capability Trees [69.96560215277285]
We develop a weakness profiling method for language model evaluations. EvalTree identifies weaknesses more precisely and comprehensively. We show how EvalTree exposes flaws in Arena's human-based evaluation practice.
arXiv Detail & Related papers (2025-03-11T21:12:48Z) - Learning Deep Tree-based Retriever for Efficient Recommendation: Theory and Method [76.31185707649227]
We propose a Deep Tree-based Retriever (DTR) for efficient recommendation.
DTR frames the training task as a softmax-based multi-class classification over tree nodes at the same level.
To mitigate the suboptimality induced by the labeling of non-leaf nodes, we propose a rectification method for the loss function.
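The per-level training objective described for DTR can be sketched as plain softmax cross-entropy over the nodes of one tree level. This is a minimal illustration under stated assumptions: each node at the level has a scalar score, and the target class is the node lying on the path to the positive item. `level_loss` is a hypothetical name, not DTR's API.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of node scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def level_loss(scores, target_node):
    """Cross-entropy treating one tree level as a multi-class problem:
    the classes are the nodes at that depth, and the positive class is
    the node on the path to the preferred leaf item."""
    probs = softmax(scores)
    return -math.log(probs[target_node])
```

With two equally scored nodes the loss is ln 2, the uniform-probability baseline, and it shrinks as the on-path node's score grows relative to its siblings.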
arXiv Detail & Related papers (2024-08-21T05:09:53Z) - Reinforcement Learning for Node Selection in Branch-and-Bound [52.2648997215667]
Current state-of-the-art selectors utilize either hand-crafted ensembles that automatically switch between naive sub-node selectors, or learned node selectors that rely on individual node data.
We propose a novel simulation technique that uses reinforcement learning (RL) while considering the entire tree state, rather than just isolated nodes.
arXiv Detail & Related papers (2023-09-29T19:55:56Z) - Differentially-Private Decision Trees and Provable Robustness to Data
Poisoning [8.649768969060647]
Decision trees are interpretable models that are well-suited to non-linear learning problems.
Current state-of-the-art algorithms for training differentially-private decision trees sacrifice much utility for a small privacy benefit.
We propose PrivaTree based on private histograms that chooses good splits while consuming a small privacy budget.
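A minimal sketch of the private-histogram ingredient, not PrivaTree itself: each training value contributes to exactly one bin, so perturbing every bin count with Laplace(1/ε) noise yields ε-differential privacy for the counts a split decision is based on. Both function names are assumptions.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF transform."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_histogram(values, bins, epsilon):
    """Noisy per-bin counts. Each value falls into one bin (sensitivity 1),
    so Laplace(1/epsilon) noise per count gives epsilon-DP."""
    counts = [0] * len(bins)
    for v in values:
        for i, (lo, hi) in enumerate(bins):
            if lo <= v < hi:
                counts[i] += 1
                break
    return [c + laplace_noise(1.0 / epsilon) for c in counts]
```

A split-selection routine can then score candidate thresholds against these noisy counts instead of the raw data, spending only a small privacy budget per split.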
arXiv Detail & Related papers (2023-05-24T17:56:18Z) - Tree-Values: selective inference for regression trees [0.0]
A naive approach to inference that does not account for the fact that the tree was estimated from the data will not achieve standard guarantees.
We propose a selective inference framework for conducting inference on a fitted CART tree.
arXiv Detail & Related papers (2021-06-15T00:25:11Z) - Fed-EINI: An Efficient and Interpretable Inference Framework for
Decision Tree Ensembles in Federated Learning [11.843365055516566]
Fed-EINI is an efficient and interpretable inference framework for federated decision tree models.
We propose to protect the decision path with an efficient additively homomorphic encryption method.
Experiments show that inference efficiency is improved by over 50% on average.
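The additively homomorphic property Fed-EINI relies on can be illustrated with a deliberately insecure toy scheme: two ciphertexts can be combined without the secret key, and decrypting the result yields the sum of the plaintexts, which is what allows a party to aggregate encrypted decision-path values. This class and its modulus are assumptions for illustration only; a real deployment would use a scheme such as Paillier.

```python
import random

class ToyAdditiveHE:
    """Toy stand-in for an additively homomorphic scheme:
    dec(add(enc(a), enc(b))) == a + b. NOT cryptographically secure."""
    def __init__(self, modulus=2**61 - 1):
        self.n = modulus
        self.key = random.randrange(1, self.n)  # secret key

    def enc(self, m):
        r = random.randrange(self.n)
        # ciphertext = (mask, message bound to the mask via the key)
        return (r, (m + r * self.key) % self.n)

    def add(self, c1, c2):
        # homomorphic addition: components add, masks add too
        return ((c1[0] + c2[0]) % self.n, (c1[1] + c2[1]) % self.n)

    def dec(self, c):
        # strip the accumulated mask with the secret key
        return (c[1] - c[0] * self.key) % self.n
```

Note that `add` never touches `self.key`, mirroring how an aggregating server in the federated setting combines encrypted values it cannot read.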
arXiv Detail & Related papers (2021-05-20T06:40:05Z) - Growing Deep Forests Efficiently with Soft Routing and Learned
Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z) - Rectified Decision Trees: Exploring the Landscape of Interpretable and
Effective Machine Learning [66.01622034708319]
We propose a knowledge distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and the ending condition of the standard decision trees, which allows training with soft labels.
We then train the ReDT based on the soft label distilled from a well-trained teacher model through a novel jackknife-based method.
arXiv Detail & Related papers (2020-08-21T10:45:25Z) - Privacy Preserving Vertical Federated Learning for Tree-based Models [30.808567035503994]
Federated learning enables multiple organizations to jointly train a model without revealing their private data to each other.
We propose Pivot, a novel solution for privacy preserving vertical decision tree training and prediction.
arXiv Detail & Related papers (2020-08-14T02:32:36Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z) - A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.